The rise of async AI programming

98 points by mooreds | 82 comments | 9/11/2025, 12:20:19 PM | braintrust.dev ↗

Comments (82)

sirwhinesalot · 3h ago
Before I read the article I thought this meant programming with "async".

Just call it Agent-based programming or somesuch, otherwise it's really confusing!

ankrgyl · 3h ago
(Author here) Haha that is a great point. I was trying to come up with a term that described my personal workflow and specifically felt different than vibe coding (because it's geared towards how professional programmers can use agents). Very open to alternative terms!
didibus · 2h ago
I want to understand the distinction you're making against vibe coding.

In vibe coding, the developer specifies only functional requirements (what the software must do) and non-functional requirements (the qualities it must have, like performance, scalability, or security). The AI delivers a complete implementation, and the developer reviews it solely against those behaviors and qualities. Any corrections are given again only in terms of requirements, never code, and the cycle repeats until the software aligns.

But you're trying to coin a term for the following?

In ??? coding, the developer specifies code changes that must be made, such as adding a feature, modifying an existing function, or removing unused logic. The AI delivers the complete set of changes to the codebase, and the developer reviews it at the code level. Any corrections are given again as updates to the code, and the cycle repeats until the code aligns.

Did I understand it right?

If so, I've most often seen the latter called AI pair-programming or AI-assisted coding. And I'd agree with the other commenters: please DO NOT call it async programming (even with "AI" added, "async AI programming" is too confusing).

ankrgyl · 2h ago
> In ??? coding, the developer specifies code changes that must be made, such as adding a feature, modifying an existing function, or removing unused logic. The AI delivers the complete set of changes to the codebase, and the developer reviews it at the code level. Any corrections are given again as updates to the code, and the cycle repeats until the code aligns.

Yes

> If so, I've most seen the latter be called AI pair-programming or AI-assisted coding.

I specifically considered both terms and am not a fan of either:

* "pair-programming" involves two people paying attention while writing code; in this case, I'm not looking at the screen while the AI system writes code.
* "AI-assisted coding" is generally anchored to copilots/IDE-style agents where people are actively writing code and an AI assists them.

I totally hear you on the risk of conflating this with classic async. However, I think the right term needs to clearly indicate that this happens without actively watching the AI write code. Unfortunately, I think other terms like "background" may also be confusing for similar reasons.

krapp · 1h ago
I think it's still vibe coding. In practice any AI-driven process where you tell the AI what you want and it writes the code is considered "vibe coding."
drob518 · 2h ago
Exactly. I think the traditional meaning of “asynchronous programming” was coined first. So, let’s stick with that.
SCUSKU · 3h ago
Same! I was hoping this would have some insights into pitfalls or the like with JavaScript promises or Python async, but alas, no such luck.
grandiego · 3h ago
Same here. I read the author's braintrust.dev as "brain - Rust - Dev", so I was expecting a discussion of Rust async development.
jmull · 4h ago
This vision of AI programming is DOA.

The first step is "define the problem clearly".

This would be incredibly useful for software development, period. A 10x factor, all by itself. Yet it happens infrequently, or, at best, in significantly limited ways.

The main problem, I think, is that it assumes you already know what you want at the start, and, implicitly, that what you want actually makes some real sense.

I guess maybe the context is cranking out REST endpoints or some other constrained detail of a larger thing. Then, sure.

thefourthchime · 4h ago
I disagree with being detailed. Many times I want the AI to think of things itself; half the time it comes up with something I wouldn't have thought of that I like.

The thing I would add is to retry the prompt rather than telling it to fix a mistake. Rewind and change the prompt to tell it not to do what it did.

athrowaway3z · 2h ago
I agree there is a lot of value in having it do what it considers the obvious thing.

It is almost by definition what the average programmer would expect to find, so it's valuable as such.

But the moment you want to do something original, you need to keep high-level high-quality documentation somewhere.

Graphon1 · 1h ago
> is that it assumes you already know what you want at the start, and, implicitly, that what you want actually makes some real sense.

My experience is different. I find that AI-powered coding agents drop the barriers to experimentation drastically, so that... yes, if I don't know what I want, I can go try things very easily and learn. Exploration just got soooo much cheaper. Now, that may be a different interaction than what is described in this blog post. The exploration may be a precursor to what is happening in the post. But once I'm done exploring, I can define the problem and ask for solutions.

If it's DOA, you'd better tell everyone who is currently doing this that they're not really doing it.

ankrgyl · 3h ago
(Author here) I can certainly appreciate having an alternate perspective, but I think it's unfair to say it's DOA. I've personally used this workflow for the last 6 months and shipped a lot of features into our product, from the lowest levels of infra all the way to UI code. I definitely think there is a lot to improve. But it works, at least for me :)
dec0dedab0de · 3h ago
Figuring out what you want is the hard part about programming. I think that's where AI augmentation will really shine, because it lowers the time between iterations and experiments.

That said, this article is basically describing being a product owner.

datadrivenangel · 7h ago
I did this early in my career as a product owner with an offshore team in India... Write feedback/specs, send them over at end of day US time. Have a fresh build ready for review by start of business.

Worked amazingly when it worked. Really stretched things out when the devs misunderstood us or got confused by our lack of clarity and we had to find time for a call... Also eventually there got to be some gnarly technical debt and things really slowed down.

mcny · 6h ago
I think it can only work if the product owner literally owns the product, as in has FULL decision-making power over what goes in or doesn't. It doesn't work when a product manager is a glorified in-between guy, dictating the wishes of the CEO through a game of telephone from management.
jt2190 · 5h ago
You’ll have to be more specific about what you mean by “product owner”, because that’s a very nebulous job title. For example, how technical is this product owner? Are they assumed to “just know” that they’re asking for an overly complex, expensive technical solution?
balamatom · 4h ago
I'd guess they'd be assumed to "just know" to trust the developers working with them on that?
datadrivenangel · 2h ago
Agreed. A glorified go-between is rarely going to succeed at delivering something good.
ch4s3 · 5h ago
> it can only work if the product owner literally owns the product as in has FULL decision making power

This seems like a fairly rare situation in my experience.

swiftcoder · 5h ago
It's not uncommon in the sort of solo-dev bootstrapped startup that is going wild for AI coding right now though.
lelanthran · 5h ago
This works until you get to the point that your actual programming skills atrophy due to lack of use.

Face it, the only reason you can do a decent review is because of years of hard won lessons, not because you have years of reading code without writing any.

MisterTea · 5h ago
Coding interview of the future: "Show us how you would prompt this binary sort."
joenot443 · 49m ago
My understanding is it's already here [1]

[1] https://news.ycombinator.com/item?id=44723289

Graphon1 · 1h ago
not a joke.

Also, the future you are referring to is... like... 6 weeks from now.

CuriouslyC · 2h ago
You're right, reviews aren't the way forward. We don't do code reviews on compiler output (unless we're writing a compiler). The way forward is strong static and analytic guardrails plus stochastic error correction (multiple solutions proposed with an LLM as a judge before implementation; multiple code-review agents with different personas prompted to be strict/adversarial but not to nit-pick), with robust test suites that have also been through multiple passes of audits and red-teaming by agents. You should rarely have to look at the code, it should be a significant escalation event, like when you need to coordinate with Apple due to Xcode bugs.
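
To make the "multiple solutions with an LLM as a judge" step concrete, here is a rough sketch; every function in it is a hypothetical stand-in for whatever agent client you actually use, not a real API:

  // Hypothetical stand-ins for an agent/LLM client; not a real library API.
  async function proposeSolution(spec: string): Promise<string> {
    // In practice: ask an agent to draft a candidate plan or diff for the spec.
    return `candidate plan for: ${spec}`;
  }

  async function judgeSolutions(spec: string, candidates: string[]): Promise<number> {
    // In practice: ask a separate "judge" model to rank the candidates against
    // the spec and return the index of the winner; here we trivially pick the first.
    return candidates.length > 0 ? 0 : -1;
  }

  // Propose several independent candidates for the same spec, then let the judge
  // pick one before any implementation (or human review) happens.
  async function pickPlan(spec: string, n = 3): Promise<string> {
    const candidates = await Promise.all(
      Array.from({ length: n }, () => proposeSolution(spec))
    );
    const winner = await judgeSolutions(spec, candidates);
    return candidates[winner];
  }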
lelanthran · 1h ago
> You should rarely have to look at the code, it should be a significant escalation event

This is the bit I am having problems with: if you are rarely looking at the code, you will never have the skills to actually debug that significant escalation event.

dingnuts · 1h ago
good fucking luck writing adequate test suites for qualitative business logic

if it's even possible it will be more work than writing the code manually

ankrgyl · 3h ago
(Author here) Personally, I try to combat this by synchronously working on 1 task and asynchronously working on others. I am not sure it's perfect, but it definitely helps me avoid atrophy.
gobdovan · 4h ago
For generative skills I agree, but for me the real change is in how I read and debug code. After reading so much AI-generated code with subtle mistakes, I can spot errors much quicker even in human-written code. And when I can't, that usually means the code needs a refactor.

I'd compare it to gym work: some exercises work best until they don't, and then you switch to a less effective exercise to get you out of your plateau. Same with code and AI. If you're already good (because of years of hard won lessons), it can push you that extra bit.

But yeah, default to the better exercise and just code yourself, at least on the project's core.

suddenlybananas · 4h ago
What do you mean you can spot errors much quicker?
gobdovan · 4h ago
I mean that I've read so much AI-generated code with subtle mistakes that my brain jumps straight to the likely failure point, and I've noticed it generalizes. Even when I look at an OSS project I'm not super familiar with, I can usually spot the bugs faster than before. I'll edit my initial response for clarity.
N2yhWNXQN3k9 · 3h ago
> subtle mistakes that my brain jumps straight to the likely failure ... I can usually spot the bugs faster than before

doubt intensifies

gobdovan · 3h ago
Doubt accepted. A spot-the-bug challenge on real OSS/prod code would be fun.
sevensor · 3h ago
What the article describes is:

1. Learn how to describe what you want in an unambiguous dialect of natural language.

2. Submit it to a program that takes a long time to transform that input into a computer language.

3. Review the output for errors.

Sounds like we’ve reinvented compilers. Except they’re really bad and they take forever. Most people don’t have to review the assembly language / bytecode output of their compilers, because we expect them to actually work.

ako · 2h ago
No, it sounds like the work of a product manager; you're just working with agents rather than with developers.
sarchertech · 1h ago
Product managers never get that right though. In practice it always falls back on the developer to understand the problem and fill in the missing pieces.

In many cases it falls on the developer to talk the PM out of the bad idea and then into a better solution. Agents aren’t equipped to do any of that.

For any non-trivial problem, a PM with the same problem and 2 different dev teams will produce drastically different solutions 99 times out of 100.

ako · 1h ago
Agree with the last bit; dev teams are even more non-deterministic than LLMs.
skydhash · 1h ago
Is it the work of a product manager? I believe the latter only specifies features and business rules (and maybe some other specifications like UX and performance), but no technical details at all. That would be like an architect reviewing the brand of nails used in a house's framing.
Graphon1 · 1h ago
Tech Lead, not PM. (in my experience)
NooneAtAll3 · 3h ago
so... normal team lead -> manager pipeline?
lenerdenator · 5h ago
Agreed.

> Hand it off. Delegate the implementation to an AI agent, a teammate, or even your future self with comprehensive notes.

The AI agent just feels like a way to create tech debt on a massive scale while not being able to identify it as tech debt.

CuriouslyC · 2h ago
I have a static analysis and refactoring tool that does wonders at identifying duplication and poor architecture patterns and providing a roadmap for agents to fix the issues. It's like magic: just point it at your codebase, then tell the agent to grind away at the output (making sure to come up for air and rerun tests regularly), and it'll go for hours.
lenerdenator · 39m ago
What's it called?
CuriouslyC · 18m ago
Official release is tomorrow; I'm just doing final release prep cleanup and getting the product page in order. The crate/brew formula is in decent shape, just missing some features I'll be shipping soon. https://github.com/sibyllinesoft/valknut if you want to jump the line, though.
segfaultex · 4h ago
This is what a lot of business leaders miss.

The benefit you might gain from LLMs depends on being able to discern good output from bad.

Once that's lost, the output of these tools becomes a complete gamble.

bpt3 · 4h ago
The business leaders already can't discern good from bad.
UncleOxidant · 5h ago
That's not the kind of async programming I was expecting.
stahorn · 4h ago
I was ready for a deep dive into things like asyncio in Python: where it came from and what problems it promised to solve!
ankrgyl · 3h ago
(Author here)

Hi everyone, thanks for the spirited debate! I think there are some great points in the discussion so far. Some thoughts:

* "This didn't work for offshoring, why will it work all of a sudden?" I think there are good lessons to draw from offshoring around problem definition and what-not but the key difference is the iteration speed. Agents allow you to review stuff much faster, and you can look at smaller pieces of incremental work.

* "I thought this would be about async primitives in python, etc" Whoops sorry, I can understand how the name is confusing/ambiguous! The use of "async" here refers to the fact that I'm not synchronously looking at an IDE while writing code all the time.

* "You can only do this because you used to handwrite code". I don't think this workflow is a replacement for handwriting code. I still love doing that. This workflow just helps me do more.

datadrivenangel · 2h ago
I do think that AI will work well compared to the low end of offshoring, where, to get good results, you need people who could do the work themselves to be tightly involved. AI will give you slop code faster and cheaper, and that is sometimes enough.

The question is how it compares to the medium level of offshoring. Near term, I think that at a comparable cost (hundreds of dollars per week), it'll give faster results at an acceptable tradeoff in quality for most uses. I don't think most companies want to spend thousands of dollars a month on developer tools per developer though... even though they often do.

ankrgyl · 1h ago
It's just a different workflow IMO. AI is effectively real-time, whereas offshoring, no matter the quality, is something you have to do in batches.
Graphon1 · 1h ago
I don't know why we need a term like "Async AI programming." This is literally what you would do if you were a Tech Lead directing a team of other developers. You define what you want and hand it to one of your devs.

This is just being a TL. The agent is an assistant or a member of the team. I don't know why we need to call it "Async AI programming", unless we want to shy away from or obscure the idea that the agent is actually performing the job a human used to perform.

skeezyboy · 5h ago
He says "define the problem" like it's the easy part. If we had full specs, life would be a lot easier.
arethuza · 5h ago
They don't, really: once the spec gets detailed enough, it becomes so large and unwieldy that nobody with any actual power reads the thing.

An executive at a large company once told me about a project where a spec had been written and reviewed by all relevant stakeholders: "That may be what I asked for, but it's not what I want."

ge96 · 2h ago
Idk if I'm a luddite or what

I actually like writing code. It does get tedious, I get that, when you're making yet another component. But I don't feel joy when you just will a bunch of code into existence with words. Typing it out is like actively participating in development. Which, yeah, people use libraries/frameworks/boilerplate anyway.

My dream is to not be employed in software and do it for fun (or work on something I actually care about)

Even if I wrote some piece of crap, it is my piece of crap

krapp · 1h ago
Some people will probably call you a luddite, but don't listen to them. There's nothing wrong with taking joy in the craft, with learning and exploring and creating. That's what hacker culture used to be about.

Unfortunately, you won't be able to get a job in software with anything but AI skills, since humans no longer write software in the industry. People will look at you the way they used to look at anyone who wrote their own HTML or JavaScript without frameworks and TypeScript, like you must drive your car to work with your feet.

ge96 · 43m ago
It was funny: I was handed this project to work on and skimmed the README. There were a lot of readmes in the code, like how to use pipenv or whatever basic stuff... at first I was like "nice job with the docs", but then I later realized it was a vibe-coded project and I felt like I was owned. It's funny.

Also funny how much time was wasted, since it had random code in it that was never removed (non-working old code next to the current working code). That's not to blame on the AI part, but yeah.

I have a job now in the industry. It's funny, I work with AI, e.g. AWS Bedrock/Knowledgebases/Agents... RAG/LLM AI.

The AI I want to work with is vision/ML (robotics), but I don't have the background for that (I do it as a hobby instead).

I'm feeling the effect of vibe coding now: the 2nd leader on our team was only recently a developer, but uses ChatGPT/Windsurf to code for him, which enables him to work on random topics like OpenSearch one day and Airflow the next... idk, I get that I'm the one being left behind by not doing it too, but I also want to really learn/understand something. You can do that with an AI-assisted approach, but yeah... idk, I don't want to, that's what I'm saying. I will get out eventually once I've saved enough money.

My learning process for a while has been watching YT crash courses/reading the docs/finding articles...

The project I mentioned above literally had a prompt in the repo: "Write me an event-driven app with this architecture..."

The 2nd leader I mentioned above is a code-at-work/not-at-home type of person, which is fine, but yeah. I'm not that person; I like to actually code/make stuff outside of work. It's not just about getting a task done/shipping some code for me. But I guess that's what a business is: churn something out.

Idk, there's some validity there, isn't there... "I've been a developer for 10 years, then a guy with 2 years comes in vibe coding stuff" and he's the leader. But I'm past it; I don't do office politics anymore. I've got a six-fig job, no need to climb, I'm coasting. Debt is really the only problem I have.

septune · 20m ago
I was reading this article while my laptop was auto-vscoding.
lolive · 3h ago
I am teaching asynchronous programming in TypeScript to junior developers. And I find it really tricky to tell them that async and await do MAJOR magic behind their backs to make their code read like synchronous code.

And then I need to detail very precisely what "Promise.all()" (and "return") really mean in the context of async/await. Which is something that (I feel) could have been abstracted away when the async/await syntax was defined, making the full magic much more natural.

swid · 3h ago
Async/await themselves are not that much magic, really; they're a bit of syntactic sugar over promise chains. Of course, understanding promises is its own bag.

ChatGPT explanation: https://chatgpt.com/share/68c30421-be3c-8011-8431-8f3385a654...
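
For what it's worth, a minimal sketch of that rough equivalence (both functions and the /api/users endpoint are made-up illustrations; the real spec adds microtask-scheduling details, but the desugaring idea holds):

  // An async function...
  async function getUserName(id: number): Promise<string> {
    const res = await fetch(`/api/users/${id}`); // suspend until the promise settles
    const user = await res.json();
    return user.name; // the returned value gets wrapped in a resolved promise
  }

  // ...is roughly equivalent to an explicit promise chain:
  function getUserNameChained(id: number): Promise<string> {
    return fetch(`/api/users/${id}`)
      .then((res) => res.json())
      .then((user) => user.name); // each .then() re-wraps the value in a promise
  }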

lolive · 3h ago
During my interviews, maybe I should ask them to read and understand this:

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

prior to any dev they plan to do in JS/TS.

PS: 10 bucks that none of them would stay.

lolive · 3h ago
That reminds me of my Unix guru of the 90s: "man pages ARE easy to read".

[spoiler: "when you are already an expert in the tool detailed in it"]

lolive · 3h ago
To elaborate a bit: telling them that they should not do "aList.forEach(asyncMethod)" but should instead do "Promise.all(aList.map(asyncMethod))" is NOT very easy for them.
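
Concretely, the trap looks something like this (a minimal sketch; saveItem is a made-up placeholder for any async operation):

  // Placeholder for any async operation (DB write, API call, ...).
  async function saveItem(item: string): Promise<void> {
    await new Promise((resolve) => setTimeout(resolve, 100)); // simulate latency
    console.log(`saved ${item}`);
  }

  async function main() {
    const items = ["a", "b", "c"];

    // Trap: forEach ignores the promises returned by the async callback,
    // so nothing is awaited here and rejections are silently dropped.
    items.forEach(saveItem);

    // Correct: map collects the promises, Promise.all awaits them all,
    // and any rejection propagates to this await.
    await Promise.all(items.map(saveItem));
  }

  main();

(The forEach version also swallows rejections, which tends to be the part that bites them later.)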
uncletaco · 3h ago
Man are you going to be disappointed when you read the article.
lolive · 3h ago
Man, for the first time on HN, I am tempted to actually read the article.

Update: oh my god, I read the article. And feel completely cheated!!!!

Note for my future self: continue to read only the HN comments

hackingonempty · 4h ago
Sounds great in principle, but I have been trained to value individuals and interactions over processes and tools, and working software over comprehensive documentation.
antimoan · 4h ago
I think there is a confusion here between Coding and Programming. I think what is described here as "Async Programming" is just programming the way it should be, which is different from coding. This is what Leslie Lamport pointed out a while back [1] and recently [2]. According to him, programming has three stages:

  1- Define what task the program should perform
  2- Define how the program should do it
  3- Writing the code that does it.
Most SWEs skip straight to step 3 without giving steps 1 and 2 much thought, and implement their code iteratively. I think step 3 also includes testing, review, etc.

With AI, developers are forced to think about the functionality and the specs of their code in order to hand the job to the AI, and can no longer just jump to step 3. Delegating to other devs requires the same process: senior engineers usually create design docs and pass them to junior engineers.

IMO, automated verification and code reviews are already part of many developers' workflows, so it's nothing new.

I get the point of the article, though: there are new requirements for programming, and things are different in terms of how folks approach it. But I do not agree that the method is new or that it should be called "async"; it's the same method with brand-new tools.

  [1] https://www.youtube.com/watch?v=-4Yp3j_jk8Q
  [2] https://www.youtube.com/watch?v=uyLy7Fu4FB4
anyg · 3h ago
Off topic, but I assume the name braintrust comes from Creativity, Inc. Amazing book by Pixar co-founder Edwin Catmull.
dboreham · 5h ago
Those of us who worked in hardware, or who are old programmers, will find this familiar. Chip/board routing jobs that took days to complete. Product build/test jobs that took hours to run.

See also that movie with Johnny Depp where AI takes over the world.

kylereeve · 3h ago
When this bubble finally pops, someone is going to have to clean up all the nonsense AI code out there.
kordlessagain · 1h ago
Dream on!
conorbergin · 5h ago
Redefining commonly understood phrases to mean something else in your own little world makes you look ignorant.
titzer · 4h ago
Indeed! Why not just call it "asynchronous software development" or something similar? "asynchronous programming" is a bad choice, partly because it will be un-googleable.
keybored · 1h ago
The intent is more likely clickbait.

    What I do is I tell the computer to do something and wait until it is done
Not that catchy (even in fewer words).
keybored · 1h ago
Oh, async?

> This version of "async programming" is different from the classic definition. It's about how developers approach building software.

Oh async=you wait until it is done. How interesting.

suddenlybananas · 4h ago
This kind of workflow really doesn't appeal to me in the slightest. Maybe it works for some people, but it just seems to drain all the pleasure out of programming. For me, at least, solving the little problems is like solving satisfying little puzzles, which makes it easier to maintain motivation.
bpt3 · 4h ago
It takes me longer to thoroughly review code I didn't write, especially code written by a junior developer.

Why would I choose to slow myself down in the short term and allow my skills to atrophy in the long term (which will also slow me down)?

webstrand · 3h ago
I actually enjoy writing code... most of the time. I find myself turning to AI to write code I have an aversion to writing, not as a substitute for my own practice, but to get code that I would not have written in the first place. Like benchmarks, bash scripts, dashboards, unit tests, etc.

I can live without these things, but they're nice to have without expending the effort to figure out all the boilerplate necessary for solving very simple problems at their core. Sometimes AI can't get all the way to a solution, but usually it sets up enough of the boilerplate that only the fun part remains, and that's easy enough to do.

bpt3 · 1h ago
That sounds reasonable and similar to how I use it.

Managing a team of interns isn't fun, and I have no idea why someone who is a competent developer would choose to do that to themselves.

snozolli · 3h ago
> Effective async programming specs read like technical documentation

The thing I like least about software engineering will now become the primary task. It's a sad future for me, but maybe a great one for some different personality type.

lazyfanatic42 · 5h ago
Awful font.