Ask HN: With all the AI hype, how are software engineers feeling?

44 points by cpt100 | 70 comments | 8/11/2025, 4:20:30 AM
I'm just wondering what the morale is with AI doing 30-50% of your work? Is your company hiring more, or have they stopped hiring software engineers? Is the management team putting on more pressure to get more things done?

Comments (70)

chrisco255 · 1h ago
AI does 0% of my work and we are actively hiring. As someone mentioned on another AI thread, if AI is so good why aren't people just doing 15 PRs a day on open source projects like Node.js, React, Kubernetes, Linux, Ansible, etc?

AI is sometimes a productivity booster for a dev, sometimes not. And it's unpredictable when it will and won't be. It's not great at giving you confidence signals when you should be skeptical of its output.

In any sufficiently complex software project, much of the development is about domain knowledge, asking the right questions, balancing resources, guarding against risks, interfacing with a team to scope, vet, and iterate on features, analyzing customer feedback, thinking of new features, improving existing features, etc.

When AI is a productivity booster, it's great, but modern software is an evolving, organic product that requires a team to maintain, expand, and improve. As of yet, no AI can take the place of that.

beambot · 1h ago
You don't use any AI - drafting documentation, writing boilerplate code, transcribing meetings, simplifying team communications, searching product documentation, as an alternative to Google or StackOverflow, creating presentations, as a brainstorming partner? I would consider all of these "work".

If you say AI does 0% of your work, I'd say you're either a genius, behind the curve or being disingenuous.

bluefirebrand · 1h ago
> If you say AI does 0% of your work, I'd say you're either a genius, behind the curve or being disingenuous

AI was doing 0% of my work 10 years ago too, why should I be any less effective without it now?

You think I'm behind the curve because I'm not buying into the AI craze?

Ok. What's so important about being on the curve anyways, exactly? My boss won't pay me a single cent more for using AI, so why should I care?

sublinear · 50m ago
Are you saying everyone who isn't barely starting their career is a genius? In the current state of things I'd gladly take mediocre work from a human over slop from an AI.
hoherd · 1m ago
Seriously this. Doing code reviews on LLM-created code is so frustrating. If the code was submitted by a junior engineer, I could get on Zoom with them and educate them, which would make them a better teammate, advance their career goals, and make the world slightly better. With AI-created code, the review process is a series of tiny struggles to dig out of the hole the LLM created and get back to baseline code quality, and it'll probably be the same Sisyphean struggle with the next PR.
nobodynowhere · 2h ago
Morale is low because leaders think AI can do that amount of work, but it can't actually (at least not yet). This means they don't hire enough people to do the work needed, while also "drive-by" insulting the intelligence of the people they are overworking.
thaw13579 · 1h ago
This has been my observation as well. To add, I'm seeing leadership and stakeholders use their chats with LLMs to justify claims like "what I'm asking for is incredibly simple according to ChatGPT, and it should be done by end of today." Of course it rarely is, because the prompt is underspecified, the LLM solution is oversimplified, and it lacks context on the complexities of the existing codebase and the team's development & deployment processes.
lazystar · 1h ago
and the LLM probably responded with "You're absolutely right!" to every idea they asked about.
827a · 1h ago
It's a nice productivity and capability boost that feels on the same order of magnitude as, for example, React. The "dream" of it being able to just take tickets and agentically get a PR up for review is possible for ~5% of tickets. That goes up to ~10% if your organization has no standards at all, not even a self-serving standard like "at least make sure the repository remains useful to future AI usage".

My organization would still hire as many software engineers as we could afford.

- Stack Overflow has to be actually dead at this point. There's no reason to go there, or even Google, anymore.

- Using it for exploratory high level research and summarization into unfamiliar repos is pretty nice.

- Very rarely does AI write code that I feel would last a year without needing to be rewritten. That makes it good for things like knocking out a quick script or updating a button color.

- None of them actually follow instructions, e.g. in Cursor rules. It's a serious problem. It doesn't matter how many times or where I tell it "one component per file, one component per file" - all caps, threaten its children, offer it a cookie - it just does whatever it wants.

torginus · 1h ago
> Stack Overflow has to be actually dead at this point. There's no reason to go there, or even Google, anymore.

I wonder if we are going to pay for that, as a society. The number of times I went there, asked some tricky question about a framework, and had the actual author or one of the core contributors answer me was astonishing.

scubadude · 1h ago
> Stack Overflow has to be actually dead at this point. There's no reason to go there, or even Google, anymore.

If, like the meme, you just copied from SO without using your brain, then yes, AI is comparable.

If you appreciate SO for the discussion (peer review) about the answers and contrasting approaches sometimes out of left field, well good luck because AI can't and won't give you that.

jansan · 48m ago
There is some very deep knowledge in SO's comments and some lower-rated replies. But I suspect that is not what was making Stack Overflow so popular.
dbetteridge · 2h ago
Tired.

Mostly of having to try and explain to people why having an AI reduce software development workload by 30-50% doesn't reduce headcount or time taken similarly.

Turns out, lots of time is still sunk in talking about the features with PMs, stakeholders, customers, etc.

Reducing the amount of time a dev NEEDS to spend on boilerplate means they have more time to do the things that previously got ignored in a time-poor state, like cleaning up tech debt, security checks, accessibility, etc.

bluefirebrand · 57m ago
> having to try and explain to people why having an AI reduce software development workload by 30-50%

I'm tired of having to try and explain that AI isn't remotely reducing my workload by 30-50%, and in fact it often probably slows me down because the stupid AI autocomplete gets in the way with incorrect suggestions and prevents me from getting into any kind of flow

brothrock · 20m ago
AI has drastically changed how I make decisions about code and how I code in general. I get less bogged down with boilerplate code and issues, which makes me more efficient and allows me to enjoy architecting more. Additionally, I have found it extremely helpful in writing lower-level code from scratch rather than relying on plug-and-play libraries with questionable support. For example, why use a SQLite abstraction library when I can use LLMs to interact directly with the C source code? Sure it’s more lines of code, but I control everything. I wouldn’t have had the time before. This has also been extremely helpful in embedded systems and low-level Bluetooth.
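
To make that concrete, here is a minimal sketch of what "talking to the SQLite C API directly" can look like - illustrative only, not the commenter's actual code, and it assumes you link against -lsqlite3:

    /* sketch: one table, one insert, one query, straight against the sqlite3 C API */
    #include <stdio.h>
    #include <sqlite3.h>

    int main(void) {
        sqlite3 *db;
        sqlite3_stmt *stmt;
        if (sqlite3_open("app.db", &db) != SQLITE_OK) return 1;

        /* schema and write path, no ORM or wrapper in between */
        sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)",
                     NULL, NULL, NULL);
        sqlite3_prepare_v2(db, "INSERT OR REPLACE INTO kv VALUES (?1, ?2)", -1, &stmt, NULL);
        sqlite3_bind_text(stmt, 1, "greeting", -1, SQLITE_STATIC);
        sqlite3_bind_text(stmt, 2, "hello", -1, SQLITE_STATIC);
        sqlite3_step(stmt);
        sqlite3_finalize(stmt);

        /* read path */
        sqlite3_prepare_v2(db, "SELECT v FROM kv WHERE k = ?1", -1, &stmt, NULL);
        sqlite3_bind_text(stmt, 1, "greeting", -1, SQLITE_STATIC);
        if (sqlite3_step(stmt) == SQLITE_ROW)
            printf("%s\n", (const char *)sqlite3_column_text(stmt, 0));
        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return 0;
    }

More verbose than a wrapper library, but every call is visible and under your control, which is the trade-off being described.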

In terms of hiring - I co-own a small consultancy. I just hired a sub to help with some UI work while I'm on parental leave. AI isn't going to help my team integrate, deploy, or make informed decisions while I'm out.

Side note: with a newborn (sleeping on me at this moment), I can make real, meaningful edits to my codebase pretty much from my phone, then review, test, and integrate when I have the time. It's amazing, but I still feel you have to know what you are doing, and I am selective about which tasks I hand off and how to split them up. I also throw away a lot of generated code, the same as I throw away a lot of my first iterations; it's all part of the process.

I think saying "AI is doing X% of my work" is the wrong attitude. I'm still doing work when I use AI, it's just different. That statement kind of assumes you are blindly shipping robot code, which sounds horrible and zero fun.

pram · 1h ago
We are still hiring engineers. Everyone has a paid Cursor sub, and some people use Claude Code. We also have Claude in GitHub doing automatic PRs.

It’s mostly seen as a force multiplier. Our platform is all Java+Spring so obviously the LLMs are particularly effective because it’s so common. It hasn’t really replaced anyone though, also because it’s Java+Spring so most of our platform is an enormous incomprehensible mess lol

o11c · 1h ago
I'm just waiting for the hype-cycle to end. AI might revolutionize some industry (probably natural-language-adjacent), but not ours. COBOL has already been attempted, and far more competently (and with less energy cost).

If people can seriously have an AI do 50% of their work, that's usually a confession that they weren't actually doing real work in the first place. Or, at least, they lacked the basic competence with tools that any university sophomore should have.

Sometimes, however, it is instead a confession "I previously wasn't allowed to copy the preexisting solutions, but thanks to the magic of copyright laundering, now I can!"

b_e_n_t_o_n · 1h ago
I think of LLMs as essentially translators - taking natural language and translating it into something else. It works great for writing HTML, for example. The more declarative and high-level the language is, the better it does. Which makes intuitive sense: the closer the output is to the input, the better it does, imo.

So generally the people getting the most use out of LLMs are people who are using these higher levels of abstractions. And I imagine we will be building more abstractions like HTML to get more use out of it.

bluefirebrand · 55m ago
> If people can seriously have an AI do 50% of their work, that's usually a confession that they weren't actually doing real work in the first place. Or, at least, they lacked the basic competence with tools that that any university sophomore should have.

Strongly agree here. I am extremely skeptical of anyone reporting this kind of productivity gain.

YZF · 2h ago
- No major change in hiring due to AI.

- A lot of our code base is very specialized and complex, AI still not good enough to replace human judgement/knowledge but can help in various ways.

- Not yet clear (to me anyways) how much of a productivity gain we're getting.

- We've always had more things we want to do than what we could get done. So if we can get more productivity there's plenty of places to use it. But again, not clear that's actually happening in any major way.

I think the jury is still out on this one. Curious what others will say here. My personal opinion is that unless AI gets smart enough to replace a more experienced developer completely, and it's far from that, I'm quite sure there aren't going to be fewer software jobs. If AI gets to a point where it is equal to a good/senior developer, we'll have to see. Even then, it might be that our jobs just turn into more managing of AI, but it's not a zero-sum game; we'll do more things. Superintelligence is a different story, i.e. AI that is better than humans in every cognitive aspect.

oaiey · 1h ago
Worried about the next generation who - I think - will not learn normally (including whatever it does to the brain) and may never reach the degree of engineering capability some of us have.

Tired of leadership who think productivity will rise.

Tired of AI summaries sent around, unreviewed, as meeting minutes / action items. Tired of working through and responding to these.

amatecha · 1h ago
Seriously, AI meeting summaries are such shit. I see my name tasked with things I never committed to, or I see conclusions or action items grossly misrepresented, to a degree that any actual person who wrote those would lose their job. Stop using this shit please. What a waste of time and energy.
ai_assisted_dev · 1h ago
I have been in software for 20 years, and was just about to quit 2-3 years ago because of how mundane things had become. And now I am actually loving it again because of AI. I'd say AI writes 95% of my code, and I use it for 75% of the decisions I make while working on a project.

I am under MUCH more pressure to deliver more in shorter periods of time, with just me involved in several layers of decision making rather than a whole team. Which may sound scary, but it pays the bills. At one company I contract with, I now have 2 PMs, and I am the only dev on a production app with users, shipping new features every few days (rather than weeks).

It feels more like performance art than software development at this point. I am still waiting for some of my features to bring prod crashing down in fantastic fashion - being paged at 3am, debugging for 12 hours straight because AI has built such a gigantic footgun for me... but it has yet to happen. If anything I am doing less work than before - being paid a little more, and the companies working with me have built a true dependency on my skills to ship, maintain, and implement stuff.

moltar · 29m ago
I was thinking of doing something similar. I think I'm well positioned for this, as I have a natural ability to juggle many contexts, I used to run a software agency, and I'm pretty good at architecture early on, which means solutions come out more robust and flexible. I have had really good experiences with AI tools and I'm constantly evolving my workflows.

I'm wondering: how did you land your current gigs?

Thank you.

englishrookie · 51m ago
Would you mind sharing your setup (LLM model, IDE, best practices)? Personally, I'm struggling to get value out of Continue.dev in VSCode (using Gemini 2.0 Flash by default, with the option to switch to more advanced models). I still frequently revert to pasting code into the ChatGPT chat window (on the website).

Are you using agentic features, given that you have not just one but two PMs?

kazinator · 2h ago
I feel like I suddenly have a superpower.

I'm wearing glasses that tell me who all the fucking assholes and impostors are.

mirekrusin · 1h ago
Do you mind elaborating on how? It's hard to say if it's sarcasm or if you're referring to some genuinely interesting insight.
bluefirebrand · 52m ago
My insight is if you think AI is giving you a 50% performance boost, you're either an imposter or a paid shill
lsb · 1h ago
I used Claude Code to navigate a legacy codebase the other day, and having the ability to ask "how many of these files have helper methods that are duplicated or almost but not quite exactly duplicated?" was very much a superpower.
jansan · 46m ago
Just like refactoring tools felt like a superpower, if you were already around at that time (early 2000s).
caro_kann · 1h ago
My company is still hiring engineers like it was before. About the work itself, I can say LLMs are good with PoCs or new projects; I can't say the same about already existing codebases. For me it's a good tool, but not THE solution. Lately I'm writing a lot of AWS Serverless configurations with CloudFormation, and LLMs hallucinate a lot there. At this point, I always verify whether something exists in the docs or not, because they spit out stuff that doesn't exist at all.
jgb1984 · 27m ago
I'm not using AI for anything. I read and write my own emails, make my own slides, write my own python code using vim, debian, openbox, bash and tmux, just as I have been for almost 20 years. I don't even use an LSP or autocompletion! Hell, I even read actual books, on paper!

And yes, I did test ChatGPT, Claude, Cursor, aider... They produce subpar code, riddled with subtle and not-so-subtle bugs, and each of my attempts turned out to be a massive waste of time.

LLMs are a plague and I wish they had never shown up; the negative effects on so many aspects of the world are numerous and saddening.

dreckneck · 1h ago
In the practical sense, not much of my work actually changed and my company seems to be hiring the same as before.

In the psychological sense, I'm actually devastated. I'm honestly struggling to be motivated to learn/create new things. I'm always overthinking stuff like:

- "Why would I learn mobile app dev if in the near future there will be an AI making better UIs than me?" - "Why would I write a development blog?" - "Why would I publish an open-source library on GitHub? So that OpenAI can train its LLM on it?" - "Why would I even bother?"

And then, my motivation sharply drops to zero. What I've been up to lately is playing with non-tech related hobbies and considering switching careers...

tom_m · 16m ago
It's a great tool for a programmer...but the external perception isn't great. It can put pressure on people and also lead to undervaluing programmers. Overall it's probably a bad thing. Though it is fun.
horttemppa · 1h ago
I work with rather 'basic' CRUD applications with CMS and user management portals, plus some integrations to CRM systems etc. There is a lot of legacy stuff, rather bad practices, and no general style guidelines being followed.

AI helps here and there, but honestly the bottleneck for output is not how fast the code is produced. Task prioritization, lacking requirements, information silos and similar issues cause a lot of 'non-coding work' for developers (and probably just waiting around, for some who don't want to take initiative). Also, I think the most time-consuming coding task is usually debugging, and AI tools don't really excel at that in my experience.

That being said, we are not hiring at the moment but that really doesn't have anything to do with AI.

jraph · 1h ago
Patiently waiting for the HN front page to be about something other than generative AI.
jansan · 42m ago
I found it amusing when, a few days ago, the front page was littered with ChatGPT 5 news, and then suddenly, when reactions turned negative, that news entirely disappeared.
tiberius_p · 1h ago
I work in hardware design and verification. I've seen many AI-based EDA tools proposed at conferences, but in the team I'm working in now I haven't seen AI being adopted at all. Among the proposed tools that caught my attention: generating SystemVerilog assertions from natural language prompts, generating code fixes from lint errors, generating requirements, vplans and verification metrics from specifications written in natural language, and using LLMs inside IDEs as coding agents and chat bots to query the code.

I think the hardware industry will be harder for AI to penetrate because hardware companies are more secretive about their HDL code and go to great lengths to avoid leaks. That's why most of them have an in-house IT infrastructure and avoid the cloud as much as possible, especially when it comes to storing HDL code, running HDL simulations, formal verification tools and synthesis. Even if they were to employ locally hosted AI solutions, that would require big investments in expensive GPUs and expensive electricity bills; the industry giants will afford it while the little players won't.

The ultimate goal is to tape out bug-free chips, and AI can be a great source of bugs if not properly supervised. So humans are still the main cogs in the machine here. LLMs and coding agents can make our jobs a whole lot easier and more pleasant by taking care of the boring tasks and leaving us with the higher level decisions, but they won't replace us any time soon.
picafrost · 1h ago
My organization isn't a pure tech company so not much has changed. Management acknowledges AI's velocity but maintains a healthy skepticism of throwing "AI" into everything as a panacea. Writing the code has rarely been the hard part.
webprofusion · 1h ago
I think currently once you get into the weeds of a project the AI can only really lend a helping hand, rather than do 30-50% of the work.

It can kickstart new projects to get over the blank page syndrome but after that there's still work, either prompting or fixing it yourself.

There are requirements-led approaches where you can try to stay in prompt mode as much as possible (like feeding spec to a junior dev) but there is a point where you just have to do things yourself.

Software development has never been about lines of code, it has always required a lot of back and forth discussion, decisions, digging into company/domain lore to get the background on stuff.

Reviewing AI code, and lots of it, is hard work - it can get stuff wrong when you least expect it ("I'll just stub out this authentication so it returns true and our test passes").

With all that in mind though, as someone who would pay other devs to do work, I would be horrified if someone spent a week writing unit tests that I can clearly see an AI would generate in 30 seconds. There are some tasks that just make sense for AI to do now.

webprofusion · 1h ago
Where it really does open your eyes is when dealing with stuff you just wouldn't have done otherwise:

- Can't remember the name of that web tool you used to base64 decode locally? Just ask for one.

- Would love to have a quick tool that does X: done.

- Wouldn't know where to start building a C++ VST plugin for audio processing: done.

- Point it at a protocol RFC and get it to generate an API implementation stub: done (that one went from "maybe one day" to "shipped" simply because the initial donkey work got done by AI).
0points · 27m ago
> I'm just wondering what the morale is with AI doing 30-50% of your work?

I don't know any developers who use AI to that large extent.

I myself am mostly waiting for the hype to die out so we can have a sober conversation about the future.

its-kostya · 21m ago
Our company is trialing AI tools for developers. I've had both good and bad results with them, but my job satisfaction is way low in both cases.
sssilver · 1h ago
It’s like autocomplete on steroids.

When code autocomplete first came out everyone thought software engineering would become 10x more productive.

Then it turned out writing code was only a small part of the complex endeavor of designing, building, and shipping a software system.

brap · 1h ago
I often find myself pissed off that AI can’t properly do even the most trivial, menial coding work. And I have to spend more time guiding it than doing it myself.

On the other hand, I find it super useful for debugging. I can paste 500k tokens into Gemini with logs and a chunk of the codebase and ask it what's wrong; 80% of the time it gets it right.

givemeethekeys · 1h ago
The slowdown in hiring outside of AI is the bigger morale hit.
exfalso · 1h ago
Mostly feeling like a caveman. I've been trying and failing to use it productively since the start of the hype. The amount of time wasted could've been used for actual development.

I just simply don't get it. Productivity delta is literally negative.

I've been asking to take on projects where I thought "oh, maybe this project has a chance of getting an AI productivity boost". Nope. Personal projects all failed as well.

I don't get it. I guess I'm getting old. "Grandpa let me write the prompt, you write it like this".

bluefirebrand · 48m ago
No, you're not alone

I find it wastes my time more than it helps

Everyone insists I must be using it wrong

I was never arrogant enough to think I'm a superior coder to many people, but AI code is so bad and the experience using it is so tedious that I'm starting to seriously question the skills of anyone who finds themselves more productive using AI for code instead of writing it themselves

pjmlp · 1h ago
The pressure to do more AI based work is certainly there.

Also from my experiences with agents, and given that I have been around computers since 1986, I can clearly see where the road is going.

Anyone involved with software engineering tasks should see themselves becoming more of a technical architect for their coding agents than a raw coder, just as nowadays Assembly is a required skill for some fields while others can code without ever learning anything about it.

Models will eventually become more relevant than specific programming languages: what is the point of discussing whether X or Y is better if I can generate whichever one I feel like asking for? If anything, newer languages will have an even harder time getting adopted; on top of everything that is already expected of them, now they also have to be relevant for AI-based workflows.

ealhad · 1h ago
As a software engineer: the only impact the AI bubble has on me is the time it takes to explain what's at stake to less tech-savvy colleagues. Zero consequences on my actual job, except being pissed off each time a promising project "pivots to AI" and starts shoehorning it everywhere.

As a person I'm increasingly worried about the consequences of people using it, and of what happens when the bubble bursts.

block_dagger · 1h ago
It’s still exciting times. Productivity up. In two years it will be different.
Netcob · 1h ago
Once in a while I save ~10 minutes by using AI. About as often, I embarrass myself by having to admit that my primary source was an AI while researching some topic.

The main thing that changed is that the CTO is in more of a "move fast, break things"-mood now (minus the insane silicon valley funding) because he can quickly vibe-code a proof-of-concept, so development gets derailed more often.

fcatalan · 1h ago
My org is always a decade behind, so I'm still just ignoring the official push for whatever Oracle low code crap is called.

Hiring is as haphazard and inadequate as it has been in the last 25 years, no change there.

AI usage is personal, widespread and on a don't ask don't tell basis.

I use it a lot to:

- Write bullshit reports that no one ever reads.

- Generate minimal documentation for decade old projects that had none.

- Small, low-stakes, low-complexity improvements: like when I have to update a page that was ugly when someone created it in 1999, I'll plop it into aistudio to give it a basic bootstrap treatment.

- Simple automation that wasn't worth it before: Write me a bash script that does this thing that only comes up twice a year but I always hate.

- A couple times I have tried to come up with more complex greenfield stuff to do things that are needed but management doesn't ever acknowledge, but it always falls apart and starts needing actual work.

Morale is quite crappy, as ever, but since some of the above feels like secretly sticking it to The Man, there are these beautiful moments.

For example when the LLM almost nails your bimonthly performance self report from your chat history, and it takes 10 minutes instead of 2 hours, so you get to quietly look out of the window for a long while, feeling relaxed and smug about pocketing some of the gains from this awesome performance improvement.

8note · 1h ago
It's really fun, like learning to code again: seeing what all can be done, and how much more power is available at your fingertips.

What sucks though is that it's super inconsistent whether the thing is gonna throw an error and ruin the flow, whether that's synchronous or async.

rkomorn · 1h ago
I'm trying to use it to do things I've never done before (i.e. UI stuff, when I've mostly been a backend SRE type).

I like that it makes it easy to learn new things by example.

I don't like that I have no idea if what I'm learning is correct (or at least recent / idiomatic), so everything I see that's new, I have to validate against other resources.

I also don't really know if it's any different from "tutorial hell".

greatwhitenorth · 28m ago
In my last company, they've fired all the employees except the CEO. He has a neuralink chip embedded in his brain and vibe codes all day through his brain waves. He even vibe codes during his sleep.

All companies will end up with just one employee. If you don't agree with this, you don't know how to prompt.

prisenco · 1h ago
Biding my time. GPT5 was a wake up call. The hype will die down and the hangover will begin.

Moving fast in the beginning always has caveats.

In the meantime I'm doubling down on math and theory behind AI.

sublinear · 42m ago
Only the most toxic workplaces are still pushing for this, and they have been for several years now.

AI is an irrelevant implementation detail, and if the pace of your work is not determined by business needs but rather how quickly you can crank out code, you should probably quit and find a real job somewhere better that isn't run by morons.

dudeinjapan · 1h ago
Cursor Bot on GitHub feels like a significant step forward; it catches tons of stupid mistakes, typos, etc. better than 95% of human reviewers can. The days of needing 2 reviewers on a PR are over IMHO - it allows human reviewers to focus on broader architectural decisions.
werealldevo · 1h ago
Angry because this is yet another play by the ruling class to make more money, and you, I, and everyone you know are going to pay dearly for it.

Baffled because there are too many rank-and-file tech workers who seem to think AI is exciting/useful/interesting. It's none of those things.

Just ask yourself who wants AI to succeed and what their motivations are. It is certainly not for your benefit.

NL807 · 1h ago
"Meh"
wahnfrieden · 1h ago
Developer operations and architecture haven't caught up to efficient and productive AI workflows yet. Most orgs don't have good ways for all their employees to have agents running in parallel and closing the loop within the agent (generating results that the agent can check and iterate on, with the dev being able to jump in easily to inspect). These iterations still require too much manual and bespoke management. So management and devs don't see the full picture yet on current agentic productivity potential and dismiss it as a wash on time savings.
Lionga · 1h ago
It is now 3 years since I was told AI would replace engineers in 6 months. How come all the AI companies have not replaced their engineers?
BrouteMinou · 1h ago
Are you familiar with: "the year of Linux on the desktop" ?

The AI will replace us all in 2028! For real this time.

But before that, all the mid-managers will be replaced first, then the tech writers, the QA people, the PM, the...

The devs will be turning off the lights behind them...

renewiltord · 1h ago
It's fucking sick, dude. My buddy and I have a two-person team pulling contracts you needed a whole team to do before. Fucking love it, mate.
deadbabe · 1h ago
50% of my code these days has been entirely replaced by AI, with little to no review beyond a cursory glance.

That 50% is unit tests.

01HNNWZ0MV43FF · 1h ago
It's not my biggest concern here in the US
ivape · 2h ago
Most companies are going to have to rebuild their business entirely or die. That’s very exciting because I really think this will usher in a new wave of companies/hiring. Everything has to be rebuilt so I really don’t buy the hiring Armageddon.
Lionga · 1h ago
1 to 5% of software can be improved with AI at its current level. Most incumbents will be even more secure, as every startup will be forced to put some BS AI into their thing by investooors.
b_e_n_t_o_n · 1h ago
I'm really enjoying using Claude Code. There is a learning curve, and you have to set your project up in a way that helps these agents work better, but when you do it's a massive productivity boost with certain stuff. It's generated some decent looking landing pages and other UI stuff that I would have otherwise spent multiple hours on. It can even build some backend services that I would have also spent a couple hours on. This time adds up, which lets me see my family and friends more and that's the most important thing to me.

I don't really see it replacing us in the near future though; it would be almost useless if I wasn't there to guide it, write interfaces it must satisfy, write the tests it uses to validate its work, etc. I find that projects become highly modularised, with defined interfaces between everything, so it can just go to work in a folder satisfying tests and interfaces while I work on other stuff. Architecting for the agents seems to lead to better design overall, which is a win.
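
As a rough illustration of that split (hypothetical names, not this commenter's project): the human writes the header and the test, and the agent-generated implementation has to satisfy both.

    /* slug.h - the interface the agent must satisfy (hypothetical example) */
    #ifndef SLUG_H
    #define SLUG_H
    #include <stddef.h>
    /* Lower-cases ASCII letters, collapses runs of other characters into a single '-',
       and trims leading/trailing '-'. Writes at most out_len-1 chars plus a NUL. */
    void slugify(const char *title, char *out, size_t out_len);
    #endif

    /* slug_test.c - the test the agent iterates against; slug.c is what the agent writes */
    #include <assert.h>
    #include <string.h>
    #include "slug.h"

    int main(void) {
        char out[64];
        slugify("Hello, World!", out, sizeof out);
        assert(strcmp(out, "hello-world") == 0);
        slugify("  Already--slugged  ", out, sizeof out);
        assert(strcmp(out, "already-slugged") == 0);
        return 0;
    }

The same shape works in any language: a narrow, typed boundary plus a test the agent can run in a loop until it passes.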

I'm just writing crud apps though, I imagine it's less useful in other domains or in code bases which are older and less designed for agents.

My next experiment is designing a really high-level component library to see if it can write dashboards and apps with it. It seems to struggle with more interactive UIs as opposed to landing pages.