This is one of the main reasons I got out, and AI is just making it worse by using ambiguous language to describe a solution.
Unlike the free market, I have no interest in contributing to the vast pile of shit software that already exists.
pferde · 2h ago
We're doomed when even supposed experts working on these "AI" tools can't help but anthropomorphize them.
"when edge cases emerge that the AI didn't anticipate"
The only "anticipation" that is happening within those tools is on token level, the tools have no idea (and are fundamentally unable to even have an idea) what the code is even supposed to do in the real world.
exitb · 1h ago
So if you query the model to implement a function and it handles a case YOU didn't anticipate, what actually happened there? What word would you use?
sindriava · 1h ago
I find it curious you deem yourself capable of stating with such certainty how the models work or what they are fundamentally capable of.
jennyholzer · 2h ago
you're only doomed because you can't distinguish a salesman from an expert
No comments yet
mattacular · 2h ago
> What really struck me was how Amazon's product decisions were driven by internal KPIs rather than user empathy.
First day on the job or what?
Herring · 1h ago
It’s an ancient lesson that has to be relearned over and over.
https://en.wikipedia.org/wiki/The_Woodcutter_and_the_Trees
Thanks for the rabbit hole, the Perry Index is my new thing for the next few months I guess
GuB-42 · 1h ago
> We're the last generation of people who translate ideas into code by hand. Our children will describe what they want and watch it appear on screen, the same way we describe what we want to search engines and watch results appear.
Describing what you want is programming! Code is great for that because it is more precise and less ambiguous than natural languages.
The part about search engines is missing a key element. When you do a search engine query or query an LLM, you interact with the system, using your own brain to interpret and refine the results. The idea with programming is having the machine do it all by itself with no supervision; that's why you need to be extra precise.
It is not so different from having a tradesman build you something. If you have a good idea of what you want and don't want to supervise him constantly, you have to be really precise with your request, casual language is not enough. You need technical terms and blueprints, in the same way that programmers have their programming language.
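A toy illustration in Python (names made up): "sort the names" sounds like a complete request until code forces the decisions that casual language glosses over:
    names = ["alice", "Bob", "Émile"]
    # "sort the names" leaves case handling unstated; code has to commit
    print(sorted(names))                    # ['Bob', 'alice', 'Émile'] (case-sensitive)
    print(sorted(names, key=str.casefold))  # ['alice', 'Bob', 'Émile'] (case-insensitive)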
No comments yet
roywiggins · 1h ago
> These are people who believe deeply that understanding code at a fundamental level is non-negotiable. They can spot inefficient algorithms, they know why certain design patterns exist, they understand the underlying systems well enough to debug problems that AI tools can't handle.
What about the people who just want to have a pretty good idea of what the actual code is doing? Like, at a highish level, such as "reading some Typescript and understanding roughly how React/Redux will execute it." Not assembly, not algorithms development, but just nuts and bolts "how does data flow through this system." AI is great at making a good stab at common cases, but if you never sit down and poke through the actual code you are at best divining from shadows on the cave wall (yes it's shadows all the way down, but AI is so leaky that it can't really be considered an abstraction).
Just the other day I had GPT 4o spit out extremely plausible looking SQL that "faked" the join condition because two tables didn't have a simple foreign key relationship, so it wrote out:
    select * from table_a
    JOIN table_b ON table_b.name = 'foo'
Perfectly legal SQL that quietly does something entirely nonsensical: the ON clause never references table_a, so every row of table_a gets paired with every table_b row named 'foo'. Nobody would intentionally write a JOIN ... ON clause like this, and in context an outer join made no sense at all, but buried in several CTEs it took a nonzero amount of time to spot.
germandiago · 1h ago
Pity the people/assistants who will make mistakes when they have to fit, integrate, and modify all that AI-written code.
My experience is that there is always a complexity threshold at which things start to take longer, not shorter, with the use of AI. This doesn't apply to one-off scripts or small programs. But when you have systems that touch a lot of context, different languages, and different parts of the stack, AI is bad at designing that code, beyond maybe some advice or ideas. And even there it does not always get it right.
Give any AI an atypical problem and you will see it spit out big hallucinations. Take the easy ones and then, yes, you are faster, but those can be and have been done a million times. It is just a bounded (in complexity) accelerator for your typical, written-many-times code. Give it assembly to optimize, SIMD, or something with little documentation, and you will see how it performs: badly.
It is the tool for one-off scripts, scaffolding and small apps. Beyond that, it falls short.
It is like a very fast start with a lot of tech debt added for later.
varjag · 2h ago
I liked the author's chocolate analogy although not in the way they intended. Yes it's a known reproducible tech and yes Hershey dominates there. Hershey is also the worst crap you can buy for the money.
orochimaaru · 1h ago
In a way the author is right. AI apps will be like Hershey's - uninspiring junk. Personally, I don't eat Hershey's. If I need chocolate, I have good Swiss or Belgian brands that are much better.
Coming back to software - I believe the author is correct. We will be able to standardize prompts that can create secure deployments and performant applications. We will have agents that can monitor these and deal with 95% of the issues that happen. The other 5%, I have no clue. Most of what industry does today needs standardized architecture based on specs anyway. Human innovation via resume-driven design generally overcomplicates things.
ChrisMarshallNY · 2h ago
I don’t disagree with the article, but it’s very much written from a modern “move fast and break things” PoV.
That’s how Hershey Kisses are made.
I’ve always been more of a Lindt kind of person. Not top of the heap (around here, the current kick is “Dubai Chocolate,” with $20 chocolate bars), but better than average.
I try to move quickly, and not break anything. It does work, but it’s more effort, takes longer, and is more expensive (which is mainly why “move fast and break things” is so popular).
I’m looking forward to “Artisanal” agents, that create better stuff, but won’t have a free tier, and will require experienced drivers.
komali2 · 1h ago
I thought the same thing. That was the aha moment for me that explained the past few months of confusing back and forth about whether AI code is any good.
Apparently, hundreds of millions, maybe billions, of people like Hershey's chocolate (I believe there's a difference between the American version and the Asian/European version; all are bad, but the American is beyond sickly sweet and awful). Fine. I will try not to judge, but my god is Hershey's chocolate just awful. I wish I could share a proper dark with every one of those people and tell them to let it melt rather than chew it, to see how amazing chocolate can be (I make it by hand from Pingtung beans that I roast and shell myself, but you can get excellent "bean to bar" chocolate in every major city these days). I wish I could share a cup of proper cappuccino from beans roasted that day with everyone that gets a daily Starbucks. I wish I could share a glass of Taihu with everyone that ends the day slamming a couple Buds or Coors.
But, I guess because it's cheap, or easy, or just because it's what they're used to and now they actually like it, people choose what to me is so terrible as to be almost inedible.
I guess I'm like a spoiled aristocrat or something, scoffing at the peasants for their simple pleasures, but I used to be a broke student, and when my classmates were dropping $5 a meal on the garbage dining hall burgers, I was making simple one-pot paella-like dishes with like $1 of ingredients, because that tasted better and was healthier, so, I don't know.
Anyway, vibecoded apps probably are bad, but they're bad in the way a Hershey's bar is bad: bad enough to build a global empire powerful enough to rival a Pharaoh, so powerful that it convinces billions that their product is actually good.
sarchertech · 2h ago
The author works for a company that builds an “AI ad maker”. And they link to it in the first paragraph.
noodletheworld · 1h ago
It would be refreshing, just now and then, to see a passionate AI bro who isn’t selling AI.
You know, I get it; earn those clicks. Spin that hype. Pump that valuation.
Now, go watch people on YouTube like Armin Ronacher (just search, you’ll find him), actually streaming their entire coding practice.
This is what expert LLM usage actually looks like.
People with six terminals running Claude are a lovely bedtime story, but please, if you’re doing it, do me a favour and do some live streams of your awesomeness.
I’d really love to see it.
…but so far, the live coding sessions showing people doing this “everyday 50x engineer” practice don’t seem to exist, and that makes me a bit skeptical.
oytis · 1h ago
Not necessarily a reason to dismiss them. If they believe in the bright AI future, it's only natural to work for an AI company.
sarchertech · 1h ago
1. Working for an AI company is different from actively selling an AI company.
2. An AI ad generator is one of the worst possible uses of AI I can think of.
roywiggins · 1h ago
I read a human interest article where someone said they were working on an AI for automatically turning medical-speak into something sick kids could understand, which currently takes top spot on my ranking for worst AI idea. Fobbing off sick kids onto a chatbot to teach them about their condition, what could go wrong?
People who think this would work and want to make it happen walk among us.
sarchertech · 59m ago
My wife is a pediatric ER doctor, I don’t even need to ask to know she’d absolutely hate it. Pediatric doctors and nurses don’t use medical speak when talking to kids, so that’s just such a stupid product.
jennyholzer · 2h ago
"AI ad maker" sounds like a toilet that produces its own shit
xandrius · 1h ago
Anyone up for creating an AI ad watcher?
desbo · 1h ago
Thank you.
bryanlarsen · 1h ago
I've got a decade or so of professional assembly language programming experience. [1] It's a useful skill even when not programming in assembly. An assembly language programmer generally understands the machine better than most, and very occasionally the ability to inspect compiler output is useful.
This transition really feels like that. If the metaphor holds (and who knows if it will):
1: the transition will take longer than people expect. I was programming assembler well into the 90's. AI is at the level compilers were in the 50's, where pretty much everybody had to understand assembler.
2: the ability to understand code, rather than the spec documents AI works from, is valuable and required, but will be required in smaller numbers than most of us expect. Coding experience helps make better spec sheets, but the other skills the original post espouses are also valuable, if not more so. And many of us have let those skills atrophy.
[1] 10 years is questionable. Is being paid $100 for a video game with ~100 hours of work put into it professional work? I have about 3 years of work doing assembly for an actual salary.
zelphirkalt · 1h ago
However true or false the vision described in the article may be, the crux of the matter is that the dev described in the article depends on a third party to even be able to do their job. That third party can change or disappear quickly, or ramp up costs to a level that will be unacceptable to pay.
To me it looks like a rather bleak outlook on the future, if we all are supposed to work like that.
photonthug · 1h ago
> There are two camps emerging, and the difference isn't really about skill level or experience.
Assuming everything else the author believes is true, the real camps are "money" and "less money". Those camps already determine the success of businesses to a large extent. But especially in SWE, where we traditionally cared less about degrees and more about skill, it's a new thing that "skill" and "experience" are directly cash-related, something you can buy and outsource.
Looking for work and need a better github portfolio? Just up your Claude spend. Find yourself needing a promotion at work and in possession of some disposable income? Just pay out of pocket for the AI instead of expecting overtime from your employer or working nights and weekends, because you know you'll make up the difference when you're in charge of your department.
There is some historical precedent for this sort of thing; just read up on the buying and selling of army commissions. That worked about as well as you might expect: when "expertise" was purchased like this, the officers you got were incompetent, and they mostly just fed soldiers into a meat-grinder. https://en.wikipedia.org/wiki/Purchase_of_commissions_in_the...
ben_w · 2h ago
> They'll judge us the way we judge people who manually calculated ledgers before spreadsheets existed. Impressive dedication to craft, but ultimately unnecessary effort spent on problems that got solved by better tools.
One of my dad's anecdotes was of someone who was very proud of the fact that they could multiply numbers with a slide rule faster than any of the then-newfangled electronic hand calculators.
A lot of stuff changed between him being born in 1939 and when he took early retirement in the late 90s.
Kinda weird that it's possible he might be one of the first generation programmers and I might be one of the last.
Such rapid change.
germandiago · 1h ago
We are not at that point. There is a lot to improve in terms of context and specialization.
I really think AI is tremendously over-hyped, and AGI is just a way of selling and making money for the people who believe it is even possible.
These tools are probabilistic parrots, and the proof is that when you give them something for which not much documentation exists, they start to hallucinate a lot.
watwut · 2h ago
> One of my dad's anecdotes was of someone who was very proud of the fact that they could multiply numbers with a slide rule faster than any of the then-newfangled electronic hand calculators.
I am 100% willing to admit that the guy was cool and had good reason to be proud of that.
roxolotl · 1h ago
The point being made, that code is cheap, anything is replicable, and what actually matters is building human-centered, useful software, is a very good one. However, it has nothing to do with today. This has always been true. It has always been the case that, outside of the most complex applications, you can rip off almost any piece of software in significantly less time than it took to build.
mostlyk · 1h ago
I have still yet to find an LLM that understands it has to look for files and change config after an error; they have no understanding of when to comment things out to get things working. To be fair, it can make websites and algorithms, and copy things nicely, but that's about it.
Another tool in the box. import pdb is still my way.
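For anyone who hasn't tried it, a minimal sketch (the function and its name are just an example):
    import pdb
    def load_config(path):
        pdb.set_trace()  # execution pauses here: p inspects variables, n steps, c continues
        with open(path) as f:
            return f.read()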
merlincorey · 1h ago
Delegation is not the same as abstraction, to my understanding.
joerter10 · 1h ago
Exactly. The 'LLM coding is just like higher-level languages' analogy misses this crucial distinction.
atomtamadas · 1h ago
As long as there's no AI that can either directly generate machine code or generate hardware itself, we can be sure that AI has only a shallow intelligence: it simply learned to imitate the work of developers from countless code examples, guessing the most likely correct answer.
At least this is good material for imagining some funny future sci-fi scenarios, like compiler developers optimizing for AI-generated code, similarly to how hardware developers sometimes optimize for some dominant compiler's output. In the far future, anthropologists will discover dead programming languages inside long-untouched AI generative pipelines and try to decipher them :)
rckt · 1h ago
Simply BS. I recently struggled to get the Stripe docs AI agent to give me a working answer; it wasn't even about the code, but about searching through the documentation.
And who's gonna build the new stuff and not just spit out interpretations based on the already working examples? Man, the AI promoters are something.
quantiq · 2h ago
> The only bottleneck is mdoel (sic) speed and quality. But with billions of dollars pouring into generative AI every year, we will see instant voice-to-code capabilities + bug-free quality in 2-5 years.
Anytime I think the AI bubble can't go any higher I'm reminded of the fact there are people who genuinely believe this. These are the people boardrooms are listening to despite the evidence for actual developer productivity savings being dubious at best: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
jennyholzer · 1h ago
What are these people going to do when the bubble pops?
What happens when the money goes away and they realize they've been duped into joining a cult?
Traubenfuchs · 1h ago
I’d really like to watch those “4 Claude windows at once” 100x developers in action.
For me, Claude creates plenty of bugs and hallucinations with just one.
Anything besides extremely simple things or extremely overprompted commands comes out 100% broken and wrong.
surgical_fire · 49m ago
Eat meat, said the butcher.
yapyap · 2h ago
It’s fun to get swept away in the worldview of AI hyperenthusiasts in the same way it’s fun to get sucked into the world of Percy Jackson when reading the books.
It is loosely based on reality but not in line with it.
drivingmenuts · 1h ago
If I wanted to herd toddlers, I'd have had children.
j1000 · 1h ago
6 split screen Claude CLIs? Bro will be out of hair pretty soon
bgwalter · 1h ago
> The question isn't whether this future arrives. Looking at the money and talent flowing into AI development, it's inevitable. The question is whether you'll be ready when it does, and whether you'll be working on the parts of product development that actually matter in that world.
And there it is: inevitable. The whole article is written in a pseudo-religious manner, probably with the help of "AI" to collate all known talking points.
I think the author is not working on anything that matters. His company is one of a million similar companies that ride the hype wave.
What matters is real software written before 2023 without "AI", which is now stolen and repackaged.
thiago_fm · 1h ago
I tried the Claude Code thing and the generated code is utter garbage.
Also, it fails to iterate on complex features. If you are just creating CRUDs, it may work. But even in CRUD scenarios I've seen it completely lose context and use wrong values, and things break in ways that are hard to track down or fix.
I'm surprised people work with it and say those things. Are they really using the same tool I use?
I'm sure the problem isn't my prompting, because I've tried watching many videos of people doing the same, and I see the same issues as I've said.
davydm · 2h ago
FTA: "But with billions of dollars pouring into generative AI every year, we will see instant voice-to-code capabilities + bug-free quality in 2-5 years."
HA HA HA HA HA HA HA HA HA HA HA HA
omg, thanks for the laugh - "bug-free quality in 2-5 years" pfffffft I'm not holding my breath - rather, I think that by then, the hype will have finally lost some steam as companies crash and burn with their shitty, "almost working" codebases.
Rzor · 2h ago
I'm in the same camp as you in that I don't think the hype is justified, but when it comes to medium- and long-term adoption, I can easily see LLMs getting good enough that programmers will be more like plumbers than system designers and maintainers. At the very least it's going to be hard to justify big teams.
I do wonder if tech is ready for more competition then, because starving is a hell of a motivator.
joerter10 · 1h ago
I tend to agree. Unless there is some kind of major breakthrough, I see generative AI code quality leveling off on a heavy asymptotic curve.
MaxLeiter · 2h ago
Think about how much progress has been made in the last 2-5 years. I can understand skepticism but not the HA HA HAs
marginalia_nu · 1h ago
Ha-has are perhaps tonally inappropriate, but when you look at the facts, it seems unlikely. What we've seen in the last few years is fairly unlikely to continue forever; that's rarely how trends go. If anything, if we actually look at the trend lines, the improvements between model generations are becoming smaller, and the models are getting larger and more expensive to train.
A perhaps bigger concern is how flimsy the industry itself is. When investors start asking where their returns are at, it's not going to be pretty. The likes of OpenAI and Anthropic are deep in the red, absolutely hemorrhaging money, and they're especially exposed since a big part of their income is from API-deals with VC-funded startups that in turn also have scarlet balance sheets.
Unless we have another miraculous breakthrough that makes these models drastically cheaper to train and operate, or we see massive increases in adoption from people willing to accept significantly higher subscription fees, I just don't see how this is going to end the way the AI optimists think it will.
We're likely looking at something similar to the dot com bubble. It's not that the technology isn't viable or that it's not going to make big waves eventually, it's just that the world needs to catch up to it. Everything people were dreaming of during the dot com bubble did eventually come true, just 15 years later when the logistics had caught up, smartphones had been invented, and the web wasn't just for nerds anymore.
oytis · 1h ago
> Unless we have another miraculous breakthrough
I guess the argument of AI optimists is that these breakthroughs are likely to happen given the recent history. Deep learning was rediscovered like, what, 15 years ago? "Attention is all you need" is 8 years old. So it's easy to assume that something is boiling deep down that will show impressive results 5-10 years down the line.
marginalia_nu · 1h ago
Scientific breakthroughs happen, but they're notoriously difficult to make happen on command or on a schedule. Taking them for granted or as inevitable seems quite detached from reality.
oytis · 1h ago
True, but given how many breakthroughs we have had in AI recently, for text, sound, images, and video, the odds of new breakthroughs happening are probably higher than otherwise.
We have no idea how many of them we need until AGI, or at least until software engineers are replaced, though.
marginalia_nu · 1h ago
That's mostly just a few discoveries finding multiple applications. That's fairly common after a large breakthrough, and what you see is typically a flurry of activity and then things die down as the breakthrough gets figured out.
zwnow · 2h ago
When there's something new and shiny, progress is made fast until we reach the inevitable ceiling. AI has unsolved issues. The bubble will eventually pop and the damage will be astounding. I will co-sign the HA HA HAs. People are delusional.
sindriava · 1h ago
Could you describe in what way you find the current paradigms "new", or which unsolved issues you're talking about?
zwnow · 1h ago
Current scaling limitations, where the cost outgrows the efficiency gains. True long-term reasoning. Hallucinations, and pattern-matched rather than structural reasoning. Reasoning on novel tasks. Hitting a data wall, i.e. a lack of training data. Stale knowledge. Biased knowledge. Oh, and let's not forget all the security-related issues nobody likes to talk about.
sindriava · 1h ago
So just to be clear when you mention "AI" you're mostly talking about LLMs right? Since most of these don't apply to expert systems.
jennyholzer · 2h ago
HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA