> There’s a strange disconnect in the industry. On one hand, GitHub claims that 20 million users are on Copilot, and Sundar Pichai says that over 25% of the code at Google is now written by AI. On the other hand, independent studies show that AI actually makes experienced developers slower.
From the study[0]:
> 16 developers with moderate AI experience complete 246 tasks in mature projects on which they have an average of 5 years of prior experience.
This study continues to get a lot of airtime on HN and elsewhere. Folks probably should be skeptical of a study that pairs such a small number of subjects with such a broad claim.
Shouldn't users be equally skeptical of claims by AI companies? I'd argue things even out in that case.
[0] https://arxiv.org/pdf/2507.09089
jwitthuhn · 2h ago
Anyone pointing to that as proof that AI slows developers down has not actually read it. See appendix B
"We do not provide evidence that: AI systems do not currently speed up many
or most software developers"
ninetyninenine · 2h ago
Why is stuff like this making it to the front page?
He looks like a typical software engineer with a very, very generic opinion on AI, presenting nothing new.
The arrogance the article starts off with, where he talks about how much time he's invested in AI (1.5 years, holy cow) and how that makes him qualified to give his generic opinion, is just too much.
tempodox · 2h ago
Who would be qualified to give an opinion then?
ninetyninenine · 1h ago
Tons of more qualified people. Geoffrey Hinton, for one. That's not even the main issue though. I don't care who he is.
The point is this opinion is generic. It’s nothing new. It’s like someone stating “cars use gas, I’ve been driving for 1.5 years and I learned enough to say that cars use gas.”
> For everything I don’t like doing, AI is phenomenally good. Take design, for instance.
I've seen this sentiment quite a bit; I think it's really baked into the human psyche. I think we understate the importance of what we don't enjoy and perhaps overstate the importance of the tasks we do enjoy and excel at. It makes sense: we're defending ourselves and our investments in learning our particular craft.
smokel · 3h ago
> The company that creates an AGI first will win and get the most status.
I doubt it. History has shown that credit for an invention often goes to the person or company with superior marketing skills, rather than to the original creator.
In a couple of centuries, people will sincerely believe that Bill Gates invented software, and Elon Musk invented self-driving cars.
Edit: and it's probably not even about marketing skill, but about being so full of oneself as to have biographies printed and make others believe how amazing you are.
jacobedawson · 2h ago
Without getting sidetracked by definitions, there's a strong case to be made that developing AGI is a winner-takes-all event. You would have access to any number of tireless human-level experts that you could put to work improving the AGI system, likely leading to ASI in a short amount of time, with a lead of even a day growing exponentially.
Where that leaves the rest of us is uncertain, but in many worlds the idea of status or marketing won't be relevant.
amelius · 3h ago
A few decades ago I thought that the first person to create AGI would instantly receive a Nobel prize/Turing award.
But my opinion on this has shifted a lot. The underlying technology is pretty lame. And improvements are incremental. Yes, someone will be the first, but they will be closely followed by others.
Anyway, I don't like the "impending doom" feeling that these companies create. I think we should tax them for it. If you throw together enough money, yeah, you can do crazy things. Doesn't mean you should be able to harass people with it.
XCSme · 3h ago
I doubt LLMs will lead to AGI.
Yes, it gets "smarter" each time, more accurate, but still lacks ANY creativity or actual thoughts/understanding. "You're completely right! Here, I fixed the code!" - proceeds to copy-paste the original code with the same bug.
LLMs will mostly replace:
- search (find information/give people expert-level advice in a few seconds)
- data processing (retrieval of information, listening for and reacting to specific events, automatically transforming and forwarding information)
- interfaces (not entirely, but mostly augment them, sort of a better auto-complete and intention-detection).
- most content ideation and creation (will not replace art, but if someone needs an ad, a business card, landing page, etc., the AI will do a good enough job).
- finding errors in documents/code/security, etc.
All those use cases are already possible today; AI will just make them more efficient.
It will be a LONG time until AI knows how to autonomously achieve the result you want and has the physical-world abilities to do so.
For AGI, the "general" part will only be as broad as the training data. Also, right now the AI listens too closely to human instruction and is crippled for (good) safety reasons. While all those limitations are in place, the "general intelligence" will remain limited, as it would be too dangerous to set zero limits and see where it goes (not because it's smart, but because it's like letting malware have access to the internet).
smokel · 2h ago
> The underlying technology is pretty lame.
This depends on the perspective. Take a step back and consider what the actual technology is that makes this possible: neural networks, the transistor, electricity, working together in groups? All pretty cool, IMHO.
apples_oranges · 3h ago
And Sam Altman invented AI
begueradj · 3h ago
I agree.
For example, electric cars were already around in the mid-1800s. But some people believe Elon Musk is the original inventor.
pineaux · 3h ago
Some people believe the earth is flat. I doubt that the invention of the electric car will be attributed to Musk. The invention of the car is not attributed to Ford either...
netown · 3h ago
> the people who gain the most from all these tools are not the developers—it’s all the people around who don’t write code
This insight stood out the most to me. I definitely agree, but what's interesting is the disconnect with the industry: it seems to be accepted right now that if coding is what AI is best at, developers must be the only ones who care, and that seems to have shown up in usage as well (I don't think I've seen much use of AI outside of personal use other than by developers; maybe I'm wrong?).
lowsong · 4h ago
> the future of software engineering will inevitably have more AI within it
Probably not. We're deep in the hype bubble, so AI is strongly overused. Once the bubble pops and things calm down, some use-cases may well emerge from the ashes but it'll be nowhere near as overused as it is now.
> AI has become a race between countries and companies, mostly due to status. The company that creates an AGI first will win and get the most status.
There's a built-in assumption here that AGI is not only possible but inevitable. We have absolutely no evidence that's the case, and the only people saying we're even remotely close are tech CEOs whose entire business model depends on people believing that AGI is around the corner.
rco8786 · 4h ago
> We're deep in the hype bubble, so AI is strongly overused
I don't think these things are really that correlated. In fact, kind of the opposite. Hype is all talk, not actual usage.
I think this will turn out more like the internet itself. Wildly overhyped and underused when the dotcom bubble burst. But over the coming years and decades it grew steadily and healthily until it was everywhere.
Agreed re: AGI though.
codingdave · 3h ago
That is not how the dotcom bubble burst. Internet usage was growing fast before, during, and after the bubble. The bubble was about silly investments into it that had no business model - that investment insanity is what burst, not overall usage.
rco8786 · 3h ago
I think I am saying the same thing. My comment about "underused" should have been "underused relative to the investment dollars pouring in"
satyrun · 3h ago
Many of the business models were good too but they had the timing wrong.
Pets.com IPO'd at about $300 million, which is $573 million adjusted for inflation.
Chewy is at a $14 billion market cap right now.
I think comparing LLMs and the dotcom bubble is just incredibly lazy and useless thinking. If anything, all previous bubbles show what is not going to happen again.
rco8786 · 2h ago
> I think comparing LLMs and the dotcom bubble is just incredibly lazy and useless thinking. If anything, all previous bubbles show what is not going to happen again.
Curious to hear more here. What is lazy about it? My general hypothesis is that ~95% of AI companies are overvalued by an order of magnitude or more and will end up with huge investor losses. But longer term (10+ years) many will end up being correctly valued at an order of magnitude above today's levels. This aligns perfectly with your pets.com/Chewy example.
tovej · 3h ago
Certain parts of what we call AI will definitely be used more in the future: facial recognition, surrogate models, video generation.
I don't, however, see LLMs as consumer products being as prevalent in the future as they are currently. The cost of using LLMs is kept artificially low for consumers at the moment. That is bound to hit a wall eventually, at the very least when the bubble pops. At least that seems like the obvious analysis to make at this point in time.
rco8786 · 3h ago
The cost angle is interesting; I'm not close enough to the industry to say for sure. But from what I'm reading, inference and tokens are only getting cheaper.
Regarding usage - I don't think LLMs are going away. I think LLMs are going to be what finally topples Google search. Even my less technical friends and acquaintances are frequently reaching for ChatGPT for things they would have Googled in the past.
I also think we'll see the equivalent of Google Adwords start to pop up in free consumer LLMs.
dude250711 · 4h ago
It would be better if the entire codebase could always be provided to the AI as context. Otherwise, specifying exactly what you want is one step away from just doing it yourself.
> As a manager, AI is really nice to get a summary of how everything is going at the company and what tasks everyone is working on and the status of the tasks, instead of having refinement meetings to get status updates on tasks.
I do not understand why they are not marketing some "GPT Middle Manager" to the executive boards so that they could cut that fat. Surely that is a huge untapped cost-cutting potential?
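The plumbing for this is already trivial to prototype, which makes the gap even stranger. A minimal sketch, assuming the `openai` Python package (v1 API) and an OPENAI_API_KEY in the environment; the task data and model name are made up for illustration:

```python
# Hypothetical "GPT Middle Manager": turn a task-tracker export into an
# executive status summary. Task data below is invented for illustration.
from openai import OpenAI

tasks = [
    {"owner": "alice", "title": "Billing migration", "status": "in progress", "days_open": 12},
    {"owner": "bob", "title": "Fix login timeout", "status": "blocked", "days_open": 5},
    {"owner": "carol", "title": "Q3 load testing", "status": "done", "days_open": 2},
]

# Flatten the tracker export into plain text for the prompt.
task_lines = "\n".join(
    f"- {t['owner']}: {t['title']} ({t['status']}, {t['days_open']} days open)"
    for t in tasks
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a concise engineering status reporter."},
        {"role": "user", "content": f"Summarize for leadership and flag blockers:\n{task_lines}"},
    ],
)
print(response.choices[0].message.content)
```

Point the `tasks` list at a real Jira or Linear export and you've replaced the refinement meeting; the hard part is organizational, not technical.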
anonzzzies · 3h ago
Cannot be worse than almost all human managers, so agreed. I am a terrible manager myself; I'm a good CEO/CTO, making profits and keeping things running for almost no money, but I'm terrible at managing. And I haven't seen many managers who couldn't be replaced by a piece of cardboard. There are exceptions, but AIs can do terrible team management just as well, while keeping their upper managers/C-levels busy with nonsense documents, as is the standard for humans too. Indeed, yesterday I wrote on HN that this is what LLMs are VERY good for: generating ENORMOUS piles of paper to give to all types of (middle) management to make them feel valued.
justanotherjoe · 3h ago
The obvious next step is being able to easily put new knowledge inside the parameters of the model itself.
I want the AI to know my codebase the same way it knows the earth is round. Without any context fed to it each time.
Instead we have this weird Memento-esque setup where you have to give it context each time.
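To make the Memento-esque part concrete, here's roughly what the per-request context stuffing looks like today. A rough sketch, assuming an OpenAI-style chat API; the `*.py` glob, the character budget, and the model name are illustrative placeholders:

```python
# Rough sketch of the "context each time" workflow: the model retains
# nothing between calls, so every request re-feeds the source files.
from pathlib import Path
from openai import OpenAI

MAX_CONTEXT_CHARS = 100_000  # crude budget; real limits are per-model token counts

def gather_context(repo_root: str) -> str:
    """Concatenate source files until the crude character budget runs out."""
    chunks, used = [], 0
    for path in sorted(Path(repo_root).rglob("*.py")):
        text = path.read_text(errors="ignore")
        if used + len(text) > MAX_CONTEXT_CHARS:
            break  # everything past this point is invisible to the model
        chunks.append(f"# file: {path}\n{text}")
        used += len(text)
    return "\n\n".join(chunks)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "Answer questions about the provided codebase."},
        {"role": "user", "content": gather_context(".") + "\n\nWhere is auth handled?"},
    ],
)
print(response.choices[0].message.content)
```

Retrieval setups just automate which files get stuffed in; the weights themselves still start from zero on every call, which is exactly the complaint.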
nikolayasdf123 · 4h ago
They are already doing that, full steam ahead. Just look at HSBC, which has made it a goal to lay off middle managers; so have Google, Microsoft, and others.
watwut · 3h ago
We already had that cycle with agile. I predict half-baked models, chaos, and then a backlash of hiring even more managers, combining models and management into one large ineffective bundle.
The ones profiting the most will be consultancies designed to protect upper management's reputation.
sp527 · 39m ago
> This is why Python or JavaScript are great languages to start with (even if JavaScript is a horrible language)
The author was hemorrhaging credibility all along the way and then this comment really drove home what he is: a bike shedder who probably deliberately introduces complexity into projects to keep himself employed. If you read between the lines of this post, it is clearly a product of that mindset and motivation.
'AI is only good at the simple parts that I don't like, but it's bad at the simple parts I do like and that are my personal expertise and keep me employed.'
Yeah okay buddy.
demirbey05 · 4h ago
> If an AI can replace these repeated tasks, I could spend more time with my fiancé, family, friends, and dog, which is awesome, and I am looking forward to that.
I cannot understand this optimism. Aren't we living in a capitalist world?
shafyy · 2h ago
Exactly. I have yet to see the manager who says to their employees: "Ah nice, you became 10% more efficient using AI; from now on you can work 4 hours less every week."
_heimdall · 3h ago
I don't think it's about capitalism; people have repeatedly shown that we simply don't like idle time over the long run.
Plenty of people could already work less today if they just spent less. Historically any of the last big productivity booms could have similarly let people work less, but here we are.
If AGI actually comes about and replaces humans at most cognitive labor, we'll find some way to keep ourselves busy, even if the jobs are ultimately as useless as the pet rock or the Jump to Conclusions Mat (an Office Space reference, for anyone who hasn't seen it).
chongli · 3h ago
I don’t think it’s that simple. Productivity gains are rarely universal. Much of the past century’s worth of advancement into automation and computing technology has generated enormous productivity gains in manufacturing, communication, and finance industries but had little or no benefit for a lot of human capital-intensive sectors such as service and education.
It still takes basically the same amount of labour hours to give a haircut today as it did in the late 19th century. An elementary school teacher today can still not handle more than a few tens up to maybe a hundred students at the extreme limit. Yet the hairdressing and education industries must still compete — on the labour market — with the industries showing the largest productivity gains. This has the effect of raising wages in these productivity-stagnant industries and increasing the cost of these services for everyone, driving inflation.
Inflation is the real time-killer, not a fear of idleness. The cost of living has gone up for everyone — rather dramatically, in nominal terms — without even taking housing costs into account.
AlecSchueler · 3h ago
> idle time
But they're not talking about idle time, they're talking about quality time with loved ones.
> Plenty of people could already work less today if they just spent less.
But spending for leisure is often a part of that quality time. The idea is being able to work less AND maintain the same lifestyle.
smokel · 3h ago
It's slightly more complicated than that. If people work less, they make less money, and that means they can't buy a house, to name just one example. Housing is not getting any cheaper for a myriad of reasons. The same goes for healthcare, and even for drinking beer.
People could work less, but it's a group effort. As long as some narcissistic idiots who want more instead of less are in charge, this is not going to change easily.
smartmic · 3h ago
Yes, and now we have come full circle back to capitalism. As soon as a gap forms between capital and untapped resources, the capitalist engine keeps running: the rich get richer and the poor get poorer. It is difficult or impossible to break out of this on a large scale.
pineaux · 3h ago
The poor don't necessarily get poorer; that is not a given in capitalism. But at some point capitalism will converge to feudalism, and at that point the poor will become slaves.
And if not needed, culled. For being "unproductive" or "unattractive" or generally "worthless".
That's my cynical take.
As long as the rich can be reined in in some way, the poor will not necessarily become poorer.
shafyy · 3h ago
In neoliberal capitalism they do, though. Because companies can maximize profits without internalizing external costs (such as health care, social welfare, environmental costs).
aredox · 3h ago
It is indeed completely stupid: if he can do that, others can too, which means they can be more productive than he is, and the only way he would end up spending more time with his fiancé, family, friends, and dog is by quickly becoming unemployed.
deadbabe · 3h ago
Yes this is what people constantly get wrong about AI. When AI starts to replace certain tasks, we will then create newer, larger tasks that will keep us busy, even when using AI to its full advantage.
balfirevic · 1h ago
Do you expect AI to stop becoming more capable before it can do every economically useful task better than any human?
demirbey05 · 2h ago
That's what I meant. I don't think the boss wants to pay you the same money for less work time.
pydry · 3h ago
Or you'll be kicked out onto the street and shamed for being jobless.
tigrezno · 3h ago
Capitalism is ending with AGI/ASI, that's for sure.
XCSme · 3h ago
I am pretty sure UBI will be at least tested at a large scale in our lifetime.
shafyy · 3h ago
In the US, they can't even figure out universal healthcare; do you really think they are going to go for UBI?
glhaynes · 1h ago
Huge numbers of desperate, armed, unemployed people have a way of focusing the will.
sp527 · 44m ago
That's what Anduril is for
XCSme · 2h ago
I am from the EU, so I can see it happening here, or in some smaller countries. Here, you already sort of have a UBI, where you get enough social benefits to live on if unemployed.