The actual title of the article is: “AI doesn't make devs as productive as they think, study finds”
A direct quote from one of the study's authors, as given in the article:
> The researchers are keen to highlight that the results don’t conclude that AI assistance hinders developer productivity. “We certainly would discourage anyone from interpreting these results as: ‘AI slows down developers’,” says Rush. Instead, the authors say it could be used to help better inform people how to use AI appropriately.
Although the headline is not completely accurate, I do think that AI in general encourages superficiality at the expense of depth, and this will only continue in the software world. AI was a mistake.
scottLobster · 4h ago
Like all tools it depends on how you use it and your prior knowledge.
I've found AI to be extremely useful for providing example code customized to my situation when using new libraries.
But for debugging, as appears to be the case in this study, I don't see how an LLM is much more helpful than static analysis. It can't build, run, or test the code; it can't experiment; it can't construct a meaningful simulation of the running code to do any of the above. Might be good for a first pass, but otherwise you're back to basics.
nine_k · 4h ago
People working on familiar codebases for years, using well-understood, sometimes even polished workflows, are hard to speed up dramatically.
People dabbling in unfamiliar areas, or people doing repetitive work that hasn't been otherwise automated, seem to gain the most from AI coding tools. LLMs are good at fast web search, adaptive Stack Overflow copy-pasting, and quickly coming up with semi-trivial text transforms.
CamperBob2 · 4h ago
AI tools, or incompetent management forcing devs to use tools they wouldn't otherwise use for the jobs at hand?
I've heard some ridiculous stories on here... things like "We're now required to generate X% of our code with AI." That would torpedo my productivity regardless of whether the tool is AI, the latest/greatest VCS or IDE or stack, hiring gratuitous consultants, or ripping code straight out of Stack Overflow.
eqvinox · 4h ago
"… These issues are then
randomized to one or the other condition via a simulated fair coin flip. If AI is allowed, developers can use any AI tools or models they choose, including no AI tooling if they expect it to not be helpful. …"
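For reference, a minimal sketch of what that per-issue coin flip could look like (the function name, issue IDs, and seed below are made up for illustration; the study's actual tooling isn't described beyond the quote):

```python
import random

def assign_conditions(issue_ids, seed=None):
    """Hypothetical sketch: independently assign each issue to
    'AI allowed' or 'AI disallowed' via a simulated fair coin flip,
    as the quoted methodology describes."""
    rng = random.Random(seed)
    return {
        issue_id: "AI allowed" if rng.random() < 0.5 else "AI disallowed"
        for issue_id in issue_ids
    }

# Example issue IDs and seed are invented for the sketch.
assignments = assign_conditions(["issue-101", "issue-102", "issue-103"], seed=42)
for issue, condition in assignments.items():
    print(issue, "->", condition)
```

The key point is that randomization happens per issue, not per developer, so each developer works under a mix of both conditions.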
https://news.ycombinator.com/item?id=44522772 (750 points, 478 comments)
https://news.ycombinator.com/item?id=44526912 (276 points, 272 comments)