Ask HN: Are AIs intentionally weak at debugging their code?

5 points by amichail | 6 comments | 5/29/2025, 3:32:42 PM
Maybe this is done to encourage software engineers to understand the code that AIs write?

Comments (6)

not_your_vase · 15h ago
Have you noticed that Microsoft, Google and Apple software are still just as full of bugs as they were 5 years ago, if not more - even though all of them are all-in on AI, pedal to the metal? If LLMs would actually understand the code with human-like intelligence, then it would be a few minutes only to go through all open bug tickets, evaluate them, and to fix all the valid bugs, and reply to the invalid reports.

But to this day the best we have are the (unfortunately useless) volunteer replies on the relevant help forums. (And hundreds of unanswered GitHub issues per project.)

superconduct123 · 13h ago
The emperor has no clothes
Pinkthinker · 11h ago
When you think of all the efforts that humans undergo to produce structured pseudo code for their own flawed software, why would we be surprised that an AI would struggle with unstructured text prompts? LLMs will never get there. You need a way to tell the computer exactly what you want, ideally without having to spell out the logic.
amichail · 11h ago
Sometimes the prompt is fine, but the generated code has bugs.

So you tell the AI about the bugs, and it tries to fix them - and sometimes fails.

I don't think LLMs even try to debug their code by running it in the debugger.
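(The kind of loop being described - actually running the code and feeding the failure back - is mechanically simple. Here's a minimal sketch of the "execute and capture the traceback" half; the step where the error text is sent back to a model is stubbed out, since no particular API is assumed.)

```python
import subprocess
import sys
import tempfile

def run_and_capture(code: str) -> tuple[bool, str]:
    """Run a Python snippet in a subprocess; return (succeeded, stderr).

    The stderr text is what a debugging agent could feed back to the
    model as evidence, instead of asking it to guess at the bug.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        [sys.executable, path],
        capture_output=True, text=True, timeout=10,
    )
    return result.returncode == 0, result.stderr

# A deliberately buggy snippet: division by zero.
ok, err = run_and_capture("print(1 / 0)")
# ok is False; err contains the full ZeroDivisionError traceback,
# which an agent loop would append to its next prompt (stubbed here).
print(ok, "ZeroDivisionError" in err)
```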

Jeremy1026 · 8h ago
If they do a bad job writing it, what makes you think they'd be good at debugging it? If they could debug it, they'd just write it right the first time.
apothegm · 16h ago
Uh, no. OpenAI and Anthropic and Google and co really, really, really DNGAF whether or not you understand the code their LLMs write for you.

LLMs are not capable of reasoning or following code flow; they're next-token predictors. They've become astonishingly good at predicting next tokens, to the point that they sometimes appear to be reasoning. But they can't actually reason.