Ask HN: If AI is intelligent, why do we still need programming?
6 alwinaugustin 4 4/25/2025, 7:53:14 AM
There's a lot of talk about "AI code generation," but if AI were truly intelligent—able to understand and solve problems autonomously—would we still need programming at all?
Programming is how we translate understanding into instructions. If AI had intelligence, wouldn’t it just solve problems directly, without needing prompts or code?
Current tools still depend on human guidance and structured inputs. So is this really “intelligence,” or are we anthropomorphizing what’s essentially pattern-based automation?
Should we start calling it what it is—statistical code synthesis—rather than framing it as "intelligent" code generation?
2. Most human software developers cannot architect or write original applications on their own. We are extremely far away from software doing any of that, especially from doing it better than the few humans who can do it.
3. Most human developers cannot measure things. I know we are all taught to use rulers as little children in school; nonetheless, most developers cannot gather data about the software they work on. Unexpectedly, AI is even worse at this than humans, which is an astonishingly critical failure.
4. LLMs hallucinate at a current rate of at least 1 in 20 inquiries, and as often as 1 in 6. That is a tremendous amount of risk to accept. Humans who make mistakes at that frequency, without regard for the harm caused, tend to go to jail for fraud.
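To put those rates in perspective, here is a back-of-the-envelope sketch (assuming the 1-in-20 and 1-in-6 figures above and a fixed, independent per-response error rate, both simplifying assumptions) of how the risk compounds over a multi-query session:

    # Probability that at least one of n LLM responses contains a
    # hallucination, assuming a fixed per-response error rate p and
    # independent errors (simplifying assumptions, not measured values).
    def session_risk(p: float, n: int) -> float:
        return 1 - (1 - p) ** n

    for p in (1 / 20, 1 / 6):
        print(f"p={p:.3f}: 10 queries -> {session_risk(p, 10):.0%}, "
              f"50 queries -> {session_risk(p, 50):.0%}")
    # p=0.050: 10 queries -> 40%, 50 queries -> 92%
    # p=0.167: 10 queries -> 84%, 50 queries -> 100%

Even at the optimistic 1-in-20 rate, roughly 40% of ten-query sessions contain at least one hallucination.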
In summary, AI is currently really good at writing code that is only one or two layers more abstract than copy/paste. That is actually enough to entirely replace a great many, possibly most, developers. Business is not willing to take that step, however, because low trust in AI and its high failure rate introduce more cost than the value returned justifies at that level of risk.
A safer strategy is to select only those human developers capable of writing original software solutions, and/or to train humans to do so. Businesses have historically been unwilling to do this due to their inability to account for bias in selection and to retain employees. Given that historic inability to commit to selecting and training humans, it is likewise safe to assume business will not commit to an AI solution so frequently prone to critical failure.