Ask HN: Why don't LLMs replace bosses instead of engineers?
1. Engineers vs. LLMs: low tolerance for mistakes
Engineering reality: If a developer pushes code that’s subtly wrong, you can crash a service, corrupt data, or introduce security flaws.
LLMs today: Great at producing plausible-looking code, but still prone to logical gaps or hidden bugs that might not be obvious until production.
Result: You’d need heavy human oversight anyway — turning the “replacement” into more of a “babysitting” scenario, which could be more costly than just having good engineers write it themselves.
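To make “subtly wrong” concrete, here’s a minimal, hypothetical Python sketch of the kind of bug that looks plausible in review and passes a quick manual test, yet quietly corrupts state across calls (the function name is invented for illustration):

    def collect_events(event, batch=[]):    # bug: the default list is created once
        batch.append(event)
        return batch

    a = collect_events("login")
    b = collect_events("logout")            # expected ["logout"]
    print(a, b)                             # ['login', 'logout'] ['login', 'logout']

    # The usual fix:
    def collect_events_fixed(event, batch=None):
        if batch is None:
            batch = []
        batch.append(event)
        return batch

Nothing here throws an error; the output is simply wrong, which is why the oversight cost stays high.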
2. CEOs vs. LLMs: higher tolerance for ambiguity
CEO reality: Decisions are often based on incomplete data, lots of gut feeling, and persuasive narrative. There’s more wiggle room — a “wrong” call can sometimes be spun as “strategic” or “visionary” until results catch up.
LLMs today: Excellent at synthesizing multiple data sources, spotting patterns, and generating strategic options — all without bias toward personal ego or politics (well… except whatever biases the training data has).
Result: They could produce coherent, well-justified strategies quickly, and humans could still be the ones to communicate and enact them.
3. Why this actually makes sense
If you think of error cost:
Engineer error = immediate, measurable, costly (bug in production).
CEO error = slower to surface, more subjective, sometimes recoverable with spin.
If you think of data integration skills:
LLMs have superhuman recall and synthesis capabilities.
CEOs need exactly that skill for market intelligence, competitor analysis, and high-level decision frameworks.
So yes — in this framing, replacing CEO-level strategy generation with an LLM and keeping engineers human might actually be more practical right now. Humans would still need to do the “face work” (investor relations, internal morale), but the strategic brain could be an LLM fed with all relevant business data.
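As a thought experiment only, here’s a minimal sketch of that “strategic brain” loop. The data feed, the llm() placeholder, and all field names are assumptions invented for illustration, not a real API:

    import json

    def gather_business_data():
        # Stand-ins for real feeds: finance, CRM, market intelligence, etc.
        return {
            "revenue_qoq": -0.03,
            "churn": 0.07,
            "competitor_launches": ["FooCorp agent platform"],
            "runway_months": 14,
        }

    def llm(prompt: str) -> str:
        # Placeholder for whichever model endpoint you would actually use.
        raise NotImplementedError

    def propose_strategies() -> str:
        snapshot = gather_business_data()
        prompt = (
            "You are the strategy function of the company. Given this "
            "snapshot, propose three options with risks and tradeoffs:\n"
            + json.dumps(snapshot, indent=2)
        )
        # Humans still do the "face work": review, decide, communicate.
        return llm(prompt)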
The less subjective answer is that LLMs lean toward the most likely solution. If you have data-driven managers, sure, it works. If you have managers who actually need to ignore some of the data, it does terribly. A lot of real strategic worth lies in knowing when to explore the unknown. Amazon is seen as a robotic company, but they actually account for the possibility that their data is wrong.
We're also finding that it's absolutely terrible at things like design, because it picks the popular design, when in design you often want one that stands out and looks different.
Then only the C-suite remains. That's necessary because you still need some decision-making process and some vision you want to achieve.
The problem here is that basically anyone can set up a company of 1-5 people, buy the same AI model you are using, and start competing with you. The ultimate race to the bottom.
And of course this is going to work only for purely software companies. The moment you are working with hardware in any shape or form, you essentially can't replace your workers, be it line manufacturing, embedded development, or sysadmin work. When you have workers you also need to manage them, so AI as a whole has very limited usability in those companies.
In the short term, AI's perceived benefit is making existing people more efficient. Engineers being more efficient means a need for FEWER engineers. Downsizing 100 ICs to 90 is lower risk than scaling a team of 1 down to 0 (or even to a fractional CEO).
If you believe AI predictions to be directionally accurate, then we can expect to observe managers gaining more responsibilities/tasks as their efficiency goes up. A place to test this hypothesis would be management consulting companies: if it holds, we should see the big 3 make layoffs and post decreased revenue. I think consulting companies are a valid proxy here because they act as buffer capacity for the work you describe as CEO work.
I think the ambiguity part is a bit of an illusion: lots of people who make good predictions on complex things have good, informal decision-making models. But like an LLM, a lot of their minds are black boxes, even unto themselves, and therefore hard to replicate.
1. Political: CEOs have significant purchasing power.
2. Obfuscation: engineering is relatively tightly defined, but being a CEO is often more fluid, and a lot of the decision making is wrapped in stuff like ‘gut’ instinct. There are no docs for a CEO.
3. Cultural: we treat CEOs like art and idolise their value instead of looking at them as a node that aggregates organisational data flows. To me a CEO is a little like a model router, but with more politics (a toy sketch below).
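A toy version of that router analogy, with every signal name, threshold, and decision invented for illustration:

    def ceo_as_router(signals: dict) -> str:
        # Collapse many noisy organisational data flows into one decision.
        # The real routing table is informal, undocumented, and partly
        # rationalised after the fact; this one is purely made up.
        if signals.get("burn_rate", 0) > 0.8:
            return "cut costs"
        if signals.get("competitor_moves", 0) > 3:
            return "accelerate roadmap"
        if signals.get("attrition", 0) > 0.15:
            return "invest in retention"
        return "stay the course"

    print(ceo_as_router({"burn_rate": 0.9}))  # -> cut costs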
I think there’s a huge opportunity to replace CEOs, but as in engineering, that doesn’t happen in one shot; it happens by devolving responsibilities.
I personally stepped down from the business side of running startups and small companies and moved into engineering, because to me the business stuff feels like BS, so perhaps I’m biased.
When I ask my CEO mates they’re obviously dogmatically convinced they are irreplaceable.
But I think the devil is in the detail. I’m a relatively junior engineer and was crapping myself about AI taking every entry-level job, until I got into it and realised there’s a lot more nuance, at least near term. Same goes for CEOs.
I’d love a world where we can focus on engineering outcomes, not the political crap that weighs us down.
My TLDR is I think the main barrier is political, not pure engineering.
But I suppose / hope we can re-engineer the political, with effort.