Ask HN: Why don't LLMs replace bosses instead of engineers?

12 fzeindl 10 8/12/2025, 8:40:12 AM
I asked myself why all the talk goes into augmenting or replacing engineers instead of the bosses and let ChatGPT formulate my thoughts:

1. Engineers vs. LLMs: low tolerance for mistakes

Engineering reality: If a developer pushes code that’s subtly wrong, you can crash a service, corrupt data, or introduce security flaws.

LLMs today: Great at producing plausible-looking code, but still prone to logical gaps or hidden bugs that might not be obvious until production.

Result: You’d need heavy human oversight anyway — turning the “replacement” into more of a “babysitting” scenario, which could be more costly than just having good engineers write it themselves.

2. CEOs vs. LLMs: higher tolerance for ambiguity

CEO reality: Decisions are often based on incomplete data, lots of gut feeling, and persuasive narrative. There’s more wiggle room — a “wrong” call can sometimes be spun as “strategic” or “visionary” until results catch up.

LLMs today: Excellent at synthesizing multiple data sources, spotting patterns, and generating strategic options — all without bias toward personal ego or politics (well… except whatever biases the training data has).

Result: They could produce coherent, well-justified strategies quickly, and humans could still be the ones to communicate and enact them.

3. Why this actually makes sense

If you think of error cost:

Engineer error = immediate, measurable, costly (bug in production).

CEO error = slower to surface, more subjective, sometimes recoverable with spin.

If you think of data integration skills:

LLMs have superhuman recall and synthesis capabilities.

CEOs need exactly that skill for market intelligence, competitor analysis, and high-level decision frameworks.

So yes — in this framing, replacing CEO-level strategy generation with an LLM and keeping engineers human might actually be more practical right now. Humans would still need to do the “face work” (investor relations, internal morale), but the strategic brain could be an LLM fed with all relevant business data.

Comments (10)

muzani · 17m ago
Anthropic let it manage a vending machine. It did terribly: https://www.anthropic.com/research/project-vend-1

The less subjective answer is that LLMs lean towards the most likely solution. If you have data-driven managers, sure. If you have managers who actually need to ignore some data, it does terribly. A lot of real strategic worth is knowing when to explore the unknown. Amazon is seen as a robotic company, but they actually take into account that data could be wrong.

We're also finding that it's absolutely terrible at things like design, because it picks the popular design, when with design you often want one that stands out and looks different.

general1726 · 56m ago
Because management is ultimately about working with people and managing human resources to achieve results on tasks. If developers get replaced, management will become redundant and get replaced too.

Then only C suites remain. That's necessary because you still need to have some decision making process and some vision which you want to achieve.

The problem here is that basically anyone can set up a company of 1-5 people, buy the same AI model you are using, and start competing with you. The ultimate race to the bottom.

And of course this is only going to work for purely software companies. The moment you are working with hardware in any shape or form, you essentially can't replace your workers, be it line manufacturing, embedded development, or sysadmins. When you have workers you also need to manage them, so AI as a whole has very limited usability in those companies.

tymerry · 2h ago
*The transition is easier/possible*

In the short term, AI's perceived benefit is making existing people more efficient. Engineers being more efficient means a need for FEWER engineers. Downsizing 100 ICs to 90 is lower risk than scaling a team of 1 down to 0 (or even to a fractional CEO).

If you believe AI predictions to be directionally accurate, then we can expect to observe managers gaining more responsibilities/tasks as their efficiency goes up. A place to test this hypothesis would be management consulting companies: if the predictions hold, we should see the big 3 make layoffs and report decreased revenue. I think consulting companies are a valid proxy for this idea because they act as buffer capacity for the work you describe as CEO work.

entuno · 2h ago
Because bosses are the ones making the decisions, and not many of them are going to decide to make themselves redundant.

al_borland · 40m ago
A CEO’s poor decision can bring down an entire company. When this happens, people want someone to blame and a change is made. With an LLM CEO, how does this work? Get rid of the people who picked the model and switch to a new one? I don’t see that satisfying those impacted by the poor decision.

dileeparanawake · 2h ago
Your point about measuring errors is an interesting one. I think CEOs and business leaders are definitely very good at deflecting negative responsibility and aggregating positive outcomes. Not exclusively; I know several senior leaders who are very, very competent. But I think in business it is typically easier to look good than to do good, across most domains.

I think the ambiguity part is a bit of an illusion: lots of people who make good predictions about complex things have good, informal decision-making models. But like an LLM, a lot of their minds are black boxes, even to themselves, and therefore hard to replicate.

rovmut · 2h ago
Interesting framing around error cost. The piece that seems missing is accountability. A core function of a CEO is to be the single person ultimately responsible for a decision. If an engineer ships a bug, their manager is accountable. If an LLM hallucinates a disastrous market strategy, who gets fired? You'd still need a human to formally accept the risk, making the LLM more of an advisor.

dileeparanawake · 2h ago
In theory, CEOs have ultimate responsibility. In practice it's more complex: boards, delegation, company structures, HR, etc. remove a lot of ultimate responsibility from CEOs. The buck stops here, unless the CEO decides otherwise. Carlos Tavares is a good example of this: he got away with more screwups than many senior employees could dream of. Ditto lots of legacy autos. The board and shareholders typically have a lot of sway and delegated accountability/responsibility.

rovmut · 1h ago
Agree with your point about the distribution of responsibility and accountability. My argument was more about bosses vs. engineers, not particularly about CEOs. You can't let LLMs take decisions and blame them later if it backfires. It has to be humans who take those high-impact decisions and are accountable for the results.

dileeparanawake · 2h ago
It’s so interesting; I have asked this myself a lot. So firstly, I think this would be an excellent use of AI, but the barriers are:

1. Political: CEOs have significant purchasing power.

2. Obfuscation: engineering is relatively tightly defined, but being a CEO is often more fluid, and a lot of the decision making is wrapped in stuff like ‘gut’ instinct. There are no docs for a CEO.

3. Cultural: we treat CEOs like art and idolise their value instead of looking at them like a node that aggregates organisational data flows. To me a CEO is a little like a model router, but with more politics.

I think there’s a huge opportunity to replace CEOs, but, as in engineering, that doesn’t happen in one shot: it happens by devolving responsibilities.

I personally stepped away from the business side of running startups and small companies and into engineering, because to me the business stuff feels like BS, so perhaps I’m biased.

When I ask my CEO mates they’re obviously dogmatically convinced they are irreplaceable.

But I think the devil is in the detail. I’m a relatively junior engineer and was crapping myself about AI taking every entry-level job, until I got into it and realised there’s a lot more nuance, at least near term. Same for CEOs.

I’d love a world where we can focus on engineering outcomes, not the political crap that weighs us down.

My TLDR is I think the main barrier is political, not pure engineering.

But I suppose / hope we can re-engineer the political, with effort.