Chauffeur Knowledge and the Impending AI Crack-Up

13 points by rglover | 7 comments | 5/16/2025, 7:54:26 PM | ryanglover.net

Comments (7)

Terr_ · 14h ago
> Chauffeur Knowledge

Going into this piece, I expected an analogy where the user is like an out-of-touch wealthy person who builds a shallow model of the world from what they hear from their LLM chauffeur or golf caddy.

That is something I fear will spread, as people place too much trust in the assistant in their pocket, turning to it at the expense of other sources of information.

> That's when it hit me: this is going to change everything; but not in the utopian "everything is magical" sense, but in the "oh, God, what have we done" sense.

I think of it like asbestos, or leaded gasoline. Incredibly useful in the right situation, but used so broadly that we regret it later. (Or at least, the people who didn't make their fortunes selling it do.)

andy99 · 13h ago
This makes me think of Eternal September, which I'd say the author would argue we've reached with respect to coding.
rglover · 13h ago
I wasn't thinking about that when I wrote this, but that's an accurate take.
anon373839 · 11h ago
I think this author makes the mistake, as many people do when they project AI trends forward, of ignoring the feedback mechanisms.

> Third, and I think scariest: it means that programming (and the craft of software) will cease to evolve. We'll stop asking "is there a better way to do this" and transition to "eh, it works." Instead of software getting better over time, at best, it will stagnate indefinitely.

“Eh, it works” isn’t good enough in a competitive situation. Customers will notice if software has weird bugs, is slow, clunky to use, etc. Some bugs can result in legal liability.

When this happens, other firms will be happy to build a better mousetrap to earn the business, and that’s an incentive against stagnation.

Of course, the FAANG-type companies don't face much real competition. But their scale necessitates serious engineering effort: a bad fuck-up can be really bad, depending on what it is.

rglover · 10h ago
> Customers will notice if software has weird bugs, is slow, clunky to use, etc. Some bugs can result in legal liability.

I'd like to believe this, but some ~30 years into the popular internet, we still have bug-riddled websites and struggle to build simple software that's both usable (UX) and stable. You're right that if a company offers an SLA it's liable, but there's a wide range of software out there that isn't bound by one.

That means that as this thing unfurls, we either get a lot of broken/low-quality stuff, or even more consolidation into a few big players' hands.

> When this happens, other firms will be happy to build a better mousetrap to earn the business, and that’s an incentive against stagnation.

I agree this is likely, but the how is important. The hypothetical cost reduction of doing away with competent staff in favor of AI-augmented devs (or just agents) is too juicy not to become the norm (at least in the short term). We're already seeing major players mandate AI-driven development (and in some cases, like Salesforce, impose hiring freezes on the assumption that AI is enough for most tasks).

The optimist in me agrees with what you're saying, but my gut take is that there will be a whole boatload of irrational, careless behavior before we see any meaningful correction.

satisfice · 11h ago
The repeated use of the phrase “it works” is unhelpful. What the author means is “it appears to work.”

There is a vast difference between actually working and looking superficially like it works.

This is a massive testing problem.
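
To make that concrete (a hypothetical sketch, not from the article; the function and checks below are made up): a single happy-path test can pass while the code is still wrong at the edges.

    # Hypothetical: a function that "appears to work" under one
    # happy-path check but is broken on inputs nobody tried.
    def average(values):
        return sum(values) / len(values)

    # Superficial check: one example passes, so it "works".
    assert average([2, 4, 6]) == 4

    # Edge cases the happy-path check never exercises:
    try:
        average([])                    # empty input was never considered
    except ZeroDivisionError as err:
        print("average([]) fails:", err)

    print(average([1e308, 1e308]))     # prints inf -- silent float overflow

Neither failure surfaces until someone bothers to test past the one example that "works".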

rglover · 10h ago
That's the thing, though: it does work. Does it need a fair amount of hand-holding to get there for non-trivial tasks? Yes. But if you have a basic skill set and some patience, you can get impressive results on a shocking number of common problems.

You're right that the "working" is superficial in a one-shot sense. But if you spend the time to nudge it in the right direction, you can accomplish a lot.