> I’ve often heard, with decent reason, an LLM compared to a junior colleague.
No, they're like an extremely experienced and knowledgeable senior colleague – who drinks heavily on the job. Overconfident, forgetful, sloppy, easily distracted. But you can hire so many of them, so cheaply, and they don't get mad when you fire them!
ares623 · 1h ago
> Other forms of engineering have to take into account the variability of the world.
> Maybe LLMs mark the point where we join our engineering peers in a world on non-determinism.
Those other forms of engineering have no choice due to the nature of what they are engineering.
Software engineers already have a way to introduce determinism into the systems they build! We’re going backwards!
didericis · 1h ago
Part of what got me into software was this: no matter how complex or impressive the operation, with enough time and determination, you could trace each step and learn how a tap on a joystick led to the specific pixels on a screen changing.
There’s a beautiful invitation to learn and contribute baked into a world where each command is fully deterministic and spec-ed out.
Yes, there have always been poorly documented black boxes, but I thought the goal was to minimize those.
People don’t understand how much is going to be lost if that goal is abandoned.
pton_xd · 39m ago
Agreed. The beauty of programming is that you're creating a "mathematical artifact." You can always drill down and figure out exactly what is going on and what is going to happen with a given set of inputs. Now with things like concurrency that's not exactly true, but, I think the sentiment still holds.
The more practical question is though, does that matter? Maybe not.
didericis · 14m ago
> The more practical question is though, does that matter?
I think it matters quite a lot.
Specifically for knowledge preservation and education.
DSingularity · 24m ago
In a way this is also a mathematical artifact — after all tokens are selected through beam searching or some random sampling of likely successor tokens.
sodapopcan · 1h ago
As pertaining to software development, I agree. I've been hearing accounts (online and from coworkers) of people using LLMs to do deterministic stuff. And yet, instead of at least prompting once to "write a script to do X," they just keep prompting "do X" over and over again. Seems incredibly wasteful. It feels like there is this thought of "We are not making progress if we aren't getting the LLM to do everything. Having it write a script we can review and tweak is anti-progress." No one has said that outright, but it's a gut feeling (and it wouldn't surprise me if people have said this out loud).
tptacek · 1h ago
This is the 2025 equivalent of the people who once wrote 2000 word blog posts about how bad it was to use "cat" instead of just shell redirection.
sodapopcan · 28m ago
These are hardly equivalent. One is someone preferring one deterministic way over another. The other is more akin to arguing that it's better to ask someone to manually complete a task for you instead of caching the instructions on your computer. Now if the LLM does caching then you have more of a point, I don't have enough experience there.
adding to this, software deals with non-determinism all the time.
For example, web requests are non-deterministic. They depend, among other things, on the state of the network. They also depend on the load of the machine serving the request.
One way to think about this is: how easy is it for you to produce byte-for-byte deterministic builds of the software you're working on? If it's not trivial there's more non-determinism than is obvious.
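A crude way to check this, sketched in Python (the build command and artifact path are placeholders for whatever your project actually uses):

    import hashlib
    import subprocess

    def build_and_hash(build_cmd, artifact_path):
        # Run a clean build, then hash the resulting artifact byte-for-byte.
        subprocess.run(build_cmd, check=True)
        with open(artifact_path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Placeholders: substitute your real build command and output file.
    cmd, artifact = ["make", "clean", "all"], "bin/app"
    first = build_and_hash(cmd, artifact)
    second = build_and_hash(cmd, artifact)
    print("reproducible" if first == second else "non-deterministic build")

If the two hashes differ on back-to-back runs of the same source tree, something (timestamps, embedded paths, parallelism) is already non-deterministic before any LLM gets involved.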
anthem2025 · 5m ago
It’s not trivial largely because we didn’t bother to design deterministic builds because it didn’t seem to matter. There is not much about the actual problem that makes it difficult.
skydhash · 46m ago
Mostly, the engineering part of software is dealing with non-determinism, by avoiding it or enforcing determinism. Take something like TCP: it's all about guaranteeing that either the message is sent and received, or it is not. And we have a lot of algorithms that try to guarantee consistency of information between the elements of a system.
ares623 · 32m ago
But there is an underlying deterministic property in the TCP example. A message is either received within a timeout or not.
How can that be extrapolated to LLMs? How does a system independently know that it's arrived at a correct answer within a timeout or not? Has the halting problem been solved?
tptacek · 4m ago
You don't need to solve the halting problem in this situation, because you only need to accept a subset of valid, correct programs.
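A rough sketch of what that gate can look like (this assumes a Go project and uses go build / go test as the arbiters; the time budget is arbitrary):

    import subprocess

    def accept(candidate_dir, timeout_s=120):
        # Accept LLM output only if it compiles and its tests pass within a
        # fixed budget; a timeout or non-zero exit counts as rejection.
        # We never decide halting in general, only this bounded check.
        for cmd in (["go", "build", "./..."], ["go", "test", "./..."]):
            try:
                result = subprocess.run(cmd, cwd=candidate_dir, timeout=timeout_s)
            except subprocess.TimeoutExpired:
                return False
            if result.returncode != 0:
                return False
        return True

Rejecting some candidates that would eventually have been fine is the price; the gate only has to be trustworthy about what it lets through.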
ants_everywhere · 42m ago
Right, you handle non-determinism by applying engineering, e.g. having fail-safes, redundancies, creating robust processes, etc.
delusional · 1h ago
I would rather say it like this: Very good, very hardworking engineers spent years of their lives building the machine that raised us from the non-determinism of messy physical reality. The technology that brought us perfect, replicable, and reliable math from sending electrons through rocks has been deeply underappreciated in the "software revolution".
The engineers at TSMC, Intel, Global Foundries, Samsung, and others have done us an amazing service, and we are throwing all that hard work away.
AaronAPU · 50m ago
It was going forward when Newton discovered the beautiful, simple determinism of physics.
Was it going backwards when the probabilistic nature of quantum mechanics emerged?
Viliam1234 · 8m ago
Two words: many-world interpretation.
More seriously, this is not a fair comparison. Adding LLM output to your source code is not analogous to quantum physics; it is analogous to letting your 5-year-old child transcribe the experimentally measured values without checking, and accepting that many of them will be transcribed wrong.
BlueTemplar · 1m ago
Hopefully even Newton already had some awareness of "deterministic chaos" (or whatever terms they would have used back then)?
And on the other hand, no transistors without quantum mechanics.
sebnukem2 · 2h ago
> hallucinations aren’t a bug of LLMs, they are a feature. Indeed they are the feature. All an LLM does is produce hallucinations, it’s just that we find some of them useful.
Nice.
anthem2025 · 3m ago
Isn’t that why people argue against calling them hallucinations?
It implies that some parts of the output aren’t hallucinations, when the reality is that none of it has any thought behind it.
nine_k · 15m ago
I'd rather say that LLMs live in a world that consists entirely of stories, nothing but words and their combinations. They have no other reality. So they are good at generating more stories that would sit well with the stories they already know. But the stories are often imprecise, and sometimes contradictory, so they have to guess. Also, LLMs don't know how to count, but they know that two usually follows one, and three is usually said to be larger than two, so they can speak in a way that mostly does not contradict this knowledge. They can use tools to count, like a human who knows digits would use a calculator.
But much more than an arithmetic engine, the current crop of AI needs an epistemic engine, something that would help follow logic and avoid contradictions, to determine what is a well-established fact, and what is a shaky conjecture. Then we might start trusting the AI.
tptacek · 1h ago
In that framing, you can look at an agent as simply a filter on those hallucinations.
armchairhacker · 57m ago
This vaguely relates to a theory about human thought: that our subconscious constantly comes up with random ideas, then filters the unreasonable ones, but in people with delusions (e.g. schizophrenia) the filter is broken.
Salience (https://en.wikipedia.org/wiki/Salience_(neuroscience)), "the property by which some thing stands out", is something LLMs have trouble with. Probably because they're trained on human text, which ranges from accurate descriptions of reality to nonsense.
th0ma5 · 1h ago
Yes yes, with yet to be discovered holes
Lionga · 1h ago
Isn't an "agent" just hallucinations layered on top of other random hallucinations to create new hallucinations?
tptacek · 1h ago
No, that's exactly what an agent isn't. What makes an agent an agent is all the not-LLM code. When an agent generates Golang code, it runs the Go compiler, which is in the agent's architecture an extension of the agent. The Go compiler does not hallucinate.
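A minimal sketch of that scaffolding (llm_call is a hypothetical stand-in for whatever model API is used; the compiler is the part that can't hallucinate):

    import pathlib
    import subprocess

    def generate_go(prompt, llm_call, max_rounds=5, workdir="."):
        # llm_call: any callable that takes a prompt string and returns code.
        # The agent's non-LLM half: write the candidate out, run the real
        # Go compiler, and feed its errors back until the build succeeds.
        feedback = ""
        for _ in range(max_rounds):
            code = llm_call(prompt + feedback)
            pathlib.Path(workdir, "main.go").write_text(code)
            result = subprocess.run(["go", "build", "main.go"], cwd=workdir,
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return code  # the compiler, not the model, says this is valid Go
            feedback = "\n\nThe compiler reported:\n" + result.stderr
        raise RuntimeError("no compiling candidate within the round budget")

That loop is deliberately simplified to a single file; real agents add test runners, linters, and sandboxes as further non-LLM extensions.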
Lionga · 42m ago
The most common "agent" is letting an LLM run a while loop ("multi-step agent") [1]
[1] https://huggingface.co/docs/smolagents/conceptual_guides/int...
That's not how Claude Code works (or Gemini, Cursor, or Codex).
ninetyninenine · 1h ago
Nah I don't agree with this characterization. The problem is, the majority of those hallucinations are true. What was said would make more sense if the majority of the responses were, in fact, false, but this is not the case.
xmprt · 1h ago
I think you're both correct but have different definitions of hallucinations. You're judging it as a hallucination based on the veracity of the output. Whereas Fowler is judging it based on the method by which the output is achieved. By that judgement, everything is a hallucination because the user cannot differentiate between when the LLM is telling the truth and isn't.
This is different from human hallucinations where it makes something up because of something wrong with the mind rather than some underlying issue with the brain's architecture.
ants_everywhere · 48m ago
An LLM hallucination is defined by the truth of its output:
> In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called confabulation,[1] or delusion)[2] is a response generated by AI that contains false or misleading information presented as fact.[3][4]
You say
> This is different from human hallucinations where it makes something up because of something wrong with the mind rather than some underlying issue with the brain's architecture.
For consistency you might as well say everything the human mind does is hallucination. It's the same sort of claim. This claim at least has the virtue of being taken seriously by people like Descartes.
Even the colloquial term outside of AI is characterized by the veracity of the output.
https://en.wikipedia.org/wiki/Hallucination_(artificial_inte...
daviding · 43m ago
I get a lot of productivity out of LLMs so far, which for me is a simple good sign. I can get a lot done in a shorter time and it's not just using them as autocomplete. There is this nagging doubt that there's some debt to pay one day when it has too loose a leash, but LLMs aren't alone in that problem.
One thing I've done with some success is use a Test Driven Development methodology with Claude Sonnet (or recently GPT-5): moving the feature forward in discrete steps, with initial tests, inside the red/green loop. I don't see a lot written or discussed about that approach so far, but then reading Martin's article made me realize that the people most proficient with TDD are not really in the Venn diagram intersection of those wanting to throw themselves wholeheartedly into using LLMs to agent code. The 'super clippy' autocomplete is not the interesting way to use them; it's with multiple agents and prompt techniques at different abstraction levels - that's where you can really cook with gas. Many TDD experts take great pride in the art of code, communicating like a human and holding the abstractions in their head, so we might not get good guidance from the same set of people who helped us before. I think there's a nice green field of 'how to write software' lessons with these tools coming up, with many cautionary tales and lessons being learnt right now.
edit: heh, just saw this now, there you go - https://news.ycombinator.com/item?id=45055439
It feels like the TDD/LLM connection is implied — "and also generate tests". Though it's not canonical TDD of course. I wonder if it'll turn the tide towards tech that's easier to test automatically, like maybe SSR instead of React.
daviding · 25m ago
Yep, it's great for generating tests; so much of that is boilerplate that it feels like great value. As a super lazy developer, having all that mechanical 'stuff' spat out for me lifts a burden. Test code feels like lighter baggage when it's just churned out as part of the process: no guilt about deleting it all when what you want to do changes. That in itself is nice. Plus of course MCP things (Playwright etc) for integration testing are great.
But like you said, it was meant more TDD as 'test first' - so a sort of 'prompt-as-spec' that then produces the test/spec code first, and then go iterate on that. The code design itself is different as influenced by how it is prompted to be testable. So rather than go 'prompt -> code' it's more an in-between stage of prompting the test initially and then evolve, making sure the agent is part of the game of only writing testable code and automating the 'gate' of passes before expanding something. 'prompt -> spec -> code' repeat loop until shipped.
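Roughly, the loop I mean looks like this (ask() is a hypothetical stand-in for the model call, it assumes pytest is available, and the file names are made up):

    import subprocess

    def tdd_loop(spec, ask, max_rounds=10):
        # 'prompt -> spec -> code': the spec becomes failing tests first (red),
        # then the implementation is iterated until the tests pass (green).
        tests = ask(f"Write pytest tests only (no implementation) for: {spec}")
        with open("test_feature.py", "w") as f:
            f.write(tests)
        impl = ""
        for _ in range(max_rounds):
            result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
            if result.returncode == 0:
                return impl  # green: stop here, don't let it keep "improving"
            # Still red: hand the failure output back and ask for a new attempt.
            impl = ask("Make these pytest failures pass:\n" + result.stdout
                       + "\nCurrent implementation:\n" + impl)
            with open("feature.py", "w") as f:
                f.write(impl)
        raise RuntimeError("never went green within the round budget")

The point isn't this exact script, it's that the pass/fail gate sits outside the model, so the agent only expands scope once the current step is green.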
Scubabear68 · 29m ago
"Hallucinations aren’t a bug of LLMs, they are a feature. Indeed they are the feature".
I used to avidly read all his stuff, and I remember 20ish years ago he decided to rename Inversion of Control to Dependency Injection. In doing so, and in his accompanying blog post, he showed he didn't actually understand it at a deep level (and hence his poor renaming).
This feels similar. I know what he's trying to say, but he's just wrong. He's trying to say the LLM is hallucinating everything, but what Fowler is missing is that "hallucination" in LLM terms refers to a very specific negative behavior.
ares623 · 24m ago
As far as an LLM is concerned, there is no difference between "negative" hallucination and a positive one. It's all just tokens and embeddings to it.
Positive hallucinations are more likely to happen nowadays, thanks to all the effort going into these systems.
rancar2 · 1h ago
My favorite quote to borrow: “Furthermore I think anyone who says they know what this future will be is talking from an inappropriate orifice.”
koolba · 1h ago
Reminds me of the classic Yogi Berra, “It's tough to make predictions, especially about the future”.
Towaway69 · 59m ago
Predicting the future isn't about being correct tomorrow, rather it’s about selling something to someone today.
An insight I picked up along the way…
th0ma5 · 1h ago
To me, it is more specific to say that many futurists don't take into account the social and economic network effects of changes taking place. Many just act as if the future will continue on completely unchallenged in this current state. But if you look at someone like Kurzweil, you can see the very narrow and specific focus of a prediction, which has proved to be more informative to me as a high bar of futurism.
skhameneh · 1h ago
There are many I've worked with that idolize Martin Fowler and have treated his words as gospel. That is not me and I've found it to be a nuisance, sometimes leading me to be overly critical of the actual content. As for now, I'm not working with such people and can appreciate the article shared without clouded bias.
I like this article, I generally agree with it. I think the take is good. However, after spending ridiculous amounts of time with LLMs (prompt engineering, writing tokenizers/samplers, context engineering, and... Yes... Vibe coding) for some periods 10 hour days into weekends, I have come to believe that many are a bit off the mark. This article is refreshing, but I disagree that people talking about the future are talking "from another orifice".
I won't dare say I know what the future looks like, but the present very much appears to be an overall upskilling and rework of collaboration. Just like every attempt before, some things are right and some are simply misguided. e.g. Agile for the sake of agile isn't any more efficient than any other process.
We are headed in a direction where written code is no longer a time sink. Juniors can onboard faster and more independently with LLMs, while seniors can shift their focus to a higher level in application stacks. LLMs have the ability to lighten cognitive loads and increase productivity, but just like any other productivity enhancing tool doing more isn't necessarily always better. LLMs make it very easy to create and if all you do is create [code], you'll create your own personal mess.
When I was using LLMs effectively, I found myself focusing more on higher level goals with code being less of a time sink. In the process I found myself spending more time laying out documentation and context than I did on the actual code itself. I spent some days purely on documentation and health systems to keep all content in check.
I know my comment is a bit sparse on specifics, I'm happy to engage and share details for those with questions.
manmal · 1h ago
> written code is no longer a time sink
It still is, and should be. It's highly unlikely that you provided all the required info to the agent on the first try. The only way to fix that is to read and understand the code thoroughly and suspiciously, and to reshape it until we're sure it reflects the requirements as we understand them.
skhameneh · 59m ago
Vibe coding is not telling an agent what to do and checking back. It's an active engagement and best results are achieved when everything is planned and laid out in advance — which can also be done via vibe coding.
No, written code is no longer a time sink. Vibe coding is >90% building without writing any code.
The written code and actions are literally presented in diffs as they are applied, if one so chooses.
anskskbs · 22m ago
> It's an active engagement and best results are achieved when everything is planned and laid out in advance
The most efficient way to communicate these plans is in code. English is horrible in comparison.
When you’re using an agent and not reviewing every line of code, you’re offloading thinking to the AI. Which is fine in some scenarios, but often not what people would call high quality software.
Writing code was never the slow part for a competent dev. Agent swarming etc is mostly snake oil by those who profit off LLMs.
sfink · 52m ago
> We are headed in a direction where written code is no longer a time sink.
Written code has never been a time sink. The actual time that software developers have spent actually writing code has always been a very low percentage of total time.
Figuring out what code to write is a bigger deal. LLMs can help with part of this. Figuring out what's wrong with written code, and figuring out how to change and fix the code, is also a big deal. LLMs can help with a smaller part of this.
> Juniors can onboard faster and more independently with LLMs,
Color me very, very skeptical of this. Juniors previously spent a lot more of their time writing code, and they don't have to do that anymore. On the other hand, that's how they became not-juniors; the feedback loop from writing code and seeing what happened as a result is the point. Skipping part of that breaks the loop. "What the computer wrote didn't work" or "what the computer wrote is too slow" or even to some extent "what the computer wrote was the wrong thing" is so much harder to learn from.
Juniors are screwed.
> LLMs have the ability to lighten cognitive loads and increase productivity,
I'm fascinated to find out where this is true and where it's false. I think it'll be very unevenly distributed. I've seen a lot of silver bullets fired and disintegrate mid-flight, and I'm very doubtful of the latest one in the form of LLMs. I'm guessing LLMs will ratchet forward part of the software world, will remove support for other parts that will fall back, and it'll take us way too long to recognize which part is which and how to build a new system atop the shifted foundation.
skhameneh · 15m ago
> Figuring out what code to write is a bigger deal. LLMs can help with part of this. Figuring out what's wrong with written code, and figuring out how to change and fix the code, is also a big deal. LLMs can help with a smaller part of this.
I found exactly this is what LLMs are great at assisting with.
But, it also requires context to have guiding points for documentation. The starting context has to contain just enough overview with points to expand context as needed. Many projects lack such documentation refinement, which causes major gaps in LLM tooling (thus reducing efficacy and increasing unwanted hallucinations).
> Juniors are screwed.
Mixed, it's like saying "if you start with Python, you're going to miss lower level fundamentals" which is true in some regards. Juniors don't inherently have to know the inner workings, they get to skip a lot of the steps. It won't inherently make them worse off, but it does change the learning process a lot. I'd refute this by saying I somewhat naively wrote a tokenizer, because the >3MB ONNX tokenizer for Gemma written in JS seemed absurd. I went in not knowing what I didn't know and was able to learn what I didn't know through the process of building with an LLM. In other words, I learned hands on, at a faster pace, with less struggle. This is pretty valuable and will create more paths for juniors to learn.
Sure, we may see many lacking fundamentals, but I suppose that isn't so different from the criticism I heard when I wrote most of my first web software in PHP. I do believe we'll see a lot more Python and linguistic influenced development in the future.
> I'm guessing LLMs will ratchet forward part of the software world, will remove support for other parts that will fall back, and it'll take us way too long to recognize which part is which and how to build a new system atop the shifted foundation.
I entirely agree, in fact I think we're seeing it already. There is so much that's hyped and built around rough ideas that's glaringly inefficient. But FWIW inefficiency has less of an impact than adoption and interest. I could complain all day about the horrible design issues of languages and software that I actually like and use. I'd wager this will be no different. Thankfully, such progress in practice creates more opportunities for improvement and involvement.
bko · 1h ago
> Certainly if we ever ask a hallucination engine for a numeric answer, we should ask it at least three times, so we get some sense of the variation.
This works on people as well!
Cops do this when interrogating. You tell the same story three times, sometimes backwards. It's hard to keep track of everything if you're lying or you don't recall clearly so you can get a sense of confidence. Also works on interviews, ask them to explain a subject in three different ways to see if they truly understand.
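For the LLM half of that, the check can be mechanical. A small sketch, where ask() is a hypothetical stand-in for your model call:

    from collections import Counter

    def ask_repeatedly(ask, question, n=3):
        # Query the same question n times and see whether the answers agree.
        answers = [ask(question).strip() for _ in range(n)]
        counts = Counter(answers)
        best, votes = counts.most_common(1)[0]
        if votes == n:
            return best, "unanimous"
        if votes > n // 2:
            return best, f"majority ({votes}/{n}), treat with some suspicion"
        return None, f"no agreement: {dict(counts)}"

For numeric answers you'd probably want to normalise or compare within a tolerance rather than string-match, but the idea is the same: disagreement is the signal.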
Terr_ · 1h ago
> This works on people as well!
Only within certain conditions or thresholds that we're still figuring out. There are many cases where the more someone recalls and communicates their memory, the more details get corrupted.
> Cops do this when interrogating.
Sometimes that's not to "get sense of the variation" but to deliberately encourage a contradiction to pounce upon it. Ask me my own birthday enough times in enough ways and formats, and eventually I'll say something incorrect.
Care must also be taken to ensure that the questioner doesn't change the details, such as by encouraging (or sometimes forcing) the witness/suspect to imagine things which didn't happen.
chistev · 1h ago
Who remembers that scene on Better Call Saul between Lalo, Saul, and Kim?
nomilk · 1h ago
> We should ask the LLM the question more than once
For any macOS users, I highly recommend an Alfred workflow so you just press command + space then type 'llm <prompt>' and it opens tabs with the prompt in perplexity, (locally running) deepseek, chatgpt, claude and grok, or whatever other LLMs you want to add.
This approach satisfies Fowler's recommendation of cross referencing LLM responses, but is also very efficient and over time gives you a sense of which LLMs perform better for certain tasks.
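For those without Alfred, roughly the same fan-out can be scripted in a few lines; note the ?q= query parameters below are assumptions and each site's prefill behaviour may differ, so verify them first:

    import sys
    import urllib.parse
    import webbrowser

    # The query-string parameters are assumptions; check what each frontend accepts.
    SITES = [
        "https://www.perplexity.ai/search?q={}",
        "https://chatgpt.com/?q={}",
        "https://claude.ai/new?q={}",
    ]

    def fan_out(prompt):
        encoded = urllib.parse.quote_plus(prompt)
        for site in SITES:
            webbrowser.open_new_tab(site.format(encoded))

    if __name__ == "__main__":
        fan_out(" ".join(sys.argv[1:]))

Bind that to a hotkey (or an Alfred/Raycast script action) and you get the same cross-referencing workflow.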
anthem2025 · 8m ago
It’s funny how people acknowledge the railroads and the similarity to AI, but then jump back to comparing it to the internet when it comes to drawing conclusions.
The internet build out left massive amounts of useful infrastructure.
The railroads left us with lots of track that fell into disuse, and eventually with a complete joke of a railway system. They made a few people so rich we started calling them robber barons and talking about the Gilded Age.
Are we going to continue to use the 60 billion dollar data centers in Louisiana when the bubble bursts? Is it valuable infrastructure or just a waste of money that gets written off?
crawshaw · 1h ago
A bubble is asset prices systematically diverging from reasonable expectations of future cash flows. Bubbles are driven by financial speculation.
The claim in the blog post that all technology leads to speculative asset bubbles I find hard to believe. Where was the electricity bubble? The steel bubble? The pre-war aviation bubble? (The aviation bubble appeared decades later due to changes in government regulation.)
Is this an AI bubble? I genuinely don't know! There is a lot of real uncertainty about future cash flows. Uncertainty is not the foundation of a bubble.
I knew dot-com was a bubble because you could find evidence, even before it popped. (A famous case: a company held equity in a bubble asset, and that company had a market cap below the equity it held, because the bubble did not extend to second-order investments.)
cake_robot · 1h ago
Just taking your first example, yes I believe you could characterize a lot of the investments in electrification as speculative bubbles even though there was underlying value that was borne out: https://en.wikipedia.org/wiki/Public_Utility_Holding_Company...
Aviation: likewise there was the "Lindbergh Boom" https://en.wikipedia.org/wiki/Lindbergh_Boom which led to overspeculation and the crash of many early aviation companies
Besides electricity (answered above), there were bubbles related to steel production in its early days intertwined with the railroad bubble that drove huge investments in steel production for railroad development; when the railroad companies crashed it brought down steel as well; ultimately both survived in a more subdued form (as with the dot coms).
Before AI, we were trying to save money, but through a different technique: Prompting (overseas) humans.
After over a decade of trying that, we learned that had... flaws. So round 2: Prompting (smart) robots.
The job losses? This is just Offshoring 2.0; complete with everyone getting to re-learn the lessons of Offshoring 1.0.
the_af · 1h ago
> Prompting (overseas) humans [...] After over a decade of trying that, we learned that had... flaws.
I think this is a US-centric point of view, and seems (though I hope it's not!) slightly condescending to those of us not in the US.
Software engineering is more than what happens to US-based businesses and their leadership commanding hundreds or thousands of overseas humans. Offshoring in software is certainly a US concern (and to a lesser extent, other nations suffer it), but is NOT a universal problem of software engineering. Software engineering happens in multiple countries, and while the big money is in the US, that's not all there is to it.
Software engineering exists "natively" in countries other than the US, so any problems with it should probably (also) be framed without exclusive reference to the US.
gjsman-1000 · 1h ago
The problem isn't that there aren't high quality offshore developers - far from it. Or even high quality AI models.
The problems are inherent with outsourcing to a 3rd party and having little oversight. Oversight is, in both cases, way harder than it appears.
CuriouslyC · 1h ago
I'm sure the blacksmiths and weavers will find solace in that take. Their time will return!
insane_dreamer · 23m ago
> I’ve often heard, with decent reason, an LLM compared to a junior colleague. But I find LLMs are quite happy to say “all tests green”, yet when I run them, there are failures. If that was a junior engineer’s behavior, how long would it be before H.R. was involved?
Reminds me of a recent experience when I asked CC to implement a feature. It wrote some code that struck me as potentially problematic. When I said, "why did you do X? couldn't that be problematic?" it responded with "correct; that approach is not recommended because of Y; I'll fix it". So then why did it do it in the first place? A human dev might have made the same mistake, but it wouldn't have made the mistake knowing that it was making a mistake.
catigula · 1h ago
>I’m often asked, “what is the future of programming?” Should people consider entering software development now? Will LLMs eliminate the need for junior engineers? Should senior engineers get out of the profession before it’s too late? My answer to all these questions is “I haven’t the foggiest”
I just want to point out that this answer implicitly means that, at the very least, the profession's future is uncertain, which isn't a good sign for people with a long future orientation such as students.
mvieira38 · 1h ago
(slightly off-topic)
Students shouldn't ever have become so laser-focused on a single career path anyway, and even worse than that is how colleges have become glorified trade schools in our minds. Students should focus on studying for their classes, getting their electives and extracurriculars in, getting into clubs... Then depending on which circles they end up in, they shape their career that way. The thought that getting a Computer Science major would guarantee students a spot in the tech industry was always ridiculous, because the industry was just never structured that way, it's always been hacker groups, study groups, open source, etc. bringing out the best minds
tomku · 1h ago
It has never been anywhere close to certain, we just had 20 years of wild, unsustainable growth that encouraged people to cover their eyes and pretend the ride would go on forever. 20 years of telling everyone under the age of 30 that of course they should learn to code and that CS was the new medical or legal degree. 20 years of smugly acting like we are the inevitable future when we are, in fact, subject to the same ups and downs as every other career.
iLoveOncall · 1h ago
> One of the big problems with these surveys is that they aren’t taking into account how people are using the LLMs. From what I can tell the vast majority of LLM usage is fancy auto-complete, often using co-pilot.
This is a completely wrong assumption and negates a bunch of the points of the article...
dionian · 1h ago
> I’ve often heard, with decent reason, an LLM compared to a junior colleague. But I find LLMs are quite happy to say “all tests green”, yet when I run them, there are failures. If that was a junior engineer’s behavior, how long would it be before H.R. was involved?
A junior engineer can't write code anywhere near as fast. It's apples vs oranges. I can have the LLM rewrite the code 10 times until it's correct, and it's much cheaper than hiring an obsequious jr engineer.
xmprt · 1h ago
Junior engineers get better and learn from their mistakes. An LLM will happily make the same mistake 10 times if you try asking it to do something similar in 3 months. In the long term, I don't think LLMs actually save you time considering the amount of extra time spent reviewing/verifying its code and fixing tech debt.
swagasaurus-rex · 1h ago
If an AI can’t write the code after two attempts, I’ve never had success trying ten times
lubujackson · 1h ago
I like the idea that AI usage comes down to a measurement of "tolerances". With enough specificity, LLMs will 100% return what you want. The goal is to find the happy tolerance between "acceptable" and "I did it myself" via prompts.
manmal · 1h ago
> With enough specificity, LLMs will 100% return what you want.
By now I’m sure it won’t. Even if you provide the expected code verbatim, LLMs might go on a side quest to “improve” something.
krainboltgreene · 2h ago
> All major technological advances have come with economic bubbles, from canals and railroads to the internet.
Is this actually correct? I don't see any evidence for an "airflight bubble" or a "car bubble" or a "loom bubble" at those technologies' invention. Also the "canal bubble" wasn't about the technology, it was about speculation on a series of big canals, but we had been making canals for a long time. More importantly, even if it was correct, there are plenty of bubbles (if not significantly more) around things that didn't have value or tech that didn't matter.
sfink · 1h ago
Apologies for arguing from first principles, but for anything that spurs a lot of activity, the only alternative to a bubble is this: people ramp up investment and activity and enthusiasm only as much as the underlying thing can handle, then gradually taper off the increase and gently level off at the equilibrium "carrying capacity" of the new technology.
Does that sound like any human, ever, to you?
(The only time there isn't a bubble is when the thing just isn't that interesting to people and so there's never a big wave of uptake in the first place.)
krainboltgreene · 28m ago
If you were right then we'd have a lot more than a dozen listed bubbles on wikipedia.
That's an absurd framing for a cute quip.
mmmm2 · 2h ago
The Intelligent Investor by Graham talks about investors putting so much money into airlines and air freight that it became impossible to make a return. I don't know if you would call that a bubble, maybe just over exuberance.
tptacek · 1h ago
I don't know enough about the early history of the airline industry but there was very definitely a long series of huge bubbles in the railroad industry.
Marazan · 1h ago
The AI bubble also isn't about the technology.
atleastoptimal · 1h ago
There are only 3 things that I have strong empirical evidence for with respect to LLMs
1. Routinely, some task or domain of work that some expert claims LLMs will be able to do, LLMs start being able to reliably perform within 6 months to a year, if they haven't already
2. Whenever AI gets better, people move the goalposts regarding what “intelligence” counts as
3. Still, LLM’s reveal that there is an element to intelligence that is not orthogonal to the ability to do well on tests or benchmarks