Building AI products in the probabilistic era

79 points by sdan · 8/21/2025, 6:42:10 PM · giansegato.com ↗

Comments (46)

ankit219 · 2h ago
Building with non-deterministic systems isn't new, and it doesn't take a scientist — though people with experience of such systems are fewer in number today. You saw the same thing in TCP/IP development, where we ended up building systems that assumed the randomness and made sure it wasn't passed on to the next layer. And given the latency of earlier networks, there was no way networked games could be deterministic.
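
Roughly the trick TCP uses: absorb the randomness with acknowledgements and retries, so the layer above sees reliability. A minimal stop-and-wait sketch in Python (the lossy channel is simulated; names are mine, purely illustrative):

    import random

    def unreliable_send(packet):
        """Stand-in for a lossy link: drops ~30% of packets."""
        return random.random() > 0.3  # True = ACK came back

    def reliable_send(packet, max_retries=10):
        """Stop-and-wait ARQ: retry until acknowledged, so the caller
        never sees the underlying packet loss."""
        for _ in range(max_retries):
            if unreliable_send(packet):
                return  # delivered
        raise TimeoutError("link appears to be down")

    # The next layer up gets determinism (delivery, or a clear error)
    # out of a probabilistic channel.
    reliable_send("hello")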
pdhborges · 3h ago
I will believe this theory if someone shows me that the ratio of scientists to engineers on the leading teams of the leading companies deploying AI products is greater than 1.
layer8 · 2h ago
I don’t think the dichotomy between scientists and engineers that’s being established here is making much sense in the first place. Applied science is applied science.
thorum · 59m ago
I like this framing, but I don’t think it’s entirely new to LLMs. Humans have been building flexible, multi-purpose tools and using them for things the original inventor or manufacturer didn’t think of since before the invention of the wheel. It’s in our DNA. Our brains have been shaped by a world where that is normal.

The rigidness and near-perfect reliability of computer software is the unusual thing in human history, an outlier we’ve gotten used to.

therobots927 · 50m ago
“The rigidness and near-perfect reliability of computer software is the unusual thing in human history, an outlier we’ve gotten used to.”

Ordered approximately by recency:

Banking? Clocks? Roman aqueducts? Mayan calendars? The sun rising every day? Predictable rainy and dry seasons?

How is software the outlier here?

patrickscoleman · 1h ago
Great read. We've been seeing some wild emergent behavior at Rime (TTS voice AI) too, e.g. training the model to <laugh> and it being able to <sigh>.
bithive123 · 1h ago
It became evident to me while playing with Stable Diffusion that it's basically a slot machine. A Skinner box with a variable reinforcement schedule.

Harmless enough if you are just making images for fun. But probably not an ideal workflow for real work.

diggan · 12m ago
> It became evident to me while playing with Stable Diffusion that it's basically a slot machine.

It can be, and usually is by default. If you fix the seed and everything else remains the same, you'll get deterministic output. A slot machine implies you keep putting in the same thing and get random good/bad outcomes; that's not really true for Stable Diffusion.
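
For example, with Hugging Face's diffusers library (a sketch; the model id and settings are just assumptions): fix the seed and the same prompt yields the same image every run.

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Same seed + same prompt + same settings => identical output.
    gen = torch.Generator("cuda").manual_seed(42)
    image = pipe("a watercolor fox", generator=gen).images[0]

    # Re-running with manual_seed(42) reproduces the image exactly;
    # leaving the generator unseeded is what makes it a slot machine.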

A4ET8a8uTh0_v2 · 1h ago
<< But probably not an ideal workflow for real work.

Hmm. Ideal is rarely an option, so I have to assume you are being careful about phrasing.

Still, despite it being a black box, one can still tip the odds in one's favor, so the real question is: what is considered 'real work'? I personally would define that as whatever you are being paid to do. If that premise is accepted, then the tool is not the issue, despite its obvious handicaps.

therobots927 · 2h ago
This is pure sophistry and the use of formal mathematical notation just adds insult to injury here:

“Think about it: we’ve built a special kind of function F' that for all we know can now accept anything — compose poetry, translate messages, even debug code! — and we expect it to always reply with something reasonable.”

This forms the axiom from which the rest of the article builds its case, and at each step the reasoning gets fuzzier. Take this, for example:

“Can we solve hallucination? Well, we could train perfect systems to always try to reply correctly, but some questions simply don't have "correct" answers. What even is the "correct" when the question is "should I leave him?".”

Yes of course relationship questions don’t have a “correct” answer. But physics questions do. Code vulnerability questions do. Math questions do. I mean seriously?

The most disturbing part of my tech career has been witnessing the ability of many highly intelligent and accomplished people to fool themselves with faulty yet complex reasoning. The fact that this article is written in defense of chatbots that ALSO have complex and flawed reasoning just drives home my point. We’re throwing away determinism just like that? I’m not saying future computing won’t be probabilistic, but the argument “LLMs are probabilistic, therefore they are the future of computing” can only be made by someone with an incredibly strong prior on LLMs.

I’d recommend Baudrillard’s work on hyperreality. This AI conversation could not be a better example of the loss of meaning. I hope this dark age doesn’t last as long as the last one. I mean, just read this conclusion:

“It's ontologically different. We're moving away from deterministic mechanicism, a world of perfect information and perfect knowledge, and walking into one made of emergent unknown behaviors, where instead of planning and engineering we observe and hypothesize.”

I don’t actually think the above paragraph makes any sense, does anyone disagree with me? “Instead of planning we observe and hypothesize”?

That’s called the scientific method. Which is a PRECURSOR to planning and engineering. That’s how we built the technology we have today. I’ll stop now because I need to keep my blood pressure low.

AgentMatt · 1h ago
> I’d recommend Baudrillards work on hyperreality.

Any specific piece of writing you can recommend? I tried reading Simulacra and Simulation (English translation) a while ago and I found it difficult to follow.

therobots927 · 1h ago
I would actually recommend the YouTube channel Plastic Pills. This is a great video to start with: https://youtu.be/S96e6TdJlNE?si=gSVzXyyBq7t_q0Xp
rexer · 2h ago
I read the full article (really resonated with it, fwiw), and I'm struggling to understand the issues you're describing.

> Yes of course relationship questions don’t have a “correct” answer. But physics questions do. Code vulnerability questions do. Math questions do. I mean seriously?

Can you say more? It seems to me the article says the same thing you are.

> I don’t actually think the above paragraph makes any sense, does anyone disagree with me? “Instead of planning we observe and hypothesize”?

I think the author is drawing a connection to the world of science, specifically quantum mechanics, where the best way to make progress has been to describe and test theories (as opposed to math where we have proofs). Though it's not a great analog since LLMs are not probabilistic in the same way quantum mechanics is.

In any case, I appreciated the article because it talks through a shift from deterministic to probabilistic systems that I've been seeing in my work.

aredox · 2h ago
Again there is a match between programs and the structures that create them (a.k.a. Conway's law). This society not only tolerates but embraces bullshit; it has elected a complete con man, and now it is sinking billions of dollars into building universal bullshit machines.
therobots927 · 2h ago
Exactly right. LLMs are a natural product of our post truth society. I’ve given up hope that things get better but maybe they will once the decline becomes more tangible. I just hope it involves less famine than previous systemic collapses.
nutjob2 · 1h ago
> I’m not saying future computing won’t be probabilistic

Current and past computing has always been probabilistic in part; that doesn't mean it will become 100% so. Almost all of the implementation of LLMs is deterministic, except the part that is deliberately randomized, and its output is used in the same way. Humans combine the two approaches as well. Even reality is a combination of quantum uncertainty at a low level and very deterministic physics everywhere else.
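
Concretely (a toy sketch, not any particular model's code): the forward pass yields a fixed distribution, and randomness only enters at the sampling step.

    import torch

    # Stand-in for the output of a (fully deterministic) forward pass.
    logits = torch.tensor([2.0, 1.0, 0.1])

    # Deterministic decoding: always picks the same token.
    greedy_token = logits.argmax().item()

    # Probabilistic decoding: sample from softmax(logits / temperature).
    probs = torch.softmax(logits / 0.8, dim=-1)
    sampled_token = torch.multinomial(probs, num_samples=1).item()

    # Drop the sampling (or take temperature -> 0) and the randomness
    # disappears; it was never in the weights or the forward pass.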

> We're moving away from deterministic mechanicism, a world of perfect information and perfect knowledge, and walking into one made of emergent unknown behaviors, where instead of planning and engineering we observe and hypothesize.

The hype machine always involves pseudo-scientific babble and this is a particularly cringey example. The idea that seems to be promoted, that AI will be god-like and that therein we'll find all truth and knowledge, is beyond delusional.

It's a tool, like all other tools. Just like we see faces in everything, we're also very susceptible to language (especially our own, consumed and regurgitated back to us) from a very neat chatbot.

AI hype is borderline mass hysteria at this point.

A4ET8a8uTh0_v2 · 54m ago
<< The idea that seems to be promoted, that AI will be god-like and that therein we'll find all truth and knowledge, is beyond delusional.

I did see strands of it, and I will admit that it made me hesitate over what could await us beyond the horizon.

<< AI hype is borderline mass hysteria at this point.

It is absolutely an interesting symptom of something within us. I wouldn't call it yearning, but something akin to drive.

I personally find it weirdly useful. But then I occasionally stop myself to check whether it has just managed to convince me it is being useful, and I am not sure how common it is for people to run its output through some criticism to uncover the more obvious issues with its reasoning.

To your main point, even if you are right, there is a real push to use it. As in, last week my boss explicitly told us to use the company's internal tool. So... I do. In the meantime, I am looking for an exit.

therobots927 · 53m ago
“The hype machine always involves pseudo-scientific babble and this is a particularly cringey example.”

Thanks for confirming. As crazy as the chatbot fanatics are, hearing them talk makes ME feel crazy.

falcor84 · 2h ago
> Yes of course relationship questions don’t have a “correct” answer. But physics questions do. Code vulnerability questions do. Math questions do. I mean seriously?

But as per Gödel's incompleteness theorem and the Halting Problem, math questions (and consequently physics and CS questions) don't always have an answer.

therobots927 · 2h ago
Providing examples of questions without correct answers does not prove that no questions have correct answers. Nor that its hallucinations aren't problematic when it provides explicitly incorrect answers. The author is just avoiding the hallucination problem entirely by saying "well, sometimes there is no correct answer".
layer8 · 2h ago
There is a truth of the matter regarding whether a program will eventually halt or not, even when there is no computable proof for either case. Similar for the incompleteness theorems. The correct response in such cases is “I don’t know”.
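
That three-valued answer is easy to make concrete: a toy bounded checker in Python (my sketch, not anything from the article) that treats "I don't know" as a first-class result:

    from multiprocessing import Process

    def halts_within(target, seconds):
        """True if target() finishes within `seconds`; None ("I don't
        know") otherwise. It never makes a false claim."""
        p = Process(target=target)
        p.start()
        p.join(seconds)
        if p.is_alive():
            p.terminate()
            return None  # the honest answer
        return True

    def forever():
        while True:
            pass

    if __name__ == "__main__":
        print(halts_within(forever, 1.0))  # None -- "I don't know"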
therobots927 · 2h ago
You know something I don’t hear a lot from chatGPT? “I don’t know”
A4ET8a8uTh0_v2 · 52m ago
True. Seemingly, based on my experiences, the instructions are to keep the individual engaged for as long as possible. I hate to say it, but it works too.
novok · 1h ago
What exactly was rewritten in 3 weeks at Replit? Literally everything? The agent part?
ipdashc · 2h ago
While this article is a little overenthusiastic for my taste, I think I agree with the general idea of it - and it's always kind of been my pet peeve when it comes to ML. It's a little depressing to think that's probably where the industry is heading. Does anyone feel the same way?

A lot of the stuff the author says resonates deeply, but like, the whole determinism thing is why I liked programming and computers in the first place. They are complicated but simple; they run on straightforward, man-made rules. As the article says:

> Any good engineer will know how the Internet works: we designed it! We know how packets of data move around, we know how bytes behave, even in uncertain environments like faulty connections.

I've always loved this aspect of it. We humans built the entire system, from protocols down to transistors (and the electronics/physics is so abstracted away it doesn't matter). If one wants to understand or tweak some aspect of it, with enough documentation or reverse engineering, there is nothing stopping you. Everything makes sense.

The author is spot on; every time I've worked with ML it feels more like you're supposed to be a scientist than an engineer, running trials and collecting statistics and tweaking the black box until it works. And I hate that. Props to those who can handle real fields like biology or chemistry, right, but I never wanted to be involved with that kind of stuff. But it seems like that's the direction we're inevitably going.

ACCount37 · 1h ago
ML doesn't work like programming because it's not programming. It just happens to run on the same computational substrate.

Modern ML is at this hellish intersection of underexplored math, twisted neurobiology and applied demon summoning. An engineer works with known laws of nature - but the laws of machine learning are still being written. You have to be at least a little bit of a scientist to navigate this landscape.

Unfortunately, the nature of intelligence doesn't seem to yield itself to simple, straightforward, human-understandable systems. But machine intelligence is desirable. So we're building AIs anyway.

mentalgear · 3h ago
> After decades of technical innovation, the world has (rightfully) developed some anti-bodies to tech hype. Mainstream audiences have become naturally skeptical of big claims of “the world is changing”.

Well, it took about 3 years of non-stop AI hype from the industry and press (and constant ignoring of actual experts) until finally the perception seems to have shifted in recognising it as another bubble. So I wouldn't say any lessons were learned. Get ready for the next bubble, when the crypto grifters that moved to "AI" will soon move on to the NEXT-BIG-THING!

brookst · 29m ago
A technology's long term value has absolutely zero relationship to whether there is a bubble at any moment. Real estate has had many bubbles. That doesn't mean real estate is worthless, or that it won't appreciate.

Two propositions that can both be true, and which I believe ARE both true:

1. AI is going to change the world and eat many industries to an even greater degree than software did

2. Today, AI is often over-hyped and some combo of grifters, the naive, and gamblers are driving a bubble that will pop at some point

ACCount37 · 58m ago
There's been non-stop talk of "AI bubble" for 3 years now. Frontier AI systems keep improving in the meanwhile.

Clearly, a lot of people very desperately want AI tech to fail. And if there is such a strong demand for "tell me that AI tech will fail", then there will be hacks willing to supply. I trust most of those "experts" as far as I can throw them.

ivape · 2h ago
”… finally the perception seems to have shifted in recognising it as another bubble”

Who recognized this exactly? The MIT article? Give me a break. NVDA was $90 this year, was that the world recognizing AI was a bubble? No one is privy to anything when it comes to this. Everyone is just going to get blindsided again and again every time they sleep on this stuff.

lacy_tinpot · 2h ago
Is it really "hype" if like 100s of millions of people are using llms on a daily basis?
PaulRobinson · 2h ago
It’s not the usage that’s the problem. It’s the valuations.
nutjob2 · 1h ago
Absolutely, it's just that some are kidding themselves as to what it can do now and in the future.
pmg101 · 2h ago
The dot-com bubble burst but I'm betting you visited at least one of those "websites" they were hyping today.
failiaf · 3h ago
(unrelated) what's the font used for the cursive in the article? the heading is ibm plex serif and the content dm mono, but the cursive font is simply labeled as dm mono which isn't accurate
nbbaier · 3h ago
Seems to be Dank Mono Regular Italic: https://philpl.gumroad.com/l/dank-mono
failiaf · 3h ago
oh! i mistook 'dm' to be 'dm mono', but this appears to be correct
leutersp · 3h ago
Chrome Dev console shows that the italics font is indeed named "dm", just like the rest of the content. It is not really a cursive; only a few letters are stylized ("f", "s" and "l").

It is possible (and often desirable) to use different WOFF fonts for italics, and they can look quite different from the standard font.

AIorNot · 3h ago
From the article:

“We have a class of products with deterministic cost and stochastic outputs: a built-in unresolved tension. Users insert the coin with certainty, but will be uncertain of whether they'll get back what they expect. This fundamental mismatch between deterministic mental models and probabilistic reality produces frustration — a gap the industry hasn't yet learned to bridge.”

And all the news today around AI being a bubble -

We’re still learning what we can do with these models and how to evaluate them, but industry and capitalism force our hand into building sellable products rapidly.

CGMthrowaway · 2h ago
It's like putting money into a (potentially) rigged slot machine
ACCount37 · 1h ago
It's like paying a human to do something.

Anyone who thinks humans are reliable must have never met one.

hodgehog11 · 3h ago
adidoit · 1h ago
No it isn't... the author is talking about products and building with a completely different mindset from deterministic software.

The bitter lesson is about model level performance improvements and the futility of scaffolding in the face of search and scaling.

hodgehog11 · 48m ago
It isn't clear to me why these are so different. The alternative mindset to deterministic software is to use probabilistic models. The common mentality is that deterministic software takes developer knowledge into account. This becomes less effective in the big data era; that's the bitter lesson, and that's why the shift is taking place. I'm not saying that the article is the bitter lesson restated, but I am saying that this is a realisation of that lesson.
greymalik · 2h ago
How so?
chaos_emergent · 2h ago
Author advocates for building general-purpose systems that can accomplish goals within some causal boundary given relevant constraints, versus highly deterministic logical flows created from priors like intuition or user research.

Parallel with the bitter lesson being that general-purpose algorithms that use search and learning, leveraged by increasing computational capacity, tend to beat out specialized methods that exploit intuition about how cognitive processes work.