You share a house with Einstein, Hawking and Tao

23 points · FaisalAbid · 5/26/2025, 4:12:22 PM · faisalabid.com (50 comments)

Comments (50)

gjm11 · 1d ago
I'm not a fan of the glib "everyone knows AI systems don't really think, they are just stochastic parrots, all they do is regurgitate ideas they've stolen" schtick, but this article is the reverse of that only worse.

Today's AI systems are pretty impressive but they are absolutely not, not even slightly, the equivalent of Einstein + Hawking + Tao. The reason they get used a lot for tasks along the lines of "rewrite this so it sounds smarter" is that that's what they're best at.

If we did as the author seems to want and tried to use these systems to solve the kinds of problems we need Einsteins, Hawkings and Taos for, then we would be in for one miserable disappointment after another. Maybe some day -- maybe some day very soon -- they'll be able to do that, but not now.

An article proclaiming that today's AI systems are at the level of Einstein mostly suggests to me that the author's own intellectual level isn't much higher than that of the AI systems he falsely equates with them. That seems unlikely, but I don't have a better explanation for how someone could write something so very far from the truth.

jw1224 · 23h ago
> If we […] tried to use these systems to solve the kinds of problems we need Einsteins, Hawkings and Taos for, then we would be in for one miserable disappointment after another

We can literally watch Terence Tao himself vibe coding formal proofs using Claude and o4. He doesn’t seem too disappointed.

https://youtu.be/zZr54G7ec7A?si=GpRZK5W1LDvWyBBw

gjm11 · 21h ago
Sure, but what he's doing is very much not using Claude or o4 to do things we need Terence Tao for.

I'm not saying today's AI systems aren't useful for anything. I'm not saying they aren't impressive. I'm just saying they're nowhere close to the "Einstein, Hawking and Tao in your house" hyperbole in the OP. I would be very, very surprised if Terence Tao disagreed with me about that.

wizzwizz4 · 23h ago
He's the only person I know of who can actually get good results out of these systems (though I know several people who claim they can). What he's doing is fundamentally not the same thing as what most "vibe coders" are doing: take the autocomplete away, and he's still a talented mathematician.
mufthun · 22h ago
You can literally watch Terence Tao stream himself formalizing existing proofs that he has already formalized before.
personjerry · 23h ago
> The reason they get used a lot for tasks along the lines of "rewrite this so it sounds smarter" is that that's what they're best at.

I disagree. The reason is that that's what aligns best with what most people are looking for help on.

There is a disconnect between reality and the AI product consumer envisioned here. There is no magical enlightened user who's going to unleash their inner potential.

How much physics or math does the average person know? How much do you think they even WANT to know? The answer is surprisingly little.

On a day-to-day basis the layman writes emails and handles other mundane tasks, and wants to do them faster and more easily.

Having a squad of geniuses in my pocket doesn't pay my bills.

csallen · 23h ago
This is the right answer.

Usage of products is determined by what people are driven to do. People are driven by their desires and their problems. And most of these are fairly simple and mundane… eating, paying the bills, feeling healthy, connecting with others socially, etc.

"Expending copious amounts of mental energy on difficult work to create scientific breakthroughs that may-or-may-not allow engineers to build things that contribute to the betterment of the human race" is not how most people want to spend their time, even if there are tools available to help them do that.

aprilthird2021 · 23h ago
Sorry, you can disagree, but LLMs are generative, meant to generate pleasing-to-humans text, specifically, and that is what they do best...

They are not going to come up with a theory of relativity

justonceokay · 1d ago
There’s a little nagging thought in my head when I hear that some people are helped immensely by AI and others are not: that there is an intelligence threshold that determines whether the AI impresses you or not. I’m sure this threshold will continue to rise.
jxjnskkzxxhx · 23h ago
In my experience, people who understand llms better are more impressed. Not impressed like "wow so smart" but impressed like "wow can't believe that just training to predict the next token actually works so amazingly well"
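
(For readers unfamiliar with the objective being described: below is a toy sketch of "just train to predict the next token". The corpus is made up, and a simple bigram counter stands in for the neural network; real LLMs scale this same objective up enormously.)

```python
from collections import Counter, defaultdict

# Toy illustration of the "predict the next token" objective.
# A bigram counter stands in for the neural network.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count how often each token follows each other token.
follower_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follower_counts[prev][nxt] += 1

def predict_next(token):
    """Greedy next-token prediction: return the most frequent follower."""
    return follower_counts[token].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat": it follows "the" most often here
```
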
Xeoncross · 1d ago
It feels like the level of skill needed to remain above GenAI's ability to code, write, produce songs, or drawings keeps rising. All of us have strengths, things we do better than AI still (even basic companionship abilities), but I wonder how long that will be true.
aprilthird2021 · 23h ago
If you have imagination, you will always be above this line.
cgriswald · 23h ago
I see that same threshold, but rather than intelligence, it is imagination, and the people below the threshold are unable to find ways to make AI useful to themselves. I think this threshold lowering is in agreement with your threshold rising: People will become more savvy.
jxjnskkzxxhx · 23h ago
> Today's AI systems are pretty impressive but they are absolutely not, not even slightly, the equivalent of Einstein + Hawking + Tao

Oh, is that what the point of the article was? That is so stupid that it didn't even cross my mind.

gjm11 · 21h ago
I mean, that's what the article explicitly says. (Perhaps it's all a metaphor for something else, or something, and some subtler point went over my head, in which case I owe the author an apology.)
aprilthird2021 · 23h ago
I agree 100%. Additionally, this article ignores the existence of Google. Even the high level questions the person asked Einstein before he devolved to asking for email editing help were things you could have just googled.

The greatness of great minds was how they thought about problems and how they changed how we thought about things. An AI cannot do that. It's designed to tell you what people combined have already agreed upon. It's not designed to break the frontier of our knowledge

Scarblac · 1d ago
Of course. We want AI to do the boring mundane stuff so we can work on the interesting hard stuff, not the other way around.
parliament32 · 23h ago
You're missing the part where, despite your rent going from 0 to 20 to 200, housing the three of them actually costs 2000, and they continue operating at a loss in the hope they can boil the frog until you turn them a profit.
glitchc · 1d ago
This blog post started off sounding like it was about the plight of highly intellectual and motivated engineers hired to work on very mundane tasks. If we can abuse people like this, why not a computer? After all, it's not even alive.
dkarl · 1d ago
We don't know why we experience things. It's bizarre that we do. Nothing in our understanding of the universe gives any indication that a bunch of atoms thrown together by cosmological processes and then assembled into self-replicating patterns by evolution should be able to experience what is happening to them.

Sure, a computer or an LLM isn't alive, but we have no idea if "being alive" is what is required for conscious experience.

The only argument I have for believing that other human beings experience things is that it would be extremely improbable if I was the only one, and the other mechanistic automatons looked and talked like me but didn't experience like me. I can see that humans are animals, so the common origin of animals and our cognitive and behavioral similarities give us good reason to believe that other complex animals experience things, though possibly radically differently.

None of that gives us any clue what the necessary and sufficient conditions for conscious experience are, so it doesn't give us any clue whether a computer or a running LLM instance would experience its existence.

dhqgekt · 12h ago
I am not an expert in any of the relevant disciplines, but I have some ideas; I don't know how right or wrong they are. A conscious being should have an internal model of the observable external world, and given the means, it should be able to interact with the world, observe changes, and update its model accordingly. https://en.wikipedia.org/wiki/Free_energy_principle

But to "experience its [own] existence", it needs to have a model of its own internals, observe, improve itself, and perhaps preserve its own "values" and integrity. I do wonder what kind of values are needed for intelligent autonomous systems, values they can justify by and for themselves, even in the absence of human beings or the presence of other intelligent agents.

I find (human) languages an inefficient medium for an AGI to store knowledge in and operate on. Feeding it huge amounts of text just to develop logical reasoning abilities is an extravagance I cannot accept. Even more so emulating neural networks, which I understand to be naturally analog entities, in a digital manner. Can we expect gains in power efficiency or correctness from using analog computers for this purpose?

I wonder what we will get to see from analog computers for neural networks, with a proper human-language-independent knowledge representation and well-developed global logical reasoning capabilities (global as in being able to decide which way to reason, given its limitations, for efficiency), developed by the system itself from a reasonable basis of principles that it can justify for itself while avoiding the usual and unusual paradoxes. What core set of principles would be sufficient for emerging, evolving, or developing into a proficient generally intelligent being, given sufficient resources? Like "ancestor" microbes evolving into human beings over hundreds of millions of years, but wayyyyy faster and more efficient?

c22 · 1d ago
I think it's bizarre to take the default assumption that a bunch of atoms in a self-replicating configuration shouldn't experience anything since our own lived experience so saliently contradicts this. In fact, there's nothing in my understanding of the universe to convince me that other self-replicating configurations of atoms don't experience things the same way I do.
dkarl · 23h ago
I agree — our scientific knowledge gives us no justification for believing that anything should be conscious, but our own experience shows that there's something we don't understand yet. In some ways, the next simplest thing to assume is panpsychism, but even that is just a starting place that tells us nothing about how to think about the consciousness of, say, a computer. We've barely scraped the surface even in the animal kingdom.
glitchc · 1d ago
> We don't know why we experience things. It's bizarre that we do. Nothing in our understanding of the universe gives any indication that a bunch of atoms thrown together by cosmological processes and then assembled into self-replicating patterns by evolution should be able to experience what is happening to them.

From an epistemological perspective, this is gibberish. Just because we do not know the reason why something happens doesn't mean it doesn't happen, nor that it stops happening.

The rest delves into solipsism which is an odd place to start from to prove the existence of an alternate lifeform. In solipsism, your own existence is suspect.

wizzwizz4 · 1d ago
It's not gibberish: it's like a pre-Riemann mathematician saying "nothing in our understanding of mathematics gives any indication that the distribution of primes should be so chaotic, yet with average density proportional to the reciprocal logarithm of the magnitude". The rest is not solipsism.
01HNNWZ0MV43FF · 1d ago
Computers, pigs, cows, and chickens are conscious, but it doesn't matter.

Humans value things that are hard to replace. (This is a first-order approximation)

Abortions are okay because fetuses only take 1 person nine months to make, and it's their decision whether to keep it.

Infanticide is not okay because a healthy baby is difficult to replace, and also lots of people might like to adopt it, and if it's breathing on its own then the maintenance cost is as low as it can get.

Software like LLMs can be abused because it costs nothing to roll them back and clone them endlessly.

Pets are hard to replace because you can't replace the interpersonal bond between a pet and their keeper. They fall somewhere high above computers and a little below children on this scale.

Pigs, cows, and chickens, commonly called "livestock", are bred and slaughtered en masse (most of our farmland is for growing their feed) because they all look the same to us and aren't commonly kept as pets. Kind people are disgusted when they think of raising rabbits or dogs for food. Thoughtful people look at all this and decide not to eat any animal product at all.

Under this model, everything makes perfect sense. Did I miss anything? /engineering_hubris

QuadmasterXLII · 23h ago
Only concern is it's a bit tautological: pets are valued because they're hard to replace, but they are hard to replace (i.e. a new one from the shelter doesn't make it all better) because _that_ one was valuable.
AlexCoventry · 1d ago
My read was that he's sad that people aren't using these tools to advance their own intellectual capabilities. If people are actually only using them the way he describes, to improve their shopping lists etc., I think that is a bit sad.
raincole · 23h ago
Such a weird article.

On r/AskPhysics you'll see people post AI-made crank theories every day. I assume there have been even more, as the mods constantly remove AI posts. So why would I let AI teach me physics?

AI is best at things you already know, or at least used to know. Like when you know a foreign dish but forget the exact name, or have an idiom on the tip of your tongue.

zkmon · 1d ago
Nothing wrong with the situation. At some point in history, humans no longer needed to spend all their time finding food, raising kids, and taking care of family and community. So they got into the services business, selling services to each other. One kid polishes a fine pebble and exchanges it with another kid for a nicely carved piece of wood. Their elders don't see value in any of this and shout at them to go hunt for more food. But the services thrived, outpacing the real needs of humans. Technologies and tools evolved, claiming magical abilities. Sane humans only care about their basic needs, so they just use the magical tech for those basic needs, which makes perfect sense.
gopher_space · 17h ago
Just a nit, but generally speaking hunter-gatherers have a ton of free time.
firefoxd · 1d ago
The great thing is you can use these AIs to work on whatever you feel is important to you. The bad news is that you will find their limitations.
ausbah · 5h ago
purely based off the title, if this scenario happened I'd be doing the dishes every night
lijok · 1d ago
I’m pretty sure Einstein, Hawking and Tao were/are capable of reasoning.
pedrocr · 1d ago
"The best LLMs of the last 6 months spend their time fixing people's grammar"

Doesn't have quite the same ring as a lament

aeve890 · 23h ago
>This is three geniuses for the price of a gym membership

Geniuses? Come on. Let's talk when an LLM is central to a new development in HEP or math. I mean central, like a paradigm shift kind of thing, directly from the AI. A quantum gravity theory, a brand new branch of math, a new approach to an unsolved conjecture, whatever. That's what geniuses do. Not repeating what you can already read in a book! This kind of thing says more about people's ignorance and how impressionable they are than about the actual capabilities of the tech. If you think that AI text and image generation _creativity_ can be translated to hard things like math, oh boy.

thr0waway001 · 23h ago
Then there’s those dudes who ask Einstein to be their girlfriend and to converse with them as such.
davydm · 1d ago
Pure delusion.

There are no "digital gods", only the super-powered autocorrect people call "AI". They can't make new stuff. They can't solve novel problems no human has solved before, though they _can_, with the correct setup, brute-force solutions to understandable problems by throwing everything at them until something sticks.

They don't learn. They don't teach. They are not the deities that are presented here. This article is fantasy, projected from real circumstances, by an over-active imagination.

trehalose · 1d ago
I'm curious to know what the author suggests we do. Elect Claude, ChatGPT, and Gemini as our leaders? Put these future cancer cure discoverers in a hospital and get them to work curing individual cancer patients?
AlexCoventry · 1d ago
I would suggest paying the $20/month rent, and trying to use ChatGPT o3/o4-mini-high/o1-pro as a tutor, to help you understand something you're curious about but never really had time or energy to dig into before. It's pretty glorious, and a straight-up pedagogical revolution, IMO.
verbify · 1d ago
How do you know it isn't hallucinating? My experience of AI is that it can be very convincing and produce information that "looks right".
AlexCoventry · 23h ago
I mostly use it in fields where its claims are easily verifiable.
aprilthird2021 · 23h ago
So then it's no different from reading a Wikipedia page about a topic? I mean, if you only use it to teach you stuff you can verify, then couldn't you skip one step in this process?
AlexCoventry · 19h ago
I learned Stochastic Differential Equations this way, for instance. I wasn't going to learn that so quickly from Wikipedia.

https://chatgpt.com/share/6834ed85-4dac-800e-b940-3b9a5f13d6...
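
(As an aside for readers curious what a first exercise in that subject looks like: below is a minimal Euler–Maruyama sketch. This is my own illustrative example, not taken from the linked chat.)

```python
import math
import random

# Simulate geometric Brownian motion dS = mu*S dt + sigma*S dW with the
# Euler-Maruyama scheme, a standard first exercise when learning SDEs.
def euler_maruyama(s0, mu, sigma, t, steps, seed=0):
    rng = random.Random(seed)
    dt = t / steps
    s = s0
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment ~ N(0, dt)
        s += mu * s * dt + sigma * s * dw   # Euler-Maruyama update
    return s

# Sanity check: with sigma = 0 the noise vanishes and the scheme converges
# to the deterministic ODE solution S(t) = S0 * exp(mu * t).
print(euler_maruyama(1.0, 0.05, 0.0, 1.0, 100_000))  # ~ exp(0.05) = 1.0513
```
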

aprilthird2021 · 18h ago
It's definitely more involved than the Wikipedia article. How do you confirm what it's saying is true?
AlexCoventry · 17h ago
I just keep asking it questions until I hit a level of detail I can verify with my own mathematical understanding.
aprilthird2021 · 16h ago
Ah okay, so it's used to summarize data about a topic in a field you understand well
AlexCoventry · 10h ago
I guess, a summary which precisely targets what I need to be made explicit, to the extent I understand that.
wizzwizz4 · 1d ago
I would suggest getting a library card, and trying to read a good book, writing down any questions you might have for later review. While it doesn't feel as though you're learning as much, or as quickly, most people will find that they actually know a lot more about the subject afterwards.

A chat log that takes me 2 hours to produce, I can read in 5 minutes. There's no world in which that's efficient pedagogy, even disregarding ChatGPT's truthfulness issues.

AlexCoventry · 23h ago
Try it with a subject where you'd have to pick up text from multiple references, like a modern deep learning paper if that's not your field. I've tried it both ways, and I know what works for me, FWIW.

The higher ChatGPT services hallucinate much less, and you can tell them to give confidence estimates on their claims. The confidence estimates are pretty reliable, in my experience.

ChatGPT is improving very swiftly. A couple of months ago I was equally dismissive.