When ChatGPT Broke an Entire Field: An Oral History

37 points | by mathgenius | 7 comments | 5/1/2025, 7:51:39 AM | quantamagazine.org ↗

Comments (7)

sp1nningaway · 4m ago
For me as a layperson, the article is disjointed and kinda hard to follow. It's fascinating that all the quotes are emotional responses or about academic politics. Even now, they are suspicious of transformers and are bitter that they were wrong. No one seems happy that their field of research has been on an astonishing rocketship of progress in the last decade.
criddell · 58m ago
The field is natural language processing.
AndrewKemendo · 1h ago
If Chomsky was writing papers in 2020 his paper would’ve been “language is all you need.”

That is clearly not true. As the article points out, wide-scale, very large forecasting models beat the hypothesis that you need an actual foundational structure for language in order to demonstrate intelligence; in fact, it's exactly the opposite.

I’ve never been convinced by that hypothesis, if for no other reason than that we can demonstrate in the real world that intelligence is possible without linguistic structure.

As we’re finding: iteratively solving the Markov process is the foundation of intelligence.

Out of that process emerge novel state-transition processes. In some cases those are novel communication methods with a structured mapping to the state encoding inside the actor.

Communication happens across species at various levels of fidelity, but it is not the underlying mechanism of intelligence; it is an emergent behavior that allows for shared mental mapping and storage.
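Read charitably, "solving the Markov process iteratively" sounds like value iteration on a Markov decision process. A minimal sketch, where the two-state MDP, its transition numbers, and the function name are all invented for illustration (not from the article or the comment):

```python
# A tiny, made-up MDP: two states (0, 1), two actions ("stay", "go").
# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 2.0)], "go": [(1.0, 0, 0.0)]},
}
gamma = 0.9  # discount factor


def value_iteration(transitions, gamma, tol=1e-8):
    """Repeatedly apply the Bellman optimality update until values converge:
    V(s) <- max_a sum_{s'} P(s'|s,a) * (R(s,a,s') + gamma * V(s'))."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:  # stop when no state's value changed meaningfully
            return V


V = value_iteration(transitions, gamma)
```

Here the "iterative solving" is just the repeated Bellman backup; whether that mechanism scales up to anything like intelligence is exactly the point under dispute in the thread.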

aidenn0 · 52m ago
Some people will never be convinced that a machine demonstrates intelligence. This is because, for a lot of people, intelligence exists as a subjective experience that they have, and they believe others have it too only inasmuch as those others appear to be like the self.
dekhn · 2m ago
This is why I want the field to go straight to building indistinguishable agents- specifically, you should be able to video chat with an avatar that is impossible to tell from a human.

Then we can ask "if this is indistinguishable from a human, how can you be sure that anybody is intelligent?"

Personally I suspect we can make zombies that appear indistinguishable from humans (limited to video chat; making a robot that appears human to a doctor would be hard) but that don't have self-consciousness or any subjective experience.

languagehacker · 39m ago
Great to see Ray Mooney (with whom I took a graduate class) and Emily Bender (a colleague of many at the UT Linguistics Dept., and a regular visitor) sharing their honest reservations about AI and LLMs.

I try to stay as far away from this stuff as possible, because when the bottom falls out, it's going to have devastating effects for everyone involved. As a former computational linguist, and as someone who built similar tools at reasonable scale for large-ish social media organizations in the teens, I learned the hard way not to trust the efficacy of these models, or their ability to deliver the sort of reliability that a naive user would expect from them in practical applications.

philomath_mn · 31m ago
Curious what you are expecting when you say "bottom falls out". Are you expecting significant failures of large-scale systems? Or more a point where people recognize some flaw that you see in LLMs?