What If A.I. Doesn't Get Better Than This?

28 points by sundache | 9 comments | 8/13/2025, 9:44:03 AM | newyorker.com

Comments (9)

brainwipe · 5m ago
The title is irritating: it conflates AI with LLMs, when LLMs are only a subset of AI. I expect future systems will be mobs of expert AI agents rather than systems that rely on LLMs to do everything. An LLM will likely be in the mix for at least the natural-language processing, but I wouldn't bet the farm on LLMs alone.
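
(A hedged sketch of what that mob could look like, with the LLM confined to the language layer; every function and name below is hypothetical.)

    # Hypothetical router: an LLM only parses intent, then specialist
    # agents (not necessarily LLM-based) do the actual work.
    from typing import Callable, Dict

    def nl_to_intent(text: str) -> str:
        # Stand-in for an LLM classifying the request; wire a real model in here.
        return "math" if any(ch.isdigit() for ch in text) else "search"

    EXPERTS: Dict[str, Callable[[str], str]] = {
        "math":   lambda q: f"symbolic-solver result for: {q}",
        "search": lambda q: f"retrieval result for: {q}",
    }

    def handle(text: str) -> str:
        return EXPERTS[nl_to_intent(text)](text)

    print(handle("integrate x^2 from 0 to 3"))  # routes to the math expert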
DanielHB · 1m ago
The computing power alone of all these GPUs would bring a revolution in simulation software. I mean zero AI/machine learning, just being able to simulate far more things than we can today.

Most industry-specific simulation software is REALLY crap: much of it dates from the '80s and '90s and has barely evolved since, and a lot of it is still stuck on a single CPU core.
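
(To make that concrete, a minimal sketch: a vectorized 2-D diffusion step over millions of cells at once. It uses numpy; swapping in cupy, which mirrors the numpy API, would run the same code on a GPU. Grid size and constants are made up for illustration.)

    # One five-point-stencil time step over the whole grid at once,
    # with periodic boundaries handled by roll.
    import numpy as np

    def diffuse(grid: np.ndarray, alpha: float = 0.1) -> np.ndarray:
        neighbors = (np.roll(grid, 1, axis=0) + np.roll(grid, -1, axis=0) +
                     np.roll(grid, 1, axis=1) + np.roll(grid, -1, axis=1))
        return grid + alpha * (neighbors - 4 * grid)

    grid = np.random.rand(4096, 4096)  # ~16 million cells in a single array
    for _ in range(100):
        grid = diffuse(grid)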

qcnguy · 24m ago
> You didn’t need a bar chart to recognize that GPT-4 had leaped ahead of anything that had come before.

You did, though. I remember when GPT-4 was announced: OpenAI downplayed it, and Altman said the difference was subtle and wouldn't be immediately apparent. For a lot of the stuff ChatGPT was being used for, the gap between 3.5 and 4 wasn't going to leap out at you.

https://fortune.com/2023/03/14/openai-releases-gpt-4-improve...

> In the lead up to the announcement, Altman has set the bar low by suggesting people will be disappointed and telling his Twitter followers that “we really appreciate feedback on its shortcomings.”

> OpenAI described the distinction between GPT-3.5—the previous version of the technology—and GPT-4 as subtle in situations when users are having a “casual conversation” with the technology. “The difference comes out when the complexity of the task reaches a sufficient threshold—GPT-4 is more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5,” a research blog post read.

In the years since, we've gotten a lot more demanding of our models. Back then people were happy if they got models to write a small simple function and it worked. Now they expect models to manipulate large production codebases and get it right the first time. So by today's standards, the difference between GPT-3 and GPT-4 would be more apparent. But at the time, the reaction was somewhat muted.

supriyo-biswas · 14m ago
> Back then people were happy if they got models to write a small simple function and it worked. Now they expect models to manipulate large production codebases and get it right the first time.

This push is mostly coming from the C-level and the hustler types, both of whom need this to work for their employeeless-corporation fantasy to come true.

bbqfog · 2m ago
AI is so new and so powerful that we don't really know how to use it yet. The next step is orchestration: LLMs are already powerful, but they need to be scaled horizontally. "One-shotting" something with a single call to an LLM should never be expected to work; that's not how the human brain works either. We iterate, we collaborate with others, we reflect... We've already unlocked the hard and "mysterious" part; now we just need time to orchestrate and network it.
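
(A minimal sketch of that iterate-and-reflect orchestration; call_llm is a hypothetical placeholder for whatever model API you use, not a real one.)

    # Critique-and-revise loop instead of one-shotting a single call.
    def call_llm(prompt: str) -> str:
        raise NotImplementedError("wire up your model provider here")

    def solve_with_reflection(task: str, max_rounds: int = 3) -> str:
        draft = call_llm(f"Solve this task:\n{task}")
        for _ in range(max_rounds):
            critique = call_llm(
                f"Task:\n{task}\n\nDraft:\n{draft}\n\n"
                "List concrete flaws, or reply DONE if there are none.")
            if critique.strip() == "DONE":
                break  # the critic found nothing left to fix
            draft = call_llm(
                f"Task:\n{task}\n\nDraft:\n{draft}\n\nFlaws:\n{critique}\n\n"
                "Rewrite the draft, fixing every flaw.")
        return draft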
latexr · 4h ago
Ekshef · 30m ago
Thank you for that!
fuzzfactor · 1h ago
>What If A.I. Doesn't Get Better Than This?

What if it does?

There's a certain type of fear . . .

  "It's the fear . . . they're gonna take my job away . . . "

  "It's the fear . . . I'll be working here the rest of my days . . . "
-- David Fahl

Same fear, different day.

piskov · 8m ago
Because every s-curve looks like an exponential to those at the start of it.
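
(A quick numeric illustration with arbitrary parameters: the early tail of a logistic s-curve is well approximated by an exponential, and only bends away near the midpoint.)

    # Early on, logistic (s-curve) growth and pure exponential growth coincide.
    import math

    def logistic(t, cap=1000.0, k=1.0, t0=10.0):
        return cap / (1.0 + math.exp(-k * (t - t0)))

    def exponential(t, cap=1000.0, k=1.0, t0=10.0):
        # Small-t approximation of the logistic above.
        return cap * math.exp(k * (t - t0))

    for t in range(8):  # well before the midpoint t0
        print(t, round(logistic(t), 3), round(exponential(t), 3))
    # The two columns nearly match; only near t0 does the s-curve
    # bend away and saturate at cap.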

I mean, look at the first planes and then the first jets: it was understandable to assume we'd be traveling the galaxy by something like 2050.

Meanwhile, planes have stayed basically the same for the last 60 years.

LLMs are great, but I firmly believe that in 2100 everything will be basically the same as in 2020: no free energy (fusion), no AGI.