AI might yet follow the path of previous technological revolutions

87 points by mooreds | 9/8/2025, 12:49:46 PM | economist.com | 114 comments

Comments (114)

djoldman · 6h ago
wvbdmp · 6h ago
Okay, so AI isn’t exceptional, but I’m also not exceptional. I run on the same tech base as any old chimpanzee, but at one point our differences in degree turned into one of us remaining “normal” and the other burning the entire planet.

Whether the particular current AI tech is it or not, I have yet to be convinced that the singularity is practically impossible, and as long as things develop in the opposite direction, I get increasingly unnerved.

tempodox · 1h ago
We’ll manage to make our own survival on this planet less probable, even without the help of “AI”.
measurablefunc · 1h ago
If you use non-constructive reasoning¹ then you can argue for basically any outcome & even convince yourself that it is inevitable. The basic example is as follows: there is no scientific or physical principle that can prevent the birth of someone much worse than Hitler, & therefore if people keep having children one of those children will inevitably be someone who will cause unimaginable death & destruction. My recommendation is to avoid non-constructive inevitability arguments that take our current ignorant state of understanding of physical laws as the main premise, b/c it's possible to reach any conclusion from that premise & convince yourself that the conclusion is inevitable.

¹https://gemini.google.com/share/d9b505fef250

jackphilson · 16m ago
Irrelevant but I like this pattern of using Gemini (or AI outputs in general) as sources. Please continue to do so and I encourage any readers to also adopt this pattern. I will also try to implement this pattern.
measurablefunc · 1m ago
The sources are in the report. Gemini provides actual references for all the claims made. You'd know that if you actually looked, but lack of intellectual rigor is expected when people are afraid of actually scrutinizing their beliefs about non-constructive inevitability.
jcranmer · 4h ago
Well, for starters, it would make The Economist's recent article on "What if AI made the world's economic growth explode?" [1] look like the product of overly credulous suckers for AI hype.

[1] https://www.economist.com/briefing/2025/07/24/what-if-ai-mad...

jaredklewis · 2h ago
This comment reminds me of the forever-present HN comments that take a form like "HN is so hypocritical. In this thread commenters are saying they love X, when just last week in a thread about Y, commenters were saying that they hated X."
kamikazeturtles · 1h ago
All articles published by the Economist are reviewed by its editorial team.

Also, the Economist publishes all articles anonymously so the individual author isn't known. As far as I know, they do this so we take all articles and opinions as the perspective of the Economist publication itself.

some_guy_nobel · 34m ago
Well, it's better that a publication publish views contradicting its past than never change its views with new info.
gizajob · 4h ago
If you back every horse in a race, you win every time.
svara · 3h ago
I'm perfectly happy reading different, well-argued cases in a magazine even if they contradict each other.
gyomu · 2h ago
Why would you expect opinion pieces from different people to agree with one another?

I’m curious about exploring the topic “What if the war in Ukraine ends in the next 12 months” just as much as “What if the war in Ukraine keeps going for the next 10 years”; that doesn’t mean I expect both to happen.

buu700 · 1h ago
To add to your point, both article titles are questions that start with "What if". The same person could have written both and there would be no contradiction.
ranger207 · 5h ago
AI being normal technology would be the expected outcome, and it would be nice if it just hurried up and happened so I could stop seeing so much spam around AI actually being something much greater than normal technology
redwood · 5h ago
I think the "calculator for words" analogy is a good one. It's imperfect since words are inherently ambiguous but then again so is certain forms of digital numbers (floating point anyone?).

Through this lens it's way more normal

sfpotter · 5h ago
Floating point numbers aren't ambiguous in the least. They behave by perfectly deterministic and reliable rules and follow a careful specification.
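A quick illustration (Python; a made-up snippet, not from the thread): the classic 0.1 + 0.2 result looks "wrong", but it is exactly reproducible, because IEEE 754 rounding is fully specified:

    # IEEE 754 double-precision arithmetic is deterministic: the same
    # operations in the same order always produce the same bits.
    a = 0.1 + 0.2
    print(a)         # 0.30000000000000004 -- surprising, but reproducible
    print(a == 0.3)  # False, every time, on every conforming platform

The result is counterintuitive, but never ambiguous.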
GMoromisato · 4h ago
So are LLMs. Under the covers they are just deterministic matmul.
Workaccount2 · 43m ago
Everything is either deterministic, random, or some combination.

We only have two states of causality, so calling something "just" deterministic doesn't mean much, especially when "just random" would be even worse.

For the record, LLMs in the normal state use both.

Chinjut · 1h ago
Ordinary floating point calculations allow for tractable reasoning about their behavior, reliable hard predictions of their behavior. At the scale used in LLMs, this is not possible; a Pachinko machine may be deterministic in theory, but not in practice. Clearly in practice, it is very difficult to reliably predict or give hard guarantees about the behavioral properties of LLMs.
mhh__ · 4h ago
And at scale you even have a "sampling" of sorts (even if the distribution is very narrow unless you've done something truly unfortunate in your FP code) via scheduling and parallelism.
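A minimal sketch of that effect (Python; the reordering is simulated with a shuffle rather than a real thread scheduler): a parallel reduction effectively sums in a different order on each run, and floating-point addition is not associative, so each run "samples" a slightly different total:

    import random

    # The same multiset of floats summed in different orders can give
    # (slightly) different totals, because FP addition isn't associative.
    xs = [random.uniform(-1, 1) for _ in range(100_000)]

    shuffled = xs[:]
    random.shuffle(shuffled)

    forward = sum(xs)
    reordered = sum(shuffled)
    print(abs(forward - reordered))  # tiny but typically nonzero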
Kapura · 4h ago
Digital spreadsheets (excel, etc) have done much more to change the world than so-called "artificial intelligence," and on the current trajectory it's difficult to see that changing.
micromacrofoot · 2m ago
hah, just wait until everything you ever do online is moderated through an LLM and tell me that's not world changing
thepryz · 4h ago
I don’t know if I would agree.

Spreadsheets don’t really have the ability to promote propaganda and manipulate people the way LLM-powered bots already have. Generative AI is also starting to change the way people think, or perhaps not think, as people begin to offload critical thinking and writing tasks to agentic AI.

Swizec · 4h ago
> Spreadsheets don’t really have the ability to promote propaganda and manipulate people

May I introduce you to the magic of "KPI" and "Bonus tied to performance"?

You'd be surprised how much good and bad in the world has come out of some spreadsheet showing a number to a group of promotion-chasing, type-A, otherwise completely normal people.

Tarsul · 3h ago
Social media ruined our brains long before LLMs. Not sure if the LLM upgrade is all that newsworthy... Well, for AI fake videos maybe, but it could also be that soon no one believes any video they see online, which would have the reverse effect and could arguably even be considered good in our current times (difficult question!).
CuriouslyC · 1h ago
Agents are going to change everything. Once we've got a solid system for driving interfaces programmatically, and people get better about exposing non-UI handles for agents to work with programs, agents will make apps obsolete. You're going to have a device that sits by your desk and listens to you, watches your movements and tracks your eyes, and dispatches agents to do everything you ask it to do, using all the information it's taking in, along with a learned model of you and your communication patterns, so it can accurately predict what you intend for it to do.

If you need an interface for something (e.g. viewing data, some manual process that needs your input), the agent will essentially "vibe code" whatever interface you need for what you want to do in the moment.

jrm4 · 29m ago
This isn't likely to happen for roughly the same reason Hypercard didn't become the universal way for novices to create apps.
CuriouslyC · 8m ago
I probably spend 80% of my time in front of a computer driving agents, challenge accepted :)
alexpotato · 53m ago
The technology for this has been around for the past 10 years, but it's still not a reality. What makes AI the kicker here?

e.g. Alexa for voice, REST for talking to APIs, Zapier for inter-app connectedness.

(not trying to be cynical, just pointing out that the technology to make it happen doesn't seem to be the blocker)

CuriouslyC · 33m ago
Alexa is trash. If you have to basically hold an agent's hand through something, or it either fails or does something catastrophic, nobody's going to use or trust it.

REST is actually a huge enabler for agents, for sure. I think agents are going to drive everyone to have at least an API, if not an MCP, because if I can't use your app via my agent and I have to manually screw around in your UI, while your competitor lets my agent do the work so I can just delegate via voice commands, who do you think is getting my business?

doc_manhat · 4h ago
Havoc · 2h ago
A better starting point imo is that it is a general-purpose technology. It can have a profound effect on society yet not be magic/AGI.
j45 · 1h ago
Absolutely. The first version released to the world was already the 3rd or 4th version of ChatGPT itself.

Some can remember the difference between iPhone 1 and 4, and how it took off with the latter.

bilsbie · 5h ago
I’m guessing it will be exactly like the internet. Changes everything and changes nothing.
only-one1701 · 3m ago
lol absolutely not
only-one1701 · 3m ago
I think it’ll be like social media
marginalia_nu · 5h ago
AI is technology that does not exist yet that can be speculated about. When AI materializes into existence it becomes normal technology.

Let's not forget there have been times when if-else statements were considered AI. NLP used to be AI too.

jrm4 · 25m ago
One, I doubt your premise ever happens in a meaningfully true and visible way -- but perhaps more importantly, I'd say you're factually wrong in terms of "what is called AI".

Among most people, you're thinking of things that were debatably AI; today we have things that are AI (again, not due to any concrete definition, simply due to accepted usage of the term).

1c2adbc4 · 5h ago
Do you have a suggestion for a better name? I care more about the utility of a thing, rather than playing endless word games with AI, AGI, ASI, whatever. Call it what you will, it is what it is.
J_McQuade · 4h ago
Broadly Uneconomical Large Language Systems Holding Investors in Thrall.
OJFord · 4h ago
It will depend on the final form the normal useful tools take, but for now it's 'LLMs', 'coding agents', etc.
stillsut · 1h ago
"Unstructured data learners and generators" is probably the most salient distinction for how current system compare to previous "AI systems" examples (NLP, if-statements) that OP mentioned.
marginalia_nu · 5h ago
I don't particularly mind the term, it's a useful shibboleth separating the marketing and sci-fi from the takes grounded in reality.
bradgessler · 4h ago
Artificial Interpolator Augmented Intelligence
ronsor · 4h ago
Aye-aye, that's a good name
exe34 · 4h ago
I think it's fine to keep the name, we just have to realise it's like magic. Real magic can't be done. Magic that can be done is just tricks. AI that works is just tricks.
1c2adbc4 · 4h ago
I didn't realize that magic was the goal. I'm just trying to process unstructured data. Who's here looking for magic?
stillsut · 1h ago
I think the "magic" that we've found a common toolset of methods - embeddings and layers of neural networks - that seem to reveal useful patterns and relationships from a vast array of corpus of unstructured analog sensors (pictures, video, point clouds) and symbolic (text, music) and that we can combine these across modalities like CLIP.

It turns out we didn't need a specialist technique for each domain, there was a reliable method to architect a model that can learn itself, and we could already use the datasets we had, they didn't need to be generated in surveys or experiments. This might seem like magic to an AI researcher working in the 1990's.
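A toy sketch of the cross-modal idea (Python with NumPy; the vectors are invented for illustration, not taken from a real CLIP model): in a shared embedding space, the same concept lands in roughly the same place regardless of the modality it came from:

    import numpy as np

    emb_text_cat  = np.array([0.9, 0.1, 0.3])   # embedding of the word "cat"
    emb_image_cat = np.array([0.8, 0.2, 0.35])  # embedding of a cat photo
    emb_text_car  = np.array([0.1, 0.9, 0.5])   # embedding of the word "car"

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(emb_text_cat, emb_image_cat))  # high: same concept, two modalities
    print(cosine(emb_text_cat, emb_text_car))   # lower: different concepts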

exe34 · 57m ago
did you miss the word "like"? have you come across the concept of an analogy yet?
lo_zamoyski · 3h ago
Statistics.

A lot of this is marketing bullshit. AFAIK, even "machine learning" was a term made up during the AI winter by AI researchers who wanted to keep getting a piece of that sweet grant money.

And "neural network" is just a straight up rubbish name. All it does is obscure what's actually happening and leads the proles to think it has something to do with neurons.

el_nahual · 5h ago
We have a name: Large Language Models, or "Generative" AI.

It doesn't think, it doesn't reason, and it doesn't listen to instructions, but it does generate pretty good text!

chpatrick · 5h ago
[citation needed]

People constantly assert that LLMs don't think in some magical way that humans do, when we don't even have any idea how that works.

elbasti · 17m ago
It's not some "magical way"--the ways in which a human thinks that an LLM doesn't are pretty obvious, and I dare say self-evidently part of what we think constitutes human intelligence:

- We have a sense of time (e.g., ask an LLM to follow up in 2 minutes)

- We can follow negative instructions ("don't hallucinate, if you don't know the answer, say so")

mindcrime · 3h ago
> People constantly assert that LLMs don't think in some magic way that humans do think,

It doesn't matter anyway. The marquee sign reads "Artificial Intelligence" not "Artificial Human Being". As long as AI displays intelligent behavior, it's "intelligent" in the relevant context. There's no basis for demanding that the mechanism be the same as what humans do.

And of course it should go without saying that Artificial Intelligence exists on a continuum (just like human intelligence as far as that goes) and that we're not "there yet" as far as reaching the extreme high end of the continuum.

hermitcrab · 1h ago
Aircraft don't fly like birds, submarines don't swim like fish and AIs aren't going to think like a human.
utyop22 · 18m ago
Do these comparisons actually make sense though?

Aircraft and submarines belong to a different category than AI, and to the same category as each other.

chpatrick · 49m ago
Do you need to "think like a human" to think? Is it only thinking if you do it with a meat brain?
jbritton · 3h ago
I recently saw an article about LLMs and Towers of Hanoi. An LLM can write code to solve it. It can also output the steps to solve it when the disk count is low, like 3. It can't give the steps when the disk count is higher. This indicates LLMs' inability to reason and understand. Also see Gotham Chess and the Chatbot Championship: the chatbots start off making good moves, but then quickly transition to making illegal moves and generally playing unbelievably poorly. They don't understand the rules or strategy or anything.
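For reference, the program in question is tiny; a minimal recursive solution (Python, a standard textbook version rather than any specific LLM's output) shows why "write the code" and "list the moves" diverge: the move list grows as 2^n - 1:

    def hanoi(n, source="A", target="C", spare="B"):
        """Print the 2**n - 1 moves solving n-disk Towers of Hanoi."""
        if n == 0:
            return
        hanoi(n - 1, source, spare, target)  # park n-1 disks on the spare peg
        print(f"move disk {n}: {source} -> {target}")
        hanoi(n - 1, spare, target, source)  # move them onto the target

    hanoi(3)  # 7 moves; at n=20 the listing is over a million lines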
leptons · 3h ago
Could the LLM "write code to solve it" if no human ever wrote code to solve it? Could it output "steps to solve it" if no human ever wrote about it before to have in its training data? The answer is no.
chpatrick · 2h ago
Could a human code the solution if they didn't learn to code from someone else? No. Could they do it if someone didn't tell them the rules of towers of hanoi? No.

That doesn't mean much.

Gee101 · 2h ago
It does, since humans were able to invent a programming language.
chpatrick · 1h ago
Have you tried asking a modern LLM to invent a programming language?
CamperBob2 · 1h ago
Have you? If so, how'd it go? Sounds like an interesting exercise.
chpatrick · 50m ago
leptons · 2h ago
A human can learn and understand the rules, an LLM never could. LLMs have famously been incapable of beating humans in chess, a seemingly simple thing to learn, because LLMs can't learn - they just predict the next word and that isn't helpful in solving actual problems, or playing simple games.
chpatrick · 1h ago
Actually general-purpose LLMs are pretty decent at playing chess games they haven't seen before: https://maxim-saplin.github.io/llm_chess/
d3ckard · 4h ago
What can be asserted without proof, can be dismissed without proof.

The proof burden is on AI proponents.

chpatrick · 4h ago
It's more that "thinking" is a vague term that we don't even understand in humans, so for me it's pretty meaningless to claim LLMs think or don't think.

There's this very cliched comment to any AI HN headline which is this:

"LLM's don't REALLY have <vague human behavior we don't really understand>. I know this for sure because I know both how humans work and how gigabytes of LLM weights work."

or its cousin:

"LLMs CAN'T possibly do <vague human behavior we don't really understand> BECAUSE they generate text one character at a time UNLIKE humans who generate text one character a time by typing with their fleshy fingers"

shakna · 27m ago
Thinking is better understood than you seem to believe.

We don't just study it in humans. We look at it in trees [0], for example. And whilst trees have distributed systems that ingest data from their surroundings, and use that to make choices, it isn't usually considered to be intelligence.

Organizational complexity is one of the requirements for intelligence, and LLMs do not reach that threshold. They have vast amounts of data, but organizationally they are still simple - thus "AI slop".

[0] https://www.cell.com/trends/plant-science/abstract/S1360-138...

barnacs · 3h ago
To me, it's about motivation.

Intelligent living beings have natural, evolutionary inputs as motivation underlying every rational thought. A biological reward system in the brain, a desire to avoid pain, hunger, boredom and sadness, seek to satisfy physiological needs, socialize, self-actualize, etc. These are the fundamental forces that drive us, even if the rational processes are capable of suppressing or delaying them to some degree.

In contrast, machine learning models have a loss function or reward system purely constructed by humans to achieve a specific goal. They have no intrinsic motivations, feelings or goals. They are statistical models that approximate some mathematical function provided by humans.

chpatrick · 2h ago
Are any of those required for thinking?
barnacs · 2h ago
In my view, absolutely yes. Thinking is a means to an end. It's about acting upon these motivations by abstracting, recollecting past experiences, planning, exploring, innovating. Without any motivation, there is nothing novel about the process. It really is just statistical approximation, "learning" at best, but definitely not "thinking".
chpatrick · 1h ago
Again, the problem is that what "thinking" means is totally vague. To me, if I can ask a computer a difficult question it hasn't seen before and it can give a correct answer, it's thinking. I don't need it to have a full and colorful human life to do that.
barnacs · 1h ago
But it's only able to answer the question because it has been trained on all text in existence written by humans, precisely with the purpose of mimicking human language use. It is the humans that produced the training data and then provided feedback in the form of reinforcement that did all the "thinking".

Even if it can extrapolate to some degree (although that's where "hallucinations" tend to become obvious), it could never, for example, invent a game like chess or a social construct like a legal system. Those require motivations like "boredom", "being social", or having a "need for safety".

chpatrick · 52m ago
Humans are also trained on data made by humans.

> it could never, for example, invent a game like chess or a social construct like a legal system. Those require motivations like "boredom", "being social", having a "need for safety".

That's creativity which is a different question from thinking.

barnacs · 22m ago
I guess our definition of "thinking" is just very different.

Yes, humans are also capable of learning in a similar fashion and imitating, even extrapolating from a learned function. But I wouldn't call that intelligent, thinking behavior, even if performed by a human.

But no human would ever perform like that, without trying to intuitively understand the motivations of the humans they learned from, and naturally intermingling the performance with their own motivations.

CamperBob2 · 1h ago
> The proof burden is on AI proponents.

Why? Team "Stochastic Parrot" will just move the goalposts again, as they've done many times before.

exe34 · 4h ago
my favourite game is to try to get them to be more specific - every single time they manage to exclude a whole bunch of people from being "intelligent".
leptons · 3h ago
When I write a sentence, I do it with intent, with a specific purpose in mind. When an "AI" does it, it's predicting the next word that might satisfy the input requirement. It doesn't care if the sentence it writes makes any sense, is factual, etc., so long as it is human-readable and follows grammatical rules. It does not do this with any specific intent, which is why you get slop and just plain wrong output a fair amount of the time. Just because it produces something that sounds correct sometimes does not mean it's doing any thinking at all. Yes, humans do actually think before they speak; LLMs do not, cannot, and will not, because that is not what they are designed to do.
chpatrick · 2h ago
Actually LLMs crunch through half a terabyte of weights before they "speak". How are you so confident that nothing happens in that immense amount of processing that has anything to do with thinking? Modern LLMs are also trained to have an inner dialogue before they output an answer to the user.

When you type the next word, you also pick a word that fits some requirement. That doesn't mean you're not thinking.

leptons · 2h ago
"crunch through half a terabyte of weights" isn't thinking. Following grammatical rules to produce a readable sentence isn't thought, it's statistics, and whether that sentence is factual or foolish isn't something the LLM cares about. If LLMs didn't so constantly produce garbage, I might agree with you more.
chpatrick · 1h ago
They don't follow "grammatical rules", they process inputs with an incredibly large neural net. It's like saying humans aren't really thinking because their brains are made of meat.
hermitcrab · 1h ago
>Let's not forget there has been times when if-else statements were considered AI.

They still are, as far as the marketing department is concerned.

michaeldoron · 5h ago
NLP is still AI - LLMs are using Natural Language Processing, and are considered artificial intelligence.
vhcr · 3h ago
danaris · 3h ago
They still are.

Artificial Intelligence is a whole subfield of Computer Science.

Code built of nothing but if/else statements controlling the behavior of game NPCs is AI.

A* search is AI.

NLP is AI.

ML is AI.

Computer vision models are AI.

LLMs are AI.

None of these are AGI, which is what does not yet exist.

One of the big problems underlying the current hype cycle is the overloading of this term, and the hype-men's refusal to clarify that what we have now is not the same type of thing as what Neo fights in the Matrix. (In some cases, because they have genuinely bought into the idea that it is the same thing, and in all cases because they believe they will benefit from other people believing it.)

alanbernstein · 1h ago
I think I misinterpreted your comment as not understanding the AI effect, but actually you're just summarizing it kind of concisely and sarcastically?

LLMs are one of the first technologies that makes me think the term "AI effect" needs to be updated to "AGI effect". The effect is still there, but it's undeniable that LLMs are capable of things that seem impossible with classical CS methods, so they get to retain the designation of AI.

ACCount37 · 5h ago
"AI" is a wide fucking field. And it occasionally includes systems built entirely on if-else statements.
lo_zamoyski · 3h ago
There is no difference between AI and non-AI save for the model the observer is using to view a particular bit of computation.
OkayPhysicist · 3h ago
Eh, I'd be fairly comfortable delineating between AI and other CS subfields based on the idea of higher-order algorithms. For most things, you have a problem with fixed set of fixed parameters, and you need a solution in the form of fixed solution. (e.g., 1+1=2) In software, we mostly deal with one step up from that: we solve general case problems, for a fixed set of variable parameters, and we produce algorithms that take the parameters as input and produce the desired solution (e.g., f(x,y) = x + y). The field of AI largely concerns itself with algorithms that produce models to solve entire classes of problem, that take the specific problem description itself as input (e.g., SAT solvers, artificial neural networks, etc where g("x+y") => f(x,y) = x + y ). This isn't a perfect definition of the field (it ends up catching some things like parser generators and compilers that aren't typically considered "AI"), but it does pretty fairly, IMO, represent a distinct field in CS.
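A toy sketch of that delineation (Python; the names and the expression format are invented for illustration): g takes the problem description itself as input and returns a solver, one level up from a fixed-parameter function:

    import operator

    OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

    def g(expression):
        # Takes a problem description like "x+y" and returns a solver for it.
        for symbol, fn in OPS.items():
            if symbol in expression:
                lhs, rhs = (s.strip() for s in expression.split(symbol, 1))
                return lambda env, fn=fn, l=lhs, r=rhs: fn(env[l], env[r])
        raise ValueError("unsupported expression")

    f = g("x+y")                # g produces the algorithm from the description...
    print(f({"x": 2, "y": 3}))  # ...and f solves concrete instances: 5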
empath75 · 1h ago
https://books.google.com/books?id=-fG_NOxltlEC&pg=PA25&dq=Co...

Computers Aren't Pulling Their Weight (1991)

There were _so many_ articles in the late 80s and early 90s about how computers were a big waste of money. And again in the late 90s, about how the internet was a waste of money.

We aren't going to know the true consequences of AI until kids that are in high school now enter the work force. The vast majority of people are not capable of completely reordering how they work. Computers did not help Sally Secretary type faster in the 1980s. That doesn't mean they were a waste of money.

boredtofears · 58m ago
You mean the same kids that are currently cheating their way through their education at record rates due to the same technology? Can't say I'm optimistic.
giardini · 3h ago
How about a link that works?

Neither the OP's URL nor djoldman's archive link allows access to the article! 8-((

giardini · 41m ago
OK, now djoldman's archive link above works!
westurner · 43m ago
AI is probably more of an amplifier for technological change than fire or digital computers; but IDK why we would use a different model for this technology (and teams and coping with change).

Diffusion of innovations: https://en.wikipedia.org/wiki/Diffusion_of_innovations :

> The diffusion of an innovation typically follows an S-shaped curve which often resembles a logistic function.
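For concreteness, a minimal sketch of that curve (Python; the parameter names are the usual textbook ones, not from the article):

    import math

    def logistic(t, L=1.0, k=1.0, t0=0.0):
        # S-curve: slow start, rapid middle, saturation.
        # L = ceiling, k = steepness, t0 = midpoint of adoption.
        return L / (1 + math.exp(-k * (t - t0)))

    for t in range(-6, 7, 2):
        print(f"t={t:+d}  adoption={logistic(t):.3f}")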

From https://news.ycombinator.com/item?id=42658336 :

> [ "From Comfort Zone to Performance Management" (2009) ] also suggests management styles for each stage (Commanding, Cooperative, Motivational, Directive, Collaborative); and suggests that team performance is described by chained power curves of re-progression through these stages

Transforming, Performing, Reforming, [Adjourning]

Carnall Coping Cycle: Denial, Defense, Discarding, Adaptation, and Internalization

aredox · 4h ago
The potentially "explosive" part of AI was that it could be self-improving. Using AI to improve AI, or AI improving itself in an exponential growth until it becomes super-human. This is what the "Singularity" and AI "revolution" is based on.

But in the end, despite claims that AI has PhD-level intelligence, the truth is that even AI companies can't get AI to help them improve faster. Anything slower than exponential is proof that their claims aren't true.

lioeters · 54m ago
> improving itself in an exponential growth

That seems like a possibly mythical critical point, at which a phase transition will occur that makes the AI system qualitatively different from its predecessors. Exponential to the limit of infinity.

All the mad rush of companies and astronomical investments are being made to get there first, counting on this AGI to be a winner-takes-all scenario, especially if it can be harnessed to grow the company itself. The hype is even infecting governments, for economic and national interest. And maybe somewhere a mad king dreams of world domination.

utyop22 · 14m ago
What world domination though? If such a thing ever existed for example in the US, the government would move to own and control it. No firm or individual would be allowed to acquire and exercise that level of power.
utyop22 · 15m ago
Said another way, will a firm suddenly improve radically because it hired a thousand PhDs? Not quite.

Many things sound good on paper. But paper and reality are very different. Things are more complex in reality.

jrm4 · 24m ago
This is brilliant and I can't believe I haven't heard this idea before.
ctoth · 5h ago
What if this paper actually took things seriously?

A serious paper would start by acknowledging that every previous general-purpose technology required human oversight precisely because it couldn't perceive context, make decisions, or correct errors - capabilities that are AI's core value proposition. It would wrestle with the fundamental tension: if AI remains error-prone enough to need human supervisors, it's not transformative; if it becomes reliable enough to be transformative, those supervisory roles evaporate.

These two Princeton computer scientists, however, just spent 50 pages arguing that AI is like electricity while somehow missing that electricity never learned to fix itself, manage itself, or improve itself - which is literally the entire damn point. They're treating "humans will supervise the machines" as an iron law of economics rather than a temporary bug in the automation process that every profit-maximizing firm is racing to patch. Sometimes I feel like I'm losing my mind when it's obvious that GPT-5 could do better than Narayanan and Kapoor did in their paper at understanding historical analogies.

nottorp · 4h ago
> because it couldn't perceive context, make decisions, or correct errors - capabilities that are AI's core value proposition

I could ask the same thing then. When will you take "AI" seriously and stop attributing the above capabilities to it?

simonh · 5h ago
LLMs do have to be supervised by humans and do not perceive context or correct errors, and it’s not at all clear this is going to change any time soon. In fact it’s plausible that this is due to basic problems with the current technology. So if you’re right, sure, but I’m certainly not taking that as a given.
cubefox · 1h ago
Exactly. People seem to want to underhype AI. It's like a chimpanzee saying: humans are just normal apes.

Delusional.

akomtu · 3h ago
Normal? AI is an alien technology to us, and we are being "normalized" to become compatible with it.
aeternum · 3h ago
AI actually seems far less alien than steam engines, trains, submarines, flight, and space travel.

People weren't sure if human bodies could handle moving at >50mph.

pessimizer · 4h ago
I've come to the conclusion that it is a normal, extremely useful, dramatic improvement over web 1.0. It's going to

1) obsolete search engines powered by marketing and SEO, and give us paid search engines whose selling points are how comprehensive they are, how predictably their queries work (I miss the "grep for the web" they were back when they were useful), and how comprehensive their information sources are.

2) Eliminate the need to call somebody in the Philippines, awake in the middle of the night, just for them to read you a script telling you how they can't help you fix the thing they sold you.

3) Allow people to carry local compressed copies of all written knowledge, with 90% fidelity, but with references and access to those paid search engines.

And my favorite part, which is just a footnote I guess, is that everybody can move to a Linux desktop now. The chatbots will tell you how to fix your shit when it breaks, and in a pedagogical way that will gradually give you more control and knowledge of your system than you ever thought you were capable of having. Or you can tell it that you don't care how it works, just fix it. Now's the time to switch.

That's your free business idea for today: LLM Linux support. Train it on everything you can find, tune it to be super-clippy. Charge people $5 a month. The AI that will free you from their AI.

Now we just need to annihilate web 2.0, replace it with peer-to-peer encrypted communications, and we can leave the web to the spammers and the spies.

fsloth · 42m ago
"everybody can move to a Linux desktop now"

People use whatever UI comes with their computer. I don't think that's going to change.

josefritzishere · 4h ago
If you read the paper, they make a good case that AI is just a normal technology. They're a bit dismissive, but they're not alone in that. The AI sector has been all too much hype and far too little substance.
ktallett · 6h ago
What do they mean, what if? It is based on something similar to what has existed for around four decades. It is of course at a higher standard of efficiency, and able to search through and combine more data, but it isn't new. It is just a normal technology, and this is why I and many others were shocked at the initial hype.
Eisenstein · 6h ago
> It is similarly based to something that has existed for around 4 decades.

Four decades ago was 1985. The thing is, there was a huge jump in progress from then until now. If we took something which had a nice ramped progress, like computer graphics, and instead of ramping up we went from '1985' to '2025' in progress over the course of a few months, do you think there wouldn't be a lot of hype?

johnbellone · 1h ago
> Four decades ago was 1985

Don't remind me.

ktallett · 5h ago
But we have ramped up slowly; it's just not been presented in quite this form before. We have previously only used it in settings where accuracy is a focus.