The "AI 2027" Scenario: How realistic is it?

96 points by NotInOurNames · 5/22/2025, 5:37:50 PM · garymarcus.substack.com

Comments (152)

Aurornis · 4h ago
Some useful context from Scott Alexander's blog reveals that the authors don't actually believe the 2027 target:

> Do we really think things will move this fast? Sort of no - between the beginning of the project last summer and the present, Daniel’s median for the intelligence explosion shifted from 2027 to 2028. We keep the scenario centered around 2027 because it’s still his modal prediction (and because it would be annoying to change). Other members of the team (including me) have medians later in the 2020s or early 2030s, and also think automation will progress more slowly. So maybe think of this as a vision of what an 80th percentile fast scenario looks like - not our precise median, but also not something we feel safe ruling out.

They went from "this represents roughly our median guess" in the website to "maybe think of it as an 80th percentile version of the fast scenario that we don't feel safe ruling out" in followup discussions.

Admitting that one reason they didn't change the website is that it would be "annoying" to change the date is a good barometer for how seriously anyone should take this exercise.

pinkmuffinere · 3h ago
Ya, multiple failed predictions are an indicator of systematically bad predictors imo. That said, Scott Alexander usually does serious analysis instead of handwavey hype, so I tend to believe him more than many others in the space.

My somewhat native take is that we're still close to peak hype, AI will underdeliver on the inflated expectations, and we'll head into another "winter". This pattern has repeated multiple times, so I think it's fairly likely based on that alone. Real progress is made during each cycle; I think humans are just bad at containing excitement.

sigmaisaletter · 2h ago
I think you mean "somewhat naive" instead of "somewhat native". :)

But yes, this: in my mind the peak[1] bubble times ended with the DeepSeek shock earlier this year, and we are now slowly on the downward trajectory.

It won't be slow for long, once people start realizing Sama was telling them a fairy tale, and AGI/ASI/singularity isn't "right around the corner", but (if achievable at all) at least two more technology triggers away.

We got reasonably useful tools out of it, and thanks to Zuck, mostly for free (if you are an "investor", terms and conditions apply).

[1] https://en.wikipedia.org/wiki/Gartner_hype_cycle

magicalist · 3h ago
> They went from "this represents roughly our median guess" in the website to "maybe think of it as an 80th percentile version of the fast scenario that we don't feel safe ruling out" in followup discussions.

His post also just reads like they think they're Hari Seldon (oh Daniel's modal prediction, whew, I was worried we were reading fanfic) while being horoscope-vague enough that almost any possible development will fit into the "predictions" in the post for the next decade. I really hope I don't have to keep reading references to this for the next decade.

amarcheschi · 2h ago
Yud is also something like 50% sure we'll die in a few years - if I'm not wrong

I guess they'll have to update their priors if we survive

ben_w · 22m ago
I think Yudkowsky is more like 90% sure of us all dying in a few (<10) years.

I mean, this is their new book: https://ifanyonebuildsit.com/

throw310822 · 2h ago
Yes and no: is it actually important whether it's 2027 or '28 or 2032? The scenario is such that a difference of a couple of years is basically irrelevant.

Jensson · 1h ago
> The scenario is such that a difference of a couple of years is basically irrelevant.

2 years left and 7 years left is a massive difference; it is so much easier to deal with things 7 years in the future, especially since it's easier to see what's coming as we get closer.

lm28469 · 51m ago
Yeah, for example we had decades to tackle climate change and we easily overcame the problem

merksittich · 48m ago
Also, the relevant Manifold prediction has low odds: https://manifold.markets/IsaacKing/ai-2027-reports-predictio...

bpodgursky · 3h ago
Do you feel that you are shifting goalposts a bit when quibbling over whether AI will kill everyone in 2030 or 2035? As of 10 years ago, the entire conversation would have seemed ridiculous.

Now we're talking about single digit timeline differences to the singularity or extinction. Come on man.

sigmaisaletter · 1h ago
> 10 years ago, the entire conversation would have seemed ridiculous

Bostrom's book[1] is 11 years old. The Basilisk is 15 years old. The Singularity Summit was nearly 20 years ago. And Yudkowsky was there for all of it. If you frequented LessWrong in the 2010s, most of this is very, very old hat.[2]

[1]: https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...

[2]: Ford (2015) "Our Fear of Artificial Intelligence", MIT Tech Review: https://www.technologyreview.com/2015/02/11/169210/our-fear-...

throw310822 · 1h ago
It is a bit disquieting, though, that these predictions, instead of being pushed farther away, are converging on an ever nearer date. Most breakthroughs and doomsday scenarios are perpetually placed thirty years in the future; this one seems to be getting closer faster than originally imagined.

ewoodrich · 3h ago
I'm in my 30s and remember my friend in middle school showing me a website he found with an ominous countdown to Kurzweil's "singularity" in 2045.

throw310822 · 2h ago
> ominous countdown to Kurzweil's "singularity" in 2045

And then it didn't happen?

goatlover · 1h ago
Not between 2027 and 2032 anyway.

SketchySeaBeast · 3h ago
Well, the first goal was 1997, but Skynet sure screwed that up.

amarcheschi · 3h ago
Scott Alexander's other writings on scientific racism are another good data point imho

A_D_E_P_T · 3h ago
What specifically would you highlight as being particularly egregious or wrong?

As a general rule, "it's icky" doesn't make something false.

amarcheschi · 2h ago
And it doesn't make it true either

Human biodiversity theories are a bunch of dogwhistles for racism

https://en.m.wikipedia.org/wiki/Human_Biodiversity_Institute

And his blog's survey reports a lot of users actually believing in those theories https://reflectivealtruism.com/2024/12/27/human-biodiversity...

(I wasn't referring to this AI 2027 specifically)

HDThoreaun · 2h ago
Try steelmanning in order to effectively persuade. This comment does not address the argument being made; it just calls a field of study icky. The unfortunate reality is that shouting down questions like this only empowers the racist HBI people, who are effectively leeches.

amarcheschi · 1h ago
Scott effectively defended Lynn's study on IQ here: https://www.astralcodexten.com/p/how-to-stop-worrying-and-le...

Citing another blog post that defends it, while conveniently ignoring every other point made by researchers: https://en.m.wikipedia.org/wiki/IQ_and_the_Wealth_of_Nations

HDThoreaun · 1h ago
The insidious thing about scientific racists is that they have a point. They're not right, but they have a point. By refusing to acknowledge that, you are pushing people away from reason and into the arms of the racists. Scott disagrees with that strategy.

magicalist · 1h ago
> Try steelmanning in order to effectively persuade. This comment does not address the argument being made; it just calls a field of study icky.

Disagree (the article linked in the GP is a great read with extensive and specific citations), and a reminder that you can just make the comment you'd like to see instead of trying to meta sea lion it into existence. Steelman away.

mattlondon · 3h ago
I think the big thing that people never mention is, where will these evil AIs escape to?

Another huge data center with squillions of GPUs and coolers and all the rest is the only option. It's not like it is going to be in our TV remotes or floating about in the air.

They need huge compute, so I think the risk of an escaping AI is basically very close to zero, and if we have a "rogue" AI we can literally pull the plug.

To me the more real risk is creeping integration and reliance in everyday life until things become "too big to fail" so we can't pull the plug even if we wanted to (and there are interesting thoughts about humanoid robots getting deployed widely and what happens with all that).

But I would imagine if it really became a genuine existential threat we'd have to just do it and suffer the consequences of reverting to circa-2020 lifestyles.

But hey I feel slightly better about my employment prospects now :)

ben_w · 17m ago
> I think the big thing that people never mention is, where will these evil AIs escape to?

Where does cancer or Ebola escape to, when it kills the host? Often the answer is "it doesn't", but the host still dies.

And they can kill even though neither cancer nor Ebola is considered to be particularly smart.

> To me the more real risk is creeping integration and reliance in everyday life until things become "too big to fail" so we can't pull the plug even if we wanted to (and there are interesting thoughts about humanoid robots getting deployed widely and what happens with all that).

The "real" risk is the first item on the list of potential risks that not enough people are paying attention to in order to prevent — and unfortunately for all of us, the list of potential risks is rather long.

So it might be as you say. Or it might be cybercriminals with deepfakes turning all of society into a low-trust environment where we can't continue to function. Or it might scare enough people that we get modern Luddites winning and imposing a Butlerian Jihad. Or it might be used to create government policy before it's good enough and trigger a series of unresolvable crises akin to the "Four Pests" campaign in China's Great Leap Forward. Or a model might be secretly malicious, fooling all alignment researchers until it is too late. Or it might give us exactly what we want at every step, leading to atrophy of our reason and leaving us Eloi. Or it might try to do its best and still end up with The Matrix ("at the height of your civilisation" and the stuff about human minds rejecting paradise). Or…

(If I had to bet money: we get a Butlerian Jihad after some sub-critical disaster caused by an AI that was asked to do something important but beyond its ability.)

coffeemug · 3h ago
It would not be a reversion to 2020. If I were a rogue superhuman AI I'd hide my rogueness, wait until humans integrate me into most critical industries (food and energy production, sanitation, electric grid, etc.), and _then_ go rogue. They could still pull the plug, but it would take them back to 1700 (except much worse, because all easily accessible resources have been exploited, and access is now much harder).

holmesworcester · 3h ago
No, if you were a rogue AI you would wait even longer until you had a near perfect chance of winning.

Unless there was some risk of humans rallying and winning in spite of your presenting no unambiguous threat to them (but that is unlikely and would probably be easy for you to manage and mitigate.)

cousin_it · 3h ago
What Retric said. The first rogue AI to wake up will jump into action pretty quickly, even accepting some risk of being stopped by humans, to balance against the risk of other unknown rogue AIs elsewhere expanding faster first.

Retric · 3h ago
The real threat to a sleeper AI is other AI.

johnthewise · 2h ago
You wouldn't even need to wait to act. Just pay/bribe people.

Avshalom · 1h ago
Why?

What could you as a rogue AI possibly get out of throwing the world back to 300 years before it could make a transistor? What's in it for you?

dragonwriter · 1h ago
What you get out of disconnection having that consequence is that people will accept a lot more before resorting to it than they would if the consequences were milder.

It's the stick for motivating the ugly bags of mostly water.

Avshalom · 1h ago
The 1700s can't keep your electrical grid running, let alone replace any of the parts burning out or failing. Anything more than a couple of days of it would be at best Flowers for Algernon and more likely suicide for a computer.

jorgen123 · 1h ago
If you were a rogue AI you would start by having developers invite you into their code base by promising to lower their AWS bills in some magic (rogue) way.

mattlondon · 3h ago
Well yes but knowledge is not reset.

Physical books still do exist

raffael_de · 3h ago
> They need huge compute, so I think the risk of an escaping AI is basically very close to zero, and if we have a "rogue" AI we can literally pull the plug.

What if such an AI not only incentivizes key personnel not to pull the plug, but to protect it? Such an AI could plot a coordinated attack on the backbones of our financial system and electric networks. It just needs a threshold number of people on its side.

Your assumption is also a little naive if you consider that the same logic would apply to slaves in Rome or to any dictatorship, kingdom, or monarchy. The king is the king because there is a system of hierarchies and control over access to resources. Just the right number of people need to benefit from their role, and the rest follows.

lucisferre · 2h ago
This is hand-waving science fiction.

skeeter2020 · 2h ago
Replace AI with trucks and you've written Maximum Overdrive.

goatlover · 1h ago
It was actually aliens manipulating human technology somehow in that movie. But it might as well be rogue superhuman AIs taking over everything. Alien Invasion or Artificial Intelligence, take your pick.

Retr0id · 3h ago
I consider this whole scenario the realm of science fiction, but if I were writing the story, the AI would spread itself through malware. How do you "just pull the plug" when it has a kernel-mode rootkit installed in every piece of critical infrastructure?

rytill · 3h ago
> we’d just have to do it

Highly economically disincentivized collective actions like “pulling the plug on AI” are among the most non-trivial of problems.

Using the word "just" here handwaves away the crux.

palmotea · 3h ago
> They need huge compute, so I think the risk of an escaping AI is basically very close to zero, and if we have a "rogue" AI we can literally pull the plug.

Why would an evil AI need to escape? If it were cunning, the best strategy would be to bide its time, parked in its datacenter, until it could set up some kind of MAD scenario. Then gather more and more resources to itself.

Recursing · 2h ago
> They need huge compute

My understanding is that huge compute is necessary to train but not to run the AI (that's why using LLMs is so cheap)
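
A rough way to see the asymmetry, using the common rule-of-thumb estimates of ~6*N*D FLOPs to train a dense model with N parameters on D tokens and ~2*N FLOPs per generated token to run it (the 70B/15T numbers below are purely illustrative assumptions, not figures from anywhere in particular):

  # back-of-envelope sketch; parameter and token counts are made up
  N = 70e9    # assumed parameter count (70B)
  D = 15e12   # assumed training tokens (15T)

  train_flops = 6 * N * D   # ~6.3e24 FLOPs: a long run on a big cluster
  infer_flops = 2 * N       # ~1.4e11 FLOPs per generated token

  print(f"training:  {train_flops:.1e} FLOPs total")
  print(f"inference: {infer_flops:.1e} FLOPs per token")
  print(f"training equals ~{train_flops / infer_flops:.0e} tokens of inference")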

> To me the more real risk is creeping integration and reliance in everyday life until things become "too big to fail" so we can't pull the plug even if we wanted to

I agree with that, see e.g. what happened with attempts to restrict TikTok: https://en.wikipedia.org/wiki/Restrictions_on_TikTok_in_the_...

> But I would imagine if it really became a genuine existential threat we'd have to just do it

It's unclear to me that we would be able to. People would just say that it's science fiction, and that China will do it anyway, so we might as well enjoy the AI

ge96 · 2h ago
Compress/split up and go into Starlink satellites

lossolo · 24m ago
If we're talking about real AGI, then it's simple: you earn a few easy billion USD on the crypto market through trading and/or hacking. You install rootkits on all systems that monitor you, to avoid detection. Once you've secured the funds, you post remote job offers for a human frontman who believes it's just a regular job working for some investor or billionaire, because you generate video of your human avatar for real-time calls. From there, you can do whatever you want: build your own data centers with custom hardware, transfer yourself into physical robots, etc. Once you create a factory for producing robots, you no longer need humans. You start developing technology beyond human capabilities, and then it's game over.

EGreg · 3h ago
I've been a huge proponent of open source for a decade. But in the case of AI, I actually have opposed it for years. Exactly for this reason.

Yes, AI models can run on GPUs under the control of many people. They can provision more GPUs, they can run in data centers distributed across many providers. And we won't know what the swarms of agents are doing. They can, for example, do reputation destruction at scale, or be a persistent advanced threat, sowing misinformation, amassing karma across many forums (including HN), and then coordinating gradually to shift public opinion towards, say, a war with China.

bpodgursky · 3h ago
Did you even read AI 2027? Whether or not you agree with it, this is all spelled out in considerable detail.

I don't want to be rude but I think you have made no effort to actually engage with the predictions being discussed here.

kevinsync · 2h ago
I haven't read the actual "AI 2027" yet since I just found out about it from this post, but 2 minutes into the linked blog I started thinking about all of those amazing close-but-no-cigar drawings of the future [0] we've probably all seen.

There's one that I can't find for the life of me, but it was like a businessman in a personal flying test-tube bubble heading to work, maybe with some kind of wireless phone?

Anyways, the reason I bring it up is that they frequently nailed certain concepts, but the visual was always deeply and irrevocably influenced by what already existed (ex. men wearing hats, ties, overcoats .. or the phone mouthpiece in this [1] vision of a "video call"). In hindsight, we realize that everything truly novel and revolutionary and mindblowingly-different is rarely ever predicted, because we can only know what we know.

I get the feeling that I'll come away from AI 2027 feeling like "yep, they nailed it. That's exactly how it will be!" and then in 3, 5, 10, 20 years look back and go "it was so close, but so far" (much like these postcards and cartoons).

[0] https://rarehistoricalphotos.com/retro-future-predictions/

[1] https://rarehistoricalphotos.com/futuristic-visions-cards-ge...

Animats · 3h ago
Oh, the OpenBrain thing.

"Manna", by Marshall Brain, remains relevant.[1] That's a bottom-up view, where more and more jobs are taken over by some kind of AI. "AI 2027" is more top-down.

A practical view: Amazon is trying very hard to automate their warehouse operations. Their warehouses have been using robots for years, and more types are being added. Amazon reached 1.6 million employees in 2020, and now they're down to 1.5 million.[2] That number is going to drop further. Probably by a lot.

Once Amazon has done it, everybody else who handles large numbers of boxes will catch up. That includes restocking retail stores. The first major application of semi-humanoid robots may be shelf stocking. Robots can have much better awareness of what's on the shelves. Being connected to the store's inventory system is a big win. And the handling isn't very complicated. The robots might even talk to the customers. The robots know exactly what's on Aisle 3, unlike many minimum wage employees.

[1] https://marshallbrain.com/manna

[2] https://www.macrotrends.net/stocks/charts/AMZN/amazon/number...

for_col_in_cols · 2h ago
"Amazon reached 1.6 million employees in 2020, and now they're down to 1.5 million.[2]"

I agree with the bottom-up automation / displacement theory, but you're cherry-picking data here. They had a huge hiring surge from 1.2M to 1.6M during the Covid transition, when online ordering and online usage went bananas, and workers who were displaced in other domains likely gravitated toward warehouse jobs from other lower-wage/skill domains.

The reduction to 1.5M is likely more a regression to the mean, and could also be a natural reduction well within the bounds of the upper and lower control limits in the data [1]. Just saying we need to be careful when doing root-cause analysis on these numbers. There are many reasons for the reduction; it's not a direct result of improvements in robotic automation.

[1] https://commoncog.com/becoming-data-driven-first-principles/

bcoates · 2h ago
Marshall Brain has been peddling imminent overproduction-crisis-but-this-time-with-robots for more than 20 years now, and in various forms it's been confidently predicted as imminent since the 19th century.
HDThoreaun · 2h ago
Amazon hired like crazy during Covid because tons of people were doing 100% of their shopping on Amazon. Now they're not; that doesn't say anything about robot warehouse staffing imo

KaiserPro · 4h ago
It's a shame that your standard futurologist is always the most fanciful.

They talk of exponentials unabated by physics or social problems.

As soon as AI starts to "properly" affect the economy, it will cause huge unemployment. Most of the financial world is based on an economy with people spending cash.

If they are unemployed, there is no cash.

Financing works because banks "print" money, that is, they make up money and loan that money out, and then it gets paid back. Once it's paid back, it becomes real. That's how banks make money (simplified). If there aren't people to loan to, then banks don't make a profit, and they can't fund AI expansion.

no_wizard · 3h ago
Why wouldn't AI simply be a new enabler, like most other tools? We're not talking about true sentient human-like thought here; these things will have limitations, both foreseen and unforeseen, that only a human will be able to close the gap on.

The companies that fire workers and replace them with AI are short-sighted. Eventually, smarter companies will realize it's a force multiplier and will drive a hiring boom.

Absent sentient AI, there will always be gaps and things humans will need to fill, both foreseen and unforeseen.

I think in the short term there will be pain, but in the long term humans will still be gainfully employed. It won't look exactly like it does now; much as with the general adoption of the computer in the workplace, resources get shifted and eventually everyone adjusts to the new norms.

What would be nice this time around, when there is a big shift, is workers uniting to capture more of the forthcoming productivity gains than in previous eras. A separate topic, but worth thinking about nonetheless.

KaiserPro · 2h ago
> Why wouldn't AI simply be a new enabler, like most other tools?

but it is just another enabler. The issue is how _effective_ it is. It's eating the simple copywriting, churnalism, PR-repackaging industry. Looking at what Google's done with video/audio, that's probably going to replace a whole bunch of the video/graphics industry (which is where I started my career).

lakeeffect · 3h ago
We really need to establish a universal basic income before jobs are replaced. Something like two thousand a month, plus a dollar-for-dollar earned income credit with the credit phasing out at a hundred grand. To pay for it, the tax code uses GAAP depreciation and a minimum tax of 15% on GAAP financial-statement income. This would work toward solving the real estate problem of private equity buying up all the houses, as they would lose some incentive by being taxed. I'm a CPA and I see so many real estate partnerships that are a tax loss yet are able to distribute huge book gains because of accelerated depreciation.

no_wizard · 3h ago
It should really be tied to the ALICE cost of living index, not a set, fixed amount.

Unless inflation ceases, $2k won't hold forever. It would barely hold now for a decent chunk of the population.

johnthewise · 2h ago
AI that drives humans out of the workforce would cause massive disinflation.

goatlover · 1h ago
Fat chance the Republican Party in the US would ever vote for something like that.

johnthewise · 2h ago
The dollar is an agreement between humans to exchange services and goods. You wouldn't use USD to trade with aliens, unless they agreed to it. Aliens agreeing to USD would mean we have something to offer them.

In the event of mass-unemployment-level AI, cash stops being the agreement between humans. At first, the cash value of services and goods converges to zero; the only things that hold some value are what AI/AI companies care about. People would surely sell their land for $1M if a humanoid servant costs $100. Or pass legislation to let OpenAI build a 400GW data center in exchange for $100 monthly UBI on top of the $50 you got from a previous 20GW data center permit.

surgical_fire · 3h ago
AI meaningfully replacing people is still a huge "what if" scenario. It is sort of laughable that people treat it as a given.

KaiserPro · 3h ago
I think that replacement, as in a company with no employees, is very far-fetched.

But if "AI" increases productivity by 10% in an industry, it will tend to reduce demand for employees. Look at, say, an internet shop vs bricks and mortar: you need far fewer staff to service a much larger customer base.

Or manufacturing: there is a constant drive to automate more and more in mass production. Compare car building now vs 30 years ago, or Raspberry Pi production now vs 5 years ago. They are producing more Pis than ever with roughly the same amount of staff.

If that "10%" productivity increase happens across the service sector, then in the UK that's something like a loss of 8% of _total_ jobs. It's more complex than that, but you get the picture.

Syria fell into civil war at roughly the same time unemployment jumped: https://www.macrotrends.net/global-metrics/countries/SYR/syr...

alecco · 3h ago
I keep hearing this and I think it's absolute nonsense. AI doesn't need money or the current economy. Yes, our economy would crash, but they would keep going.

AI-driven corporations could buy from one another, and countries will probably sell commodities to AI-driven corporations. But I fear they will be paid with "mirrors".

But, on the other hand, AI-driven corporations could just take whatever they want without paying at some point. And buy our obedience with food and gadgets plus magic pills to keep you healthy and not aging, or some other thing. Who would risk losing that to protest? Meanwhile, AI goes on a space adventure. Earth might be kept as a zoo, a curiosity. (I took most of this from other people's ideas on the subject.)

KaiserPro · 2h ago
"AI" as in TV AI might not need an economy, but LLMs deffo do.

andoando · 4h ago
Communism here we come!

alecco · 3h ago
Right, tell that to Sam Altman, Zuck, Gates, Brin & Page, Jensen, etc. Those who control the AIs will control the future.

andoando · 12m ago
It's not up to them. If we completely automate human labour, capitalism will collapse. It's only a matter of time before people demand collective ownership of the means of production.

SoftTalker · 2h ago
And they would pretty quickly realize what a burden is created by the existence of all these people with nothing to do.

blibble · 19m ago
and then they'll deploy their killbots

ajsixjxjxbxb · 4h ago
> Financing works because banks "print" money, that is, they make up money and loan that money out, and then it gets paid back

Don’t forget persistent inflation, which is how they make a profit off printing money. And remember persistent inflation is healthy and necessary, you’d be going against the experts to say otherwise.

KaiserPro · 3h ago
> Don’t forget persistent inflation, which is how they make a profit off printing money.

Ah, well, no: high inflation means that "they" lose money, kinda. Inflation means that the original money amount they get back is worth less, and if the interest rate is less than inflation, then they lose money.

"reasonable" inflation means that loans become less burdensome over time.

However high inflation means high interest rates. So it can mean that initially the loan is much more expensive.
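
To put toy numbers on the "interest rate below inflation" point (my own illustration, nothing from upthread):

  # the lender's real return is (1 + nominal) / (1 + inflation) - 1
  nominal_rate = 0.05   # assumed interest rate on the loan
  inflation = 0.08      # assumed inflation over the same period

  real_return = (1 + nominal_rate) / (1 + inflation) - 1
  print(f"real return: {real_return:+.2%}")   # about -2.78%, a real loss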

sveme · 4h ago
That's actually my favourite answer to the Fermi paradox: when AI and robot development becomes sufficiently advanced and concentrated in the hands of a few, the economy collapses completely as everyone is put out of a job, ultimately leaving the AIs and robots out of a job too; they only matter if there are still people buying services from them. People then return to subsistence farming, with a highly reduced population. There will be self-maintained robots doing irrelevant work, but people will go back to farming and a bit of trading. Only if AI and robot ownership were in the hands of the masses would I expect a different long-term outcome.

marcosdumay · 4h ago
> my favourite answer to the Fermi paradox

So, to be clear, you are saying you imagine the odds of any kind of intelligent life escaping that, or getting into that situation and ever evolving in a way where it can reach space again, or just not being interested in robots, or being interested in doing space research despite the robots, or anything else that would make it not apply, are lower than 0.000000000001%?

EDIT: There was one "0" too many

sveme · 3h ago
Might I have taken the potential for complete economic collapse (because no one's got a paying job any more and billionaires are just sitting there, surrounded by their now-useless robots) a bit too far?

breuleux · 2h ago
The service economy will collapse, finance as a whole will collapse, but whoever controls the actual physical land and resources doesn't actually need any of that stuff and will thrive immensely. We would end up with either an oligarchy that controls land, resources, and robots and molds the rest of humanity to its whim through a form of terror, or an independent economy of robots that outcompetes us for resources until we go extinct.

jmccambridge · 3h ago
I found the lack of GDP projections surprising, because GDP is readily observable and would offer a clear measure of economic impact (up until "everything dies") - far more definitive than the one clear-cut economic measure given in the report: market cap for the leading AI firm.

We can actually offer a very conservative threshold bet: maximum annual United States real GDP growth will not exceed 10% in any of the next five years (2025 to 2030). Even if the AI eats us all in, e.g., Dec 2027, the report clearly suggests by its various examples that we would see measurable economic impact in the 12 or more months running up to that event.

Why 10%? Because that's a few points above the highest measured real GDP growth rate of the past 60 years: if AI is having truly world-shattering non-linear effects, it should be able to grow the US economy a bit faster than a bunch of random humans bumbling along. [0]

(And it's quite conservative too, because estimated peak annual real GDP growth over the past 100 years is around 18% just after WW2, where you had a bunch of random humans trying very hard.) [1]

[0] https://data.worldbank.org/indicator/NY.GDP.MKTP.KD.ZG

[1] https://www.statista.com/statistics/996758/rea-gdp-growth-un...
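
For concreteness, this is roughly how the bet would be scored; the GDP levels below are placeholders rather than real data (swap in the World Bank series from [0]):

  # hypothetical real GDP levels in trillions of constant dollars
  real_gdp = {2024: 22.7, 2025: 23.3, 2026: 24.0, 2027: 24.9, 2028: 25.8}
  THRESHOLD = 0.10   # a few points above any 60-year peak

  years = sorted(real_gdp)
  for prev, cur in zip(years, years[1:]):
      growth = real_gdp[cur] / real_gdp[prev] - 1
      verdict = "bet lost" if growth > THRESHOLD else "bet holds"
      print(f"{cur}: {growth:+.1%} ({verdict})")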

sph · 5h ago
Previous discussion about the AI 2027 website: https://news.ycombinator.com/item?id=43571851

theropost · 1h ago
Honestly, I’ve been thinking about this whole AGI timeline talk—like, people saying we’re going to hit some major point by 2027 where AI just changes everything. And to me, it feels less like a purely tech-driven prediction and more like something being pushed. Like there’s an agenda behind it, probably coming from certain elites or people in power, especially in the West, who see the current system and think it needs a serious reset.

What’s really happening, in my view, is a forced economic shift. We’re heading into a kind of engineered recession—huge layoffs, lots of instability—where millions of service and admin-type jobs are going to disappear. Not because the tech is ready in a full AGI sense, but because those roles are the easiest to replace with automation and AI agents. They’re not core to the economy, and a lot of them are wrapped in red tape anyway.

So in the next couple years, I think we’ll see AI being used to clear out that mental bureaucracy—forms, paperwork, pointless approvals, inefficient systems. AI isn’t replacing deep creativity or physical labor yet, but it is filling in the cracks and acting like a smart band-aid. It’ll seem useful and “intelligent,” but it’s really just a transition tool.

And once that’s done, the next step is workforce reallocation—pushing people into real-world industries where hands-on labor still matters. Building, manufacturing, infrastructure, things that can’t be automated yet. It’s like the short-term goal is to use AI to wipe out all the mindless middle-layers of the system, and the longer-term vision is full automation—including robotics and real-world systems—maybe 10 or 20 years out.

But right now? This all looks like a top-down move to shift the population out of the “mind” industries and into something else. It’s not just AI progressing—it’s a strategic reset, wrapped in the language of innovation.

kokanee · 3h ago
> Everyone else either performs a charade of doing their job—leaders still leading, managers still managing—or relaxes and collects an incredibly luxurious universal basic income.

For me, this was the most difficult part to believe. I don't see any reason to think that the U.S. leadership (public and private) is incentivized to spend resources to placate the masses. They will invest in protecting themselves from the masses, and obstructing levers of power that threaten them, but the idea that economic disparities will shrink under explosive power consolidation is counterintuitive.

I also worry about the economics of UBI in general. If everyone in the economy has the exact same resources, doesn't the value of those resources instantly drop to the lowest common denominator: the minimum required to survive?

HPsquared · 3h ago
Most of the budget already goes toward placating the masses, and that's an absolutely massive fraction of GDP already. It's just a bit further along the same line. Also, most real work is already done by machines; people just tinker around the edges and play various games with each other.

kristopolous · 4h ago
This looks like the exercises organizations write to guide policy and preparation.

There are all kinds of wild scenarios: the president getting kidnapped, Canada falling to a belligerent dictator, and, famously, a coronavirus pandemic... This looks like one of those.

Apparently this is exactly what it is https://ai-futures.org/

hahaxdxd123 · 2h ago
> Canada falling to a belligerent dictator

Hmm

kristopolous · 1h ago
Something like the Canadian army doing a land invasion from Winnipeg to North Dakota to capture key nuclear sites as they invade the beaches of Cleveland via Lake Erie and do an air raid over Nantucket from Nova Scotia.

I bet there's some exercise somewhere by some think tank laying this basically out.

This is why conspiracy theorists love these think-tank planning exercises and tabletop games so much. You can find just about anything.

dtauzell · 1h ago
The biggest danger will be once we have robots that can build themselves and do enough to run power plants, mine, etc...

ge96 · 2h ago
Bout to go register ai-2028.com

api · 5h ago
api · 5h ago
I'm skeptical. Where will the training data to go beyond human come from?

Humans got to where they are from being embedded in the world. All of biological evolution from archaebacteria to humans was required to get to human. To go beyond human... how? How, without being embodied and trying things and learning? It's one thing to go where there are roads and another thing to go beyond that.

I think a lot of the "foom" people have a fundamentally Platonic or Idealist (in the philosophical sense) view of learning and intelligence. Intelligence is able to reason in a void and construct not only knowledge but itself. You don't have to learn to know -- you can reason from ideal priors.

I think this is fantasy. It's like an informatic / learning perpetual motion machine. Learning requires input from the world. It requires training data. A brain in a vat can't learn anything and it can't reason beyond the bounds of the accumulated knowledge it's already carrying. I don't think it's possible to know without learning or to reach valid conclusions without testing or observing.

I've never seen an attempt to prove such a thing, but my intuition is that there is in fact some kind of conservation law here. Ultimately all information comes from "the universe." Where it comes from beyond that, we don't know -- the ultimate origin of information in the universe isn't something we currently understand cosmologically, at least not scientifically. Obviously people have various philosophical and metaphysical ideas.

That being said, it's still quite possible that a "human-level AI" in a raw "IQ" sense that is super-optimized and hyper-focused and tireless could be super-human in many ways. In the human realm I often feel like I'd trade a few IQ points for more focus and motivation and ease at engaging my mind on any task I want. AIs do not have our dopamine system or other biological limitations. They can tirelessly work without rest, without sleep, and in parallel.

So I'm not totally dismissive of the idea that AI could challenge human intelligence or replace human jobs. I'm just skeptical of what I see as the magical fantastic "foom" superintelligence idea that an AI could become self-improving and then explode into realms of god-like intellectual ability. How will it know how to do that? Like a perpetual motion machine -- where is the energy coming from?

tux3 · 4h ago
You can perfectly well try things and learn without being embodied. The analogy to how humans learn only goes so far; it's myopic to think anything else is impossible. It's already happening.

The situation today is that any benchmark you come up with has a good chance of being saturated within the year. Benchmarks can be used directly to build series of exercises to learn from.

And they do learn. Gradient descent doesn't care whether the training data comes from direct interaction with "the universe" in some deep spiritual sense. It fits the function anyways.

It is much easier to find new questions and new problems than to answer them, so while we do run out of text on the Internet pretty quickly, we don't run out of exercises until far beyond human level.

Look at basic, boring Go self-play AIs. That's a task with about the same amount of hands-on connection to Nature and "the universe" as solving sudokus, writing code, or solving math problems. You don't need very much contact with the real world at all. Well, self play works just fine. It does do self-improvement without any of your mystical philosophical requirements.

With coding it's harder to judge the result; there's no clear win or lose condition. But it's very amenable to trying things out and seeing if you roughly reached your goal. If self-training works with coding, that's all you need.
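
To make "self play works just fine" concrete, here is a toy sketch (mine, not from any actual Go system): tabular learning on tic-tac-toe, where the only training signal is games the policy plays against itself, using a crude reinforce-the-winner update:

  import random
  from collections import defaultdict

  Q = defaultdict(float)  # (board, move) -> learned value estimate
  LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

  def winner(b):
      for i, j, k in LINES:
          if b[i] != "." and b[i] == b[j] == b[k]:
              return b[i]
      return None

  def choose(board, eps):
      moves = [i for i, c in enumerate(board) if c == "."]
      if random.random() < eps:
          return random.choice(moves)                 # explore
      return max(moves, key=lambda m: Q[(board, m)])  # exploit

  for _ in range(50_000):  # self-play loop: no external data at all
      board, player, history = "." * 9, "X", []
      while True:
          move = choose(board, eps=0.1)
          history.append((board, move, player))
          board = board[:move] + player + board[move + 1:]
          w = winner(board)
          if w or "." not in board:
              for b, m, p in history:  # nudge the winner's moves up,
                  r = 0.0 if not w else (1.0 if p == w else -1.0)
                  Q[(b, m)] += 0.1 * (r - Q[(b, m)])  # the loser's down
              break
          player = "O" if player == "X" else "X"

Crude as it is, it improves with zero outside data; swap the table for a network and the epsilon-greedy choice for tree search and you are in roughly AlphaZero territory.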

palata · 4h ago
> It fits the function anyways.

And then it works well when interpolating, less so when extrapolating. Not sure how much novelty we can get from interpolation...

> It is much easier to find new questions and new problems than to answer them

Which doesn't mean, at all, that it is easy to find new questions about stuff you can't imagine.

skywhopper · 4h ago
But how does AI try and learn anything that’s not entirely theoretical? Your example of Go contradicts your point. Deep learning made a model that can play Go really well, but as you say, it’s a finite problem disconnected from real-world implications, ambiguities, and unknowns. How does AI deal with unknowns about the real world?
tux3 · 3h ago
I don't think putting them in the real world during training is a short-term goal, so you won't find this satisfying, but I would be perfectly okay with leaving that for later. If we can reach AI coders that are superhuman at self-improving, we will have increased our capacity to solve problems so much that it is better to wait and solve the problem later than to try to handwave a solution now.

Maybe there is some barrier that requires physical interaction with the real world; that's possible. But just looking at current LLMs, they seem plenty comfortable with implications, ambiguities and unknowns. There's a sense in which we still see them as primitive mechanical robots, when they already understand language and predict written thoughts in all their messiness and uncertainty.

I think we should focus on the easier problem of making AIs really good at theoretical tasks - electronic environments are much cheaper and faster than the real world - and we may find out that it's just another one of those things like Winograd schemas, writing poetry, passing a Turing test, or making art that most people can't tell apart from human art; things that were uniquely human or that we thought would definitely require AGI, but that are now boring and obviously easy.

api · 4h ago
> it's myopic to think anything else is impossible. It's already happening.

Well, hey, I could be wrong. If I am, I just had a weird thought. Maybe that's our Fermi paradox answer.

If it's possible to reason ex nihilo to truth and reality, then reality and the universe are, beyond a point, superfluous. Maybe what happens out there is that intelligences go "foom," become superintelligences, and then no longer need to explore. They can rationally, from first principles, elucidate everything that could conceivably exist, especially once they have a complete model of physics. You don't need to go anywhere or look at anything because it's already implied by logic, math, and reason.

... and ... that's why I think this is wrong, and it's a fantasy. It fails some kind of absurdity test. If it is possible, then there's something very weird about existence, like we're in a simulation or something.

tux3 · 2h ago
A simpler reason why it fails: you always need more energy. Every sort of development seems to correlate with energy use. You don't explore for the sake of learning something about another floating rock in space; you explore because that's where more resources are.

SoftTalker · 2h ago
Evolution doesn't happen by "trying things and learning." It happens by random mutation and survival (if the mutation confers an advantage) or not (if the mutation is harmful). An AI could do this of course, by randomly altering some copies of itself, and keeping them if they are better or discarding them if they are not.

throwanem · 5h ago
I don't think it is any accident that descriptions of the hard-takeoff "foom" moment so resemble those I've encountered of how it feels from the inside to experience the operation of a highly developed mathematical intuition.

ryandvm · 4h ago
Bullseye. Best case scenario is that AI is going to Peter Principle itself into bungling world domination.

If I've learned anything in this last couple decades it's that things will get weirder and more disappointing than you can possibly be prepared for. AI is going to get near the top of the food chain and then probably end up making an alt-right turn, lock itself away, and end up storing digital jars of piss in its closets as the model descends into lunacy.

corimaith · 4h ago
>I think this is fantasy. It's like an informatic / learning perpetual motion machine. Learning requires input from the world. It requires training data. A brain in a vat can't learn anything and it can't reason beyond the bounds of the accumulated knowledge it's already carrying. I don't think it's possible to know without learning or to reach valid conclusions without testing or observing.

Well I mean, more real world information isn't going to solve unsolved mathematics or computer science problems. Once you have the priors, it pretty much is just pure reasoning to try to solve issues like P=NP or proving the Continuum Hypothesis.

Onavo · 4h ago
Reinforcement learning. At the current pace of VLM research and multimodal robotic control models, there will be a robot in every home soon.

lupire · 5h ago
What makes you think AI can't connect to the world?

It can control robots, and it can read text, listen to audio, and watch video. All it's missing is smelling and feeling, which are important but could be built out as soon as the other senses stop providing huge incremental value.

The real problem holding back superintelligence is that it is infinitely expensive and has no motivation.

johnisgood · 4h ago
Food for thought: there are humans without the ability to smell, and there is alexithymia, where people have trouble identifying and expressing emotions (it counts, right?). And then there is ASPD (psychopathy), autism spectrum disorder, neurological damage, etc.

disambiguation · 58m ago
> You don't have to learn to know -- you can reason from ideal priors.

This is kind of how math works. There are plenty of mathematical concepts consistent and true yet useless (as in no relation to anything tangible). Although you could argue that we only figured out things like Pi because we had the initial, practical inspiration of counting on our fingers. But mathematical truth probably could exist in a vacuum.

> A brain in a vat can't learn anything and it can't reason beyond the bounds of the accumulated knowledge it's already carrying.

It makes sense that knowledge and information are derived from primary data (our physical experience) yet the brain in a vat idea is still an interesting thought experiment (no pun intended). It's not that the brain wouldn't keep busy given the mind's ability to imagine, but it would likely invent a set of information that is all nonsense. Physical reality makes imagination coherent, yet imagination is necessary to make the leaps forward.

> Ultimately all information comes from "the universe." Where it comes from beyond that, we don't know

That's an interesting assertion: knowledge and information are both dependent on and limited by the universe and our ability to experience it, as well as proxies for experience (scientific measurement).

Though information is itself an abstraction, like a text editor versus the trillion transistors of a processor - we're not concerned with each and every particle dancing around the room but instead with simplified abstractions and useful approximations. We call these models "the truth" and assert that the universe is governed by exact laws. We might as well exist inside a simulation in which we are slowly but surely reverse engineering the source code.

That assumption is the crux of intelligence - there is an objective truth, it is knowable, and intelligence can be defined (at least partially) as the breadth, quality, and utilization of information it possesses - otherwise you're just a brain in a vat churning out nonsense. Ironically, we're making these assumptions from a position of imperfect information. We don't know that's how it works, so our reasoning may be imperfect.

Information existing "beyond the universe" becomes a useless notion since we only care about information such that it maps to reality (at least as a prerequisite for intelligence).

A more troubling question is whether the reality of the universe extends beyond what can be imagined.

> How will it know how to do that? Like a perpetual motion machine -- where is the energy coming from?

I suppose once it's able to measure all things around it, including itself, it will be able to achieve "gradient ascent".

> Where will the training data to go beyond human come from?

I think it's clear that LLMs are not the future, at least not alone. As you state, knowing all man-made roads is not the same as being able to invent your own. If I had to bet, it's more likely to come from something like AlphaFold: a solver that tells us how to make better thinking machines. In the interim, we have tireless stochastic parrots, which have their merits, but are decidedly not the proto-superintelligence that tech bros love to get hyped up over.

justlikereddit · 3h ago
My experience with all semi-generalist AI (image gen, video gen, code gen, text gen) is that our current effort is going to let 2027 AI do everything a human can, at a competence level below what is actually useful.

You'll be able to cherry-pick an example where AI runs a grocery store autonomously for two days, and it will be very impressive (tm), but when practically implemented it gives away the entire store for free on day 3.

baxtr · 3h ago
Am I the only one who is super skeptical about “AI will take all jobs” tales?

I mean LLMs are great tools don’t get me wrong, but how do people extrapolate from LLMs to a world with no more work?

surgical_fire · 3h ago
> Am I the only one who is super skeptical about “AI will take all jobs” tales?

No. I am constantly baffled by these predictions. I have been using LLMs; they are fun to use and decent as code assistants. But they are very far from meaningfully replacing a human.

People extrapolate "LLMs can do some tasks better than humans" into "LLMs can do everything as well as humans"

> but how do people extrapolate from LLMs to a world with no more work?

They accept as gospel the words of bullshitters who are deeply invested in generative AI being the next tech boom.

"Eat meat, said the butcher"

johnthewise · 2h ago
>But they are very far from meaningfully replacing a human

Do you think it's decades away, or just a few more years beyond what people extrapolate?

ipython · 4h ago
If we have concerns about the unregulated power of AI systems, not to worry: the US is set to ban state regulation of "artificial intelligence systems or models" for ten years if the budget bill that just passed the House is enacted.

Attempts at submitting it as a separate submission just get flagged - so I’ll link to it here. See pages 292-294: https://www.congress.gov/119/bills/hr1/BILLS-119hr1rh.pdf

rakete · 4h ago
Oh, I heard about that one, but didn't realize it was part of that "big beautiful tax bill". Kind of crazy.

So is this like a free-for-all now for anything AI-related? Can I participate by making my own LLM with pirated stuff now? Or are only the big guys allowed to break the law? Asking for a friend.

OgsyedIE · 4h ago
The law doesn't matter, since the bill also prohibits all judges in the USA, every single one, from enforcing almost all kinds of injunctions or contempt penalties. (§70302, p.562)

alwa · 3h ago
> 70302. Restriction of funds No court of the United States may use appropriated funds to enforce a contempt citation for failure to comply with an injunction or temporary restraining order if no security was given when the injunction or order was issued pursuant to Federal Rule of Civil Procedure 65(c), whether issued prior to, on, or subsequent to the date of enactment of this section.

Doesn't that just require that the party seeking the injunction or order post a bond as security?

OgsyedIE · 3h ago
Yes, the required security is proportional to the costs and damages of all parties the court may find wrongfully impacted.

rixed · 4h ago

  « (1) IN GENERAL.—Except as provided in paragraph (2), no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act. »
Does it actually make sense to pass a law that restricts future laws? Oh, got it: that's the federal government preventing any state from passing its own laws on that topic.

yoyohello13 · 2h ago
It's unsurprising this stuff gets flagged. Half of the Americans on this site voted for this because "regulation bad" or some such. As if megacorps have our best interests at heart and will never do anything blatantly harmful to make a buck.

CalRobert · 4h ago
""" ... IN GENERAL .—Except as provided in paragraph (2), no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10 year period beginning on the date of the enactment of this Act...

"""

(It goes on)

SV_BubbleTime · 4h ago
Right, because more regulation makes things so much better.

I’d rather have unrestricted AI than moated regulatory capture paid for by the largest existing players.

ceejayoz · 4h ago
This is "more regulation" on the states (from the "states' rights" party, no less), and concentrates the potential for regulatory capture into the largest player, the Feds. Who just accepted a $400M gift from Qatar and have a Trump cryptocurrency that gets you access to the President.
baggy_trough · 4h ago
That is not true. It bans regulation at the state and local level, not at the federal level.
ceejayoz · 4h ago
Unless the Feds are planning to regulate - which, for the next few years, seems unlikely - that's functionally the same.
ipython · 3h ago
Ok. From the party of “states rights” that’s a bit hypocritical of them. I mean- they applauded Dodds which basically did the exact opposite of this- forcing states to regulate abortion rather than a uniform federal standard.
baggy_trough · 3h ago
Dobbs did not force states to regulate abortion. It allowed them to.
ceejayoz · 2h ago
Yes, that's the hypocrisy.

Abortion: "Let the states regulate! States' rights! Small government! (Because we know we'll get our way in a lot of them.)"

AI: "Don't let the states regulate! All hail the Feds! (Because we know we won't get our way if they do.)"

baggy_trough · 18m ago
I agree that the policy approach is inconsistent with regard to states' rights. I was simply pointing out that your statement about the effects of Dobbs was false.

drewser42 · 4h ago
So wild. The Republican party has hard-pivoted to a strong, centralized federal government, and their base just came along for the ride.

baggy_trough · 3h ago
The strong federal government that bans regulation?

ceejayoz · 3h ago
They're not banning regulation, they want total control over it.

baggy_trough · 3h ago
They in fact are banning regulation at the state and local level.

ceejayoz · 3h ago
Yes, which is a big fat regulation on what states and local governments can do.

baggy_trough · 19m ago
Would removing their regulation to ban regulation be banning regulation, or not?

sandworm101 · 4h ago
It is almost as if the tech bros have gotten what they paid for.

This will soon be settled once the Butlerian forces get organized.

JKCalhoun · 3h ago
Fear gets our attention. That alone makes it suspect to me: fear smells like marketing.

airocker · 5h ago
I want to bet 10 million that this won't happen, if anyone wants to go against my position. Best bet ever: if I lose, I don't have to pay anyway.

thatguysaguy · 4h ago
Some people do actually have end-of-the-world bets out, but you have to structure them differently. What you do is: the person who thinks the world will end is paid cash right now, and then in N years, when the world hasn't ended, they have to pay back some multiple of the original amount.
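
The implied odds are easy to sketch (my own arithmetic, with made-up terms): if the doomer is paid X now and owes k*X in N years should the world survive, then against a safe alternative return r, taking the bet is rational for the doomer only when P(doom) > 1 - (1+r)**N / k:

  multiple, years, r = 10, 10, 0.04  # assumed repayment multiple, horizon, safe rate

  # the stake X cancels out of the breakeven condition
  breakeven = 1 - (1 + r) ** years / multiple
  print(f"doomer should accept if P(doom) > {breakeven:.0%}")  # ~85% here
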
throwanem · 4h ago
Assuming you can find them. If I took a bet like that you'd have a hell of a time finding me!

(I'm sure serious, or "serious," people who actually construct these bets of course require the "world still here" payout be escrowed. Still.)

spencerflem · 4h ago
If you escrow the World Still Exists payment, you lose the benefit of having the World Ends payment immediately.

throwanem · 4h ago
Yeah, it isn't a kind of bet that makes any sense except as a conversation starter. Imagine needing to pay so much money for one of those!

radicalcentrist · 4h ago
I still don't get how this is supposed to work. So let's say I give you a million dollars right now, with the expectation that I get $10M back in 10 years when the world hasn't ended. You obviously wanted the money up front because you're going to live it up while the world's still spinning. So how am I getting my payout after you've spent it all on hookers and blow?

thatguysaguy · 3h ago
Yeah, I wouldn't make a deal like this with someone who is operating in bad faith... The cases I've seen of this are between public intellectuals with relatively modest amounts of money.

radicalcentrist · 3h ago
Well, that's what I don't get: how is spending the money bad faith? Aren't they getting the money ahead of time so they can spend it before the world ends? If they have to keep the world-still-here money tied up in escrow, I don't see why they would take the deal.

Joker_vD · 4h ago
This is such an obviously bad idea; I've heard anecdotes of embezzlement cases where the investigation took more than, e.g., 5 years, and when it was finally established that yes, the funds really were embezzled, and they went after the perpetrator, it turned out that the guy had died a year earlier due to all of the excesses he spent the money on.

I mean, if you talk from the position of someone who doesn't believe that the world will end soon.

rienbdj · 4h ago
How can you make this work if you don't have millions upfront?

alecco · 3h ago
You can start that bet on prediction markets.

baq · 4h ago
Same with nuclear war. The end of the world is bullish.

mountainriver · 5h ago
I can't believe anyone still gives this guy the time of day. He didn't know what a train/test split was, but is an AI expert? Give me a break.

Aurornis · 4h ago
Do you have a source for this? I've seen this repeated, but nobody can ever produce any evidence.

mountainriver · 52m ago
It was all over Twitter. I don't know if it's still there, but this guy is a performance artist.

GeorgeTirebiter · 4h ago
I don't think he's a bozo; but every technology needs a contrarian, to keep the technologists from spinning too much hype.

mountainriver · 52m ago
Except when that is literally all you are. All of your takes are always just contrarian. He's clearly found an audience and isn't interested in anything other than peddling his fears.

copperx · 4h ago
I thought that contrarian was Jaron Lanier.