AI proponents keep drawing perfectly straight lines from "no AI --> LLMs exist --> LLMs write some adequate code sometimes" up into the horizon of the Y axis where AIs run all governments, write all code, paint all paintings and so on.
There's a large overlap with the crypto true-believers who were convinced after seeing "no blockchain --> blockchain exists" that all laws would be enshrined in the blockchain, all business would be done with blockchains, etc.
We've had automation in the past; it didn't decimate the labour-force; it just changed how people work.
And we didn't go from handwashing clothes --> washing machines --> all flat surfaces are cleaned daily by washing robots...
refulgentis · 8h ago
Would advise, generally, that AI isn't crypto.
It's easy to lapse into personifying it and caricaturing the-thing-in-toto, but then we end up at obvious absurdities - to wit:
- we're on HN, it'd be news to most readers that there's a "large overlap" of "true-believers", AI was a regular discussion topic here a loooong time before ChatGPT, even OpenAI. (been here since 2009)
- Similarly "AI proponents keep drawing perfectly straight lines...AIs run all governments, write all code, paint all paintings and so on."
The technical term would be "strawmen", I believe.
Or maybe begging the question (who are these true-believers who overlap? who are these AI proponents)
Either way, you're not likely to find these easy-to-knock-down caricatures on HN. Maybe some college hypebeast on Twitter. But not here.
mystified5016 · 6h ago
I have personally seen all of these people on HN.
refulgentis · 6h ago
Right - more directly, asserting they're overlapping, and then asserting that all members of both sets back the same obviously-wrong argument(s), is a recipe for dull responses from ilk like me :)
I am certain you have observed N members of each set. It's the rest that doesn't follow.
gmuslera · 8h ago
My main objection to this kind of prediction is that predictions (at least the widely known ones) become part of the past that shapes the future. Even with a good extrapolation of current trends, the prediction itself can make things diverge, converge, or do something totally different, because the main decision makers will take it into account, and that reaction is not part of the trend. Especially with predictions disruptive enough to paint an undesirable future for all or most decision makers.
Unless it hits hard in some of the areas that we have cognitive biases and are not fully rational on the consequences.
old_man_cato · 9h ago
Sometimes I feel like I'm losing my mind with this shit.
Am I to understand that a bunch of "experts" created a model, surrounded the findings of that model with a fancy website replete with charts and diagrams, that the website suggests the possibility of some doomsday scenario, that its headline says "We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution" - WILL be enormous, not MIGHT be - that they went on some of the biggest podcasts in the world talking about it, and that when a physicist comes along and says yeah, this is shoddy work, the clap back is "Well yeah, it's an informed guess, not physics or anything"?
What was the point of the website if this is just some guess? What was the point of the press tour? I mean are these people literally fucking insane?
shaldengeki · 7h ago
No, you're wrong. They wrote the story before coming up with the model!
In fact the model and technical work have basically nothing to do with the short story, aka the part that everyone read. This is pointed out in the critique, where titotal notes that a graph widely disseminated by the authors appears to be generated by a completely different and unpublished model.
> AI 2027 relies on several key forecasts that couldn't be fully justified in the main text. Below we present the detailed research supporting these predictions.
You're saying the story was written, then the models were created and the two have nothing to do with one another? Then why does the research section say "Below we present the detailed research supporting these predictions"?
shaldengeki · 6h ago
Yes, that's correct. The authors themselves are being extremely careful (and, I'd argue, misleading) in their wording. The right way to interpret those words is "this is literally a model that supports our predictions".
Here is the primary author of the timelines forecast:
> In our website frontpage, I think we were pretty careful not to overclaim. We say that the forecast is our "best guess", "informed by trend extrapolations, wargames, ..." Then in the "How did we write it?" box we basically just say it was written iteratively and informed by wargames and feedback. [...] I don't think we said anywhere that it was backed up by straightforward, strongly empirically validated extrapolations.
> In our initial tweet, Daniel said it was a "deeply researched" scenario forecast. This still seems accurate to me, we spent quite a lot of time on it (both the scenario and supplements) and I still think our supplementary research is mostly state of the art, though I can see how people could take it too strongly.
https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-cri...
Here is one staff member at Lightcone, the folks credited with the design work on the website:
> I think the actual epistemic process that happened here is something like:
> * The AI 2027 authors had some high-level arguments that AI might be a very big deal soon
> * They wrote down a bunch of concrete scenarios that seemed like they would follow from those arguments and checked if they sounded coherent and plausible and consistent with lots of other things they thought about the world
> * As part of that checking, one thing they checked was whether these scenarios would be some kind of huge break from existing trends, which I do think is a hard thing to do, but is an important thing to pay attention to
> The right way to interpret the "timeline forecast" sections is not as "here is a simple extrapolation methodology that generated our whole worldview" but instead as a "here is some methodology that sanity-checked that our worldview is not in obvious contradiction to reasonable assumptions about economic growth"
https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-cri...
This quote is kind of a killer for me: https://news.ycombinator.com/item?id=44065615 I mean, if your prediction disagrees with your short story, and you decide to just keep the story because changing the dates is too annoying, how seriously should anyone take you?
old_man_cato · 6h ago
Ok, yeah, I take the point that one illustration did not obviously precede the other; rather, both are likely the coincident result of a shared worldview.
I don't think it changes anything but thanks for the correction.
heavyset_go · 8h ago
The point? MIRI and friends want more donations.
old_man_cato · 8h ago
Well, yeah. Obviously.
refulgentis · 8h ago
Correct. Entirely.
And I'm yuge on LLMs.
It is very much one of those things that makes me feel old and/or scared, because I don't believe this would have been swallowed as easily, say, 10 years ago.
As neutrally as possible, I think everyone can agree:
- There was a good but very long overview of LLMs from an ex-OpenAI employee. Good stuff, really well-written,
- Rapidly it concludes by hastily drawing a graph of "relative education level of AI" versus "year", drawing a line from high school 2023 => college grad 2024 => phd 2025 => post-phd 2026 => agi 2027 (a minimal sketch of this extrapolation follows this list).
- Later, this gets published by the same OpenAI guy, then the SlateStarCodex guy, and some other guy.
- You could describe it as taking the original, cutting out all the boring leadup, jumping right to "AGI 2027", then writing out a too-cute-by-half, way too long geopolitics ramble about China vs. US.
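To make concrete how little machinery is behind that kind of chart, here is a minimal sketch of the extrapolation being criticized - my own reconstruction, not the original's code - assuming the "education levels" are encoded as evenly spaced integers, which is the assumption doing all the work:

    # Hypothetical reconstruction of the "education level vs. year" line.
    # Encoding the levels as evenly spaced integers guarantees a straight
    # line; "AGI in 2027" is baked into the encoding, not found in data.
    import numpy as np

    years = np.array([2023, 2024, 2025, 2026])
    levels = np.array([0, 1, 2, 3])  # hs, college grad, phd, post-phd
    AGI_LEVEL = 4                    # next rung on the assumed ladder

    slope, intercept = np.polyfit(years, levels, 1)  # fits perfectly
    agi_year = (AGI_LEVEL - intercept) / slope
    print(f"'AGI' arrives in {agi_year:.0f}")        # 2027, by construction

The fit is exact because the ordinal encoding was chosen to make it exact; the projection tells you about the encoding, not about AI.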
It's mildly funny to me, in that yesteryear's contrarians are today's MSM, and yet, they face ~0 concerted criticism.
In the last comment thread on this article, someone jumped in to discuss the importance of more "experts in the field" contributing, meaning, psychiatrist Scott Siskind. The idea is that writing about something makes you an expert, which leads to tedious self-fellating like Scott's recent article letting us know LLMs don't have to have an assistant character, and how he predicted this years ago.
It's not so funny, in that the next time a science research article is posted here, as is tradition, 30% will be claiming science writers never understand anything and can't write etc. etc.
stego-tech · 9h ago
Reading through the comments, I am so glad I’m not the only one beyond done with these stupid clapbacks between boosters and doomers over a work of fiction that conveniently ignores present harms and tangible reality in knowledge domains outside of AI - like physics, biology, economics, etc.
If I didn’t know better, it’s almost like there’s a vested interest in propping these things up rather than letting them stand freely and let the “invisible hand of the free market” decide if they’re of value.
jvalencia · 10h ago
It's like the invention of the washing machine. People didn't stop doing chores, they just do it more efficiently.
Coders won't stop existing; they'll just do more and compete at higher levels. The losers are the ones who won't/can't adapt.
bgwalter · 10h ago
No, all washing machines were centralized in the OpenWash company. In order to do your laundry, you needed a subscription and had to send your clothes to San Francisco and back.
vntok · 9h ago
Exactly, it wasn't the case then with washing machines and it's not the case now with AI. Your example is pretty relevant!
Today, anyone can run SOTA open-weights models in the comfort of their home for much less than the price of a ~1929 electric washing machine ($150 then or $2,800 today).
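That price conversion is a standard CPI adjustment; a quick sketch for anyone checking the arithmetic, using approximate CPI annual averages that are my own assumptions rather than figures from the comment:

    # Rough inflation adjustment: price_then * (cpi_now / cpi_then).
    # CPI values below are approximate annual averages (my assumption).
    cpi_1929 = 17.1
    cpi_2024 = 313.0
    price_1929 = 150.00

    price_today = price_1929 * (cpi_2024 / cpi_1929)
    print(f"${price_today:,.0f}")  # ~$2,700, in the ballpark of $2,800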
er4hn · 8h ago
That was something I struggled to understand for AI-2027. They have China nationalize DeepCent so there's only one Chinese lab. I don't understand why OpenBrain doesn't form multiple competing labs because that seems to be what happened IRL before this was written.
refulgentis · 3h ago
Because it's an excuse for a psychiatrist to wank about their political hobbyhorses, not an actual work of any effort, other than cavorting in the right circles. (i.e. your question can also be framed as: "why doesn't the geopolitical fantasy masquerading as a serious whitepaper try to imitate real life, like, at all?")
jgalt212 · 9h ago
Excellent analogy
falcor84 · 10h ago
I suppose that those who stayed in the washing business and competed at a higher level are the ones running their own laundromats; are they the big winners of this technological shift?
alganet · 10h ago
What are you even talking about?
The article is not about AI replacing jobs. It doesn't even touch this subject.
fasthands9 · 9h ago
Yeah. For understandable reasons that is covered a lot too, but AI 2027 is really about the risk of self-replicating AI. Is an AI virus possible, and could it be easily stopped by humans and our military?
alganet · 9h ago
Actually, the subject has shifted from discussing any specific forecast to "really, how reliable are these forecasts?"
KaiserPro · 11h ago
bangs head against the table.
Look, fitting a single metric to a curve and projecting from that only gets you a "model" that conforms to your curve fitting.
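A minimal illustration of that point (my own sketch, nothing from AI 2027's actual models): fit two different curve families to the same handful of observations and the long-range projections diverge wildly, because the projection is a property of the curve you chose, not of the data:

    # Same four observations, two curve choices, two different "futures".
    import numpy as np

    t = np.array([0, 1, 2, 3])           # observation times
    y = np.array([1.0, 2.1, 3.9, 8.2])   # some capability metric

    lin = np.polyfit(t, y, 1)            # assume linear growth
    exp = np.polyfit(t, np.log(y), 1)    # assume exponential growth

    horizon = 10
    print(np.polyval(lin, horizon))          # ~24: modest extrapolation
    print(np.exp(np.polyval(exp, horizon)))  # ~1000: same data, wild forecast

Both fits look reasonable over the observed range; the divergence only shows up in the projection, which is exactly where these forecasts place all their weight.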
"proper" AI, where it starts to remove 10-15% of jobs will cause an economic blood bath.
The current rate of AI expansion requires almost exponential amounts of cash injections. That cash comes from petro-dollars and advertising sales. (and the ability of investment banks to print money based on those investment) Those sources of cash require a functioning world economy.
given that the US economy is three fox news headlines away from collapse[1] exponential money supply looks a bit dicey
If you, in the space of 2 years remove 10-15% of all jobs, you will spark revolutions. This will cause loands to be called in, banks to fail and the dollar, presently run obvious dipshits, to evaporate.
This will stop investment in AI, which means no exponential growth.
Sure you can talk about universal credit, but unless something radical changes, the people who run our economies will not consent to giving away cash to the plebs.
AI 2027 is unmitigated bullshit, but with graphs, so people think there is a science to it.
[1] Trump needs a "good" economy. If the Fed, which is currently mostly independent, needs to raise interest rates and Fox News doesn't like it, then Trump will remove its independence. This will really raise the chance of the dollar being dumped for something else (it's either the euro or the renminbi, but more likely the latter).
That'll also kill the UK because for some reason we hold ~1.2 times our GDP in US short term bonds.
TLDR: you need an exponential supply of cash for AI 2027 to even be close to working.
OgsyedIE · 10h ago
I disagree with the forecast too, but your critique is off-base. The claim that exponential cash is required assumes that subexponential capex can't chug along gradually without the industry collapsing into mass bankruptcy. Additionally, the investment cash that the likes of SoftBank are throwing away comes from private holdings like pensions and has little to nothing to do with the sovereign holdings of OPEC+ nations.
The reason it doesn't hold water is the bottleneck on compute production. TSMC is still the only supplier of anything useful for foundation model training, and their expansions only appear big and/or fast if you read the likes of Forbes.
JimDabell · 9h ago
It’s not just changing economics that will derail the projections. The story gives them enough compute and intelligence to massively sway public opinion and elections, but then seems to assume the world will just keep working the same way on those fronts. They think ASI will be invented, but 60% of the public will disapprove; I guess a successful PR campaign is too difficult for the “country of geniuses in a datacenter”?
gensym · 11h ago
> AI 2027 is unmitigated bullshit, but with graphs, so people think there is a science to it.
AI 2027 is classic Rationalist/LessWrong/AI Doomer Motte-Bailey - it's a science fiction story that pretends to be rigorous and predictive but in such a way that when you point out it's neither, the authors can fall back to "it's just a story".
At first I was surprised at how much traction this thing got, but this is the type of argument that community has been refining for decades at this point, and it's pretty effective on people who lack the antibodies for it.
mitthrowaway2 · 10h ago
I'm very much an AI doomer myself, and even I don't think AI 2027 holds water. I find myself quite confused about what its proponents (including Scott Alexander) are even expecting to get from the project, because it seems to me like the median result will be a big loss of AI-doomer credibility in 2028, when the talking point shifts to "but it's a long-tailed prediction!"
hollerith · 10h ago
Same here. I ask the reader not to react to AI 2027 by dismissing the possibility that it is quite dangerous to let the AI labs continue with their labbing.
elefanten · 10h ago
This is feeling like a retread of climate change messaging. Serious problem requiring serious thought (even without “AI doom” as the scenario, the political, economic, and social disruptions suffice), but being most loudly championed via aggressive timelines and significant exaggerations.
The overreaction (on both sides) to be followed by fatigue and disinterest.
adastra22 · 9h ago
Or maybe, just maybe, AI doom isn’t a serious problem, and the lack of credible arguments for it should be evidence of such.
098799 · 9h ago
Because if we're unlucky, Scott will think, in the final seconds of his life as he watches the world burn, "I could have tried harder and worried less about my reputation".
mitthrowaway2 · 8h ago
I don't think it's a matter of being worried about reputation. Making credible predictions and rigorous analysis is important in all scenarios. If superintelligence really strikes in 2027, I feel like AI 2027 would be right only by coincidence, and would probably only have detracted from safety engineering efforts in the process.
heavyset_go · 8h ago
Scott will just post a ten thousand word article to deflect and his audience will reorient themselves like they always do.
mitthrowaway2 · 8h ago
You say "like they always do"; are there any previous examples of them always doing such?
stego-tech · 9h ago
It got traction because it supported everyone’s position in some way:
* Pro-safety folks could point at it and say this is why AI development should slow down or stop
* LLM-doomer folks (disclaimer: it me) can point at it and mock its pie-in-the-sky charts and milestones, as well as its handwashing of any actual issues LLMs have at present, or even just mock the persistent BS nonsense of “AI will eliminate jobs but the economy [built atop consumer spending] will grow exponentially forever so it’ll be fine” that’s so often spewed like sewage
* AI boosters and accelerationists can point to it as why we should speed ahead even faster, because you see, everyone will likely be fine in the end and you can totes trust us to slow down and behave safely at the right moment, swearsies
Good fiction always tickles the brain across multiple positions and knowledge domains, and AI 2027 was no different. It’s a parable warning about the extreme dangers of AI, but fails to mention how immediate they are (such as already being deployed to Kamikaze drones) and ultimately wraps it all up as akin to a coin toss between an American or Chinese Empire. It makes a lot of assumptions to sell its particular narrative, to serve its own agenda.
heavyset_go · 8h ago
It got traction because it hyped AI companies' products to a comical level. It's simply great marketing.
stego-tech · 8h ago
Great fiction is itself great marketing. Gotta move that merch (or in AI's case, VC funding).
tux3 · 10h ago
It's the other way around entirely: the story is the unrigorous bailey, when confronted they fall back to the actual research behind it
And you can certainly criticize the research, but you've got the motte and the bailey backwards
goatlover · 11h ago
It's certainly hard to imagine the political situation in the US resulting in UBI anytime soon, while at the same time the party in control wants unregulated AI development for the next decade.
bcrosby95 · 9h ago
It's the '30s with no FDR in sight. It won't end well for anyone.
pier25 · 9h ago
> AI 2027 is unmitigated bullshit, but with graphs, so people think there is a science to it.
One of the best things I've read all day.
f38zf5vdt · 11h ago
I think the author is right about AI only accelerating to the next frontier when AI takes over AI research. If the timelines are correct and that happens in the next few years, the widely desired job of AI researcher may not even exist by then -- it'll all be a machine-based research feedback loop where humans only hinder the process.
Every other intellectual job will presumably be gone by then too. Maybe AI will be the second great equalizer, after death.
goatlover · 11h ago
Except we have no evidence of AI being able to take over AI research, any more than we have evidence so far that automation this time will significantly reduce human labor. It's all speculation based on extrapolating what some researchers think will happen as models scale up, or what funders hope will happen as they pour more billions into the hype machine.
dinfinity · 10h ago
It's also extrapolating on what already exists. We are way beyond 'just some academic theories'.
One can argue all day about timelines, but AI has progressed from being fully nonexistent to a level rivaling and surpassing quite some humans in quite some things in less than 100 years. Arguably, all the evidence we have points to AI being able to take over AI research at some point in the near future.
pier25 · 9h ago
> all the evidence we have points to AI being able to take over AI research at some point in the near future.
Does it?
That's like looking at a bicycle or car and saying "all the evidence points to us being able to do interstellar travel in the future".
suddenlybananas · 9h ago
> surpassing quite some humans
I don't really think this is true, unless you'd be willing to say calculators are smarter than humans (or else you're a misanthrope who would do well to actually talk to other people).
spongebobstoes · 8h ago
idk, if you try something like o3-pro, it's definitely smarter than a lot of people I know, for most definitions of "smarter"
Even the chatgpt voice mode is an okay conversation partner, and that's v1 of s2s
variance is still very high, but there is every indication that it will get better
will it surpass cutting edge researchers soon? I don't think in the next 2 years, but in the next 10 I don't feel confident one way or the other