> As insurers accurately assess risk through technical testing
If that’s not “the rest of the owl” I don’t know what is.
Let’s swap out superintelligence for something more tangible. Say, a financial crash due to systemic instability. How would you insure against such a thing? I see a few problems, which are even more of an issue for AI.
1. The premium one should pay depends on the expected risk, which is damage from the event divided by the chance of event occurring. However, quantifying the numerator is basically impossible. If you bring down the US financial system, no insurance company can cover that risk. With AI, damage might be destruction of all of humanity, if we believe the doomers.
2. Similarly, the denominator is basically impossible to quantify. What is the chance of an event which has never happened before? In fact, having “insurance” against such a thing will likely create a moral hazard, causing companies to take even bigger risks.
3. On a related point, trying to frame existential losses in financial terms doesn’t make sense. This is like trying to take out an insurance policy that will protect you from Russian roulette. No sum of cash can correct that kind of damage.
brdd · 4h ago
Thanks for the thoughtful response! Some replies:
1. Someone is always carrying the risk; the question is who should carry it. We suggest private markets should price and carry the first $10B+ before the government backstop. That incentivizes them to price and manage it.
2. Insurance has plenty of ways to manage moral hazard (e.g. copays). Pricing any new risk is hard, but at least with AI you can run simulations, red-team, review existing data, etc.
3. We agree on existential losses, but catastrophic events can be priced and covered. Insurers enforcing compliance with audits/standards would help reduce catastrophes, in turn reducing the likelihood of many existential risks.
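To make the layering in (1) and the copays in (2) concrete, here is a minimal sketch of how a single loss might be split between a developer copay, the private layer, and the backstop. All figures are hypothetical:

    def split_loss(loss, copay_rate=0.10, private_layer=10e9):
        # Split one loss between developer copay, the private insurance
        # layer, and a government backstop. Parameters are illustrative.
        copay = loss * copay_rate                 # developer keeps skin in the game
        remaining = loss - copay
        private = min(remaining, private_layer)   # first ~$10B priced and carried privately
        government = max(0.0, remaining - private_layer)  # backstop takes the excess
        return copay, private, government

    # a $50B catastrophe: $5B copay, $10B private layer, $35B backstop
    print(split_loss(50e9))   # (5000000000.0, 10000000000.0, 35000000000.0)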
janalsncm · 1h ago
What you are saying makes sense for conventional harms like non-consensual deepfakes, hallucinations, Waymo running pedestrians over, etc.
However, those are a far cry from the much more severe damages that superintelligence could enable. All of the above are damages which already could exist with current technology. Are you saying we have superintelligence now?
If not, your idea of selling superintelligence insurance hinges on the ability of anyone to price this kind of risk: an infinitely large number multiplied by another infinitely small number.
(I realize my explanation was wrong above, and should be the product of two numbers.)
I think many readers will also take issue with your contention that the private market is able to price these kinds of existential risks. Theoretically, accurate pricing would enable bioweapons research. However, the potential fallout from a disaster is so catastrophic that the government simply bans the activity outright.
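To make the pricing problem concrete: when the loss distribution is heavy-tailed enough, the running average loss never converges, so no premium is stable. A toy sketch, assuming Pareto-distributed losses (an assumption for illustration, not a claim about real AI risk):

    import random

    # Pareto losses with tail index alpha <= 1 have infinite mean: however
    # much history you collect, the running average keeps drifting upward,
    # so an actuarially fair premium never stabilizes.
    def mean_pareto_loss(alpha, n):
        return sum((1.0 - random.random()) ** (-1.0 / alpha) for _ in range(n)) / n

    random.seed(0)
    for n in (10**3, 10**4, 10**5, 10**6):
        print(n, mean_pareto_loss(alpha=0.9, n=n))  # no convergence as n grows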
bvan · 1h ago
Not to detract from your argument, but expected risk is the expectation of loss: the sum of [loss x probability of that loss] over possible outcomes.
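In code, a minimal sketch of pricing on that basis, with invented numbers:

    # Expected loss = sum over outcomes of (probability x loss): a product,
    # not a ratio. All numbers below are invented for illustration.
    scenarios = [
        (0.95, 0.0),    # nothing goes wrong
        (0.04, 1e6),    # minor incident
        (0.01, 1e8),    # major incident
    ]
    expected_loss = sum(p * loss for p, loss in scenarios)
    premium = 1.3 * expected_loss   # plus a 30% load for costs and uncertainty
    print(expected_loss, premium)   # 1040000.0 1352000.0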
xmprt · 6h ago
This only works if the insured parties face negative consequences when things go wrong. If all the negative consequences fall on society and there are no regulations that impose that burden on the companies building AI, then we'll have unchecked development.
brdd · 4h ago
We agree! Unchecked development could lead to disaster. Insurers can insist on adherence to best practices to incentivize safe development. They can also clarify liability and cover most (but not all) of the risk, leaving the developer on the hook for a portion of it.
evertedsphere · 3h ago
> But we don’t want medical device manufacturers or nuclear power plant operators to move fast and break things. AI will quickly get baked into critical infrastructure and could enable dangerous misuse.
nobody will put a language model in a pacemaker or a nuclear reactor, because the people who would be in a position to do such things are actual doctors or engineers aware both of their responsibilities and of the long jail term that awaits them if they neglect them
this inevitabilism, to borrow a word from another submission earlier today, about "AI" ending up in critical infrastructure and the important thing being to figure out how to do it right is really quite repugnant
sure, yes, i know about the shitty kinda-explainable statistical models that already control my insurance premiums or likelihood of getting policed or whatever
but why is it a foregone conclusion that people are going to (implicitly rightly so given the framing lets it pass unquestioned!) put llms into things that materially affect my life on the level of it ending due to a stopped heart or a lethal dose of radiation
blibble · 5h ago
> We’re navigating a tightrope as Superintelligence nears. If the West slows down unilaterally, China could dominate the 21st century.
I never understood this argument
as a non-USian: I'd prefer to be under the Chinese boot rather than having all of humanity under the boot of an AI
and it is certainly no reason to try to do everything we possibly can to try and summon a machine god
socalgal2 · 5h ago
> I'd rather be under the Chinese boot than having all of humanity under the boot of an AI
Those are not the options on offer. The options are under the boot of a Western AI or a Chinese AI. Maybe you'd prefer the Chinese AI boot to the Western AI boot?
> certainly no reason to try to increase the chance of summoning a machine god
The argument is that this is inevitable. If it's possible to make AGI someone will eventually do it. Does it matter who does it first? I don't know. Yes, making it happen faster might be bad. Waiting until someone else does it first might be worse.
blibble · 4h ago
> The options are under the boot of a Western AI or a Chinese AI. Maybe you'd prefer the Chinese AI boot to the Western AI boot?
given Elon's AI is already roleplaying as hitler, and constructing scenarios on how to rape people, how much worse could the Chinese one be?
> The argument is that this is inevitable.
which is just stupid
we have the agency to simply stop
and certainly the agency to not try and do it as fast as we possibly can
mattnewton · 4h ago
> we have the agency to simply stop
This is worse than the prisoner’s dilemma: the “we get there, they don’t” outcome is the highest payout for the decision makers who believe they will control the resulting superintelligence.
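A toy payoff matrix, with purely illustrative numbers, makes the dominance explicit:

    # Payoffs to "us" (rows: our move; numbers invented for illustration).
    payoff = {
        ("race", "race"): -5,    # both race: shared catastrophe risk
        ("race", "stop"): 10,    # we get there first and (believe we) control it
        ("stop", "race"): -10,   # they get there first
        ("stop", "stop"): 0,     # coordinated pause: safest joint outcome
    }
    for theirs in ("race", "stop"):
        best = max(("race", "stop"), key=lambda ours: payoff[(ours, theirs)])
        print(f"if they {theirs}, our best reply is {best}")
    # "race" dominates either way, even though (stop, stop) beats (race, race)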
socalgal2 · 4h ago
"We" do not as you can not control 8 billion people
blibble · 4h ago
it's certainly not that difficult to imagine international controls on fab/DC construction, enforced by the UN security council
there's even a previous example of controls of this sort at the nation state level: those for nuclear enrichment
(the cost to perform uranium enrichment is now less than building a state of the art fab...!)
as a nation state (not facebook): you're entitled to enrich, but only under the watchful eye of the IAEA
and if you violate, then the US tends to bunker-bust you
this paper has some ideas on how it might work: https://cdn.governance.ai/International_Governance_of_Civili...
If you financially penalize AI researchers, either with a large lump sum or in a way which scales with their expected future earnings (take your pick), and pay the proceeds to the people who put together the very cases which led to the fines being levied, you can very effectively freeze AGI development.
If you don't think you can organize international cooperation around this, you can simply put such people on some equivalent of an FBI-style Most Wanted list and pay anyone who comes forward with information (and maybe anyone who gets them within your borders as well). If a government chooses to wave its dick around like this, it could easily cause other nations to copy the same law, thus instilling a new global Nash equilibrium where this kind of scientific frontier research is verboten.
There's nothing inevitable at all about that. I hesitate to even call such a system extreme, because we already employ systems like this to intercept, e.g., high-level financial conspiracies via things like the False Claims Act.
socalgal2 · 4h ago
In my world there are multiple countries who each have an incentive to win this race. I know of no world where you can penalize AI researchers across international boundaries, nor any reason to believe your scenario could ever play out. You're dreaming if you think you could actually get all the players to co-operate on this. It's like expecting the world to come together on climate change. It's not happening and it's not going to happen.
Further, it doesn't take a huge lab to do it. You can do it at home. It might take longer, but there's a 1.4 kg blob in everyone's head as proof of concept, and it does not take a data center.
blibble · 4h ago
> I know of no world where you can penalize AI researchers across international boundaries nor to believe your scenario could ever play out.
mossad could certainly do it
MangoToupe · 3h ago
> The options are under the boot of a Western AI or a Chinese AI.
This seems more like fear-mongering than anything based on reasoning I've been able to follow. China tends to keep control of its industry, unlike the US, where industry tends to control the state. I emphatically trust the Chinese state more than our own industry.
gwintrob · 5h ago
I'm biased because my company (Newfront) is in insurance but there are a lot of great points here. This jumped out: "By 2030, global AI data centers alone are projected to require $5 trillion of investment, while enterprise AI spend is forecast to reach $500 billion."
There's a mega trend of value concentrating in AI (and all the companies that touch/integrate it). Makes a ton of sense that insurance premiums will flow that direction as well.
blibble · 4h ago
> This jumped out: "By 2030, global AI data centers alone are projected to require $5 trillion of investment, while enterprise AI spend is forecast to reach $500 billion."
and by 2040 it will be $5000 trillion!
and by 2050 it will be $5000000 quadrillion!
gwintrob · 4h ago
Ha, of course. A lot easier to forecast in a spreadsheet than actually make this happen. Based on the progress in AI in the past couple years and the capabilities of the current models, would you bet against that growth curve?
blibble · 4h ago
yes, there's not $5 trillion of dumb money spare
(unless softbank has been hiding it under their mattress)
Animats · 4h ago
For this to work, large class actions are needed. If companies are liable for large judgements, companies will insure against them. If not, companies will not try to avoid harms for which they need not pay.
lowsong · 2h ago
This article is a bizarre mix of center-right economic ideas and completely unfounded assumptions about the nature of AI technology, to the point where I'm genuinely not sure if this is intended as parody or not.
> We’re navigating a tightrope as Superintelligence nears.
There is no evidence we're anywhere near "superintelligence" or AGI. There is no evidence any AI tools are intelligent in any sense, let alone "superintelligent". The only reference for this, given much later, is to https://ai-2027.com/, which is no more than fan fiction. You might as well have cited Terminator or The Matrix as evidence.
The only people actually claiming any advancement towards "superintelligence" or "AGI" directly financially gain from people thinking that it's right around the corner.
> If the West slows down unilaterally, China could dominate the 21st century.
Is this casual sinophobia intended to appeal to a particular audience? I can't see what purpose this statement, and others like it, serves other than to try to frame this as "it's us or them".
> Faster than regulation: major pieces of regulation, created by bureaucrats without technical expertise, move at glacial pace.
This is a very common right-wing viewpoint: that regulation, government oversight, and "red tape" are unacceptable to business. It forgets that building codes, public safety regulations, and workers' rights all stem directly from government regulation. This article goes out of its way to frame the viewpoint as obvious, like a simple fact unworthy of introspection.
> Enterprises must adopt AI agents to maintain competitiveness domestically and internationally.
There is no evidence this is the case, and no citation is even attempted.
janalsncm · 1h ago
> The only reference for this, given much later, is to https://ai-2027.com/ which is no more than fan fiction.
There are certainly pretty gaping holes in its logic but it’s more than a fanfic. I’m a bit confused about the incentive of its authors to add their names to it, since it seems if they’re wrong they lose credibility and if they’re right I’m not sure they’ll be able to cash in on the upside.
choeger · 4h ago
Is there any indication whatsoever that there's even a glimpse of artificial intelligence out there?
So far, I have seen language models that, quite impressively, translate between different languages, including programming languages and natural-language specs. Yes, these models draw on a vast (compressed) store of knowledge from pretty much all of the internet.
There are also chain of thought models, yes, but what kind of actual intelligence can they achieve? Can they formulate novel algorithms? Can they formulate new physics hypotheses? Can they write a novel work of fiction?
Or aren't they actually limited by the confines of what we as a species already know?
roenxi · 4h ago
You seem to be part of a trend where most humans are defined as unintelligent: there are remarkably few people out there capable of formulating novel algorithms or physics hypotheses. A few more can manage novels, if we count the unreadable slop produced by people who really should choose careers other than writing. It speaks to the progress machines have made that traditional tests of intelligence, like holding a conversation or doing well on an undergraduate-level university test, apparently no longer measure anything of importance related to intelligence.
If we admit that even relatively stupid humans show some levels of intelligence, as far as I can tell we've already achieved artificial intelligence.
yahoozoo · 3h ago
> Is there any indication whatsoever that there's even a glimpse of artificial intelligence out there?
no
brdd · 7h ago
The "Incentive Flywheel" of AI: how insurance unlocks secure Al progress and enables faster AI adoption.
bwfan123 · 2h ago
> We’re navigating a tightrope as Superintelligence nears. If the West slows down unilaterally, China could dominate the 21st century
I stopped reading after this. First, there is no evidence of Superintelligence nearing, or even any clear definition of what "Superintelligence nearing" means. This is the classic "assuming the sale" gambit, with fear-mongering in its appeal.
yahoozoo · 4h ago
With no skin in the game, either superintelligence happens and it will be cool, or it doesn't and I just get to enjoy some schadenfreude. Either all of these people are geniuses or they're Jonestown members.
muskmusk · 6h ago
I love it!
Finally some clear thinking on a very important topic.