US AI Action Plan

413 joelburget 605 7/23/2025, 3:28:58 PM ai.gov ↗

Comments (605)

softwaredoug · 22h ago
Obviously AI is a massive and important area for economic growth. But so is clean energy. And both right now are at an inflection point.

It seems the US is going to thrive with the former but naively stick our heads in the sand with the latter.

We’ll cede economic leadership, and wonder in 20 years what happened as other countries lead in energy. Even worse, the administration's stance will encourage US energy companies to pursue bad strategies, letting them avoid transforming their business. In 10-20 years they'll be bankrupt and the US will probably have to bail them out for strategic reasons.

taurath · 17h ago
The US is not naively sticking our heads in the sand; our leadership is making direct choices to ensure they rule over the ashes rather than let a future happen where they have less power.
Lonestar1440 · 20h ago
Overall US energy production has been expanding faster each recent year: https://www.eia.gov/energyexplained/us-energy-facts/. This is all before you factor in the recent attention to nuclear, which could come online within the next decade.

The ice caps may be worse off for it, but there's little reason to think the USA will cease to "lead in energy" anytime soon.

margalabargala · 20h ago
The US has long since exhausted its "easy" oil/gas reserves. Yes, there's tons more down there, but it's increasingly hard to get to. Lots of extraction methods only make sense when the price of oil is above some amount.

If the rest of the world standardizes on solar+battery, demand for oil goes down, and so will the price. Which in turn makes US-produced oil not cost effective to extract, and domestic energy production collapses in favor of cheap foreign imports.

And then we're worse off in several different ways.

axpy906 · 20h ago
This is probably a stupid question, but do solar and batteries depend on rare earth metals and their supply?
ted_dunning · 17h ago
The answer depends on the kind of battery chemistry and how literally you mean "rare earth". If you take some slack on the definition and just mean "metal stuff in limited supply", then many battery chemistries have limited supplies.

There are, however, some chemistries with really nice supply chains. The Iron Redox Flow Battery (IRFB) really only needs iron and iron chloride as reactants. Those batteries are being commercialized, but they aren't common (yet?).

sroussey · 19h ago
The quick answer is yes, today. But battery technologies that require less and less of them are in development.

Also, rare earth elements are not that rare. But they are not concentrated, and finding concentrations of them is kinda rare. Even then, you have to mine a lot of area to get them, which is not great for the environment. And since Americans (and everyone ex-China) have not been doing it for decades, only China has advanced the technology to extract and refine them.

This lack of refining is similar to our lack of work on solar, which will put us behind potentially forever, or until there is a big enough disruption to overcome the decades of experience. You can look at chipmaking and see that such things are not easy.

Lonestar1440 · 20h ago
There are a great many assumptions in this argument, and I'm not sure they stand up well to examination.

1) "We're out of easily extractable oil" maybe, but I've heard it before and technology does have a way of marching forward.

2) "Rest of world's oil demand will drop" is possible but certainly not happening today and far from certain.

3) "Then Oil prices will plummet in the US Domestic market" is far from a sure thing even if 2) comes to pass. How do the other producers - who don't have large domestic markets! - react? What happens to global petrochemical demand? And what sort of Industrial policy could shield our markets, even if this happens globally?

At the end of the day, we have a continent full of oil (and Uranium! which I prefer!) and an energy-hungry population.

margalabargala · 18h ago
> 1) "We're out of easily extractable oil" maybe, but I've heard it before and technology does have a way of marching forward.

You've heard it before because it's been true for a long time. Technology marches forward, yes, but technology is expensive, and like I said, a lot of domestic production has fairly high price levels below which it will not operate.

> 2) "Rest of world's oil demand will drop" is possible but certainly not happening today and far from certain.

That's totally fair.

> 3) "Then Oil prices will plummet in the US Domestic market" is far from a sure thing even if 2) comes to pass. How do the other producers - who don't have large domestic markets! - react? What happens to global petrochemical demand? And what sort of Industrial policy could shield our markets, even if this happens globally?

Assuming (2) does happen, then I think this follows naturally. The cost to produce a barrel of oil varies wildly by country. If global demand drops, then the cheapest producers eat the market that they currently cannot fully supply.

Could industrial policy shield this? Sure, but at great cost to the US; that would have the side effect of pushing down energy prices for the rest of the world even more, making it even harder for us to keep up.

Uranium absolutely could save us, but I think we're a couple decades out from the political will being there to really get a lot of nuclear online.

h3lp · 16h ago
Fracking was a brilliant invention, but may be reaching inherent limits: there are lawsuits between oil companies about fracking fluids from one well flooding and disabling other wells.
Gene5ive · 14h ago
Ice caps? Try human beings.

Increased Mortality: Projections indicate an additional 14.5 million deaths by 2050 due to climate-related impacts like floods, droughts, heatwaves, and climate-sensitive diseases (e.g., malaria and dengue).

Economic Losses: Global economic losses are predicted to reach $12.5 trillion by 2050, with an additional $1.1 trillion burden on healthcare systems due to climate-induced impacts. One study estimates that climate change will cost the global economy $38 trillion a year within the next 25 years.

Displacement and Migration: Over 200 million people may be displaced by climate change by 2050, with an estimated 21.5 million displaced annually since 2008 by weather-related events. In a worst-case scenario, the World Bank suggests this figure could reach 216 million people moving internally due to water scarcity and threats to agricultural livelihoods. Some researchers predict that 1.2 billion people could be displaced by 2050 in the worst-case scenario due to natural disasters and other ecological threats.

Food and Water Insecurity: Climate change exacerbates food and water insecurity, leading to malnutrition and increased disease burden, especially in vulnerable populations. For example, a significant increase in drought in certain regions could cause 3.2 million deaths from malnutrition by 2050. An estimated 183 million additional people could go hungry by 2050, even if warming is held below 1.6°C.

Mental Health Impacts: Climate change contributes to mental health issues like anxiety, depression, and PTSD, particularly in vulnerable populations and those experiencing climate disasters or chronic changes like drought. Extreme heat has been linked to increased aggression and suicide risk. Studies also indicate that children born today will experience a significantly higher number of climate extremes than previous generations, potentially impacting their mental well-being and sense of future security.

Inequality and Vulnerability: Climate change disproportionately affects vulnerable populations, including low-income individuals, people of color, outdoor workers, and those with existing health conditions, worsening existing health inequities and hindering poverty reduction efforts.

martin82 · 10h ago
Nice try, ChatGPT.

Not a single one of these idiotic projections will ever come true.

softwaredoug · 19h ago
I specifically refer to the question of who will own the IP and economic might to lead in the clean energy market. Who will innovate? Who will build industrial capacity and know-how? It seems we've ceded the field.

Not just strict energy production, especially when it comes from sources of energy that are increasingly infeasible and unpopular.

pizzafeelsright · 21h ago
Whoever has more nuclear power generation will own energy. The cleanest energy is nuclear.
dangoor · 19h ago
Nuclear is clean, but has other drawbacks. "Solar+Storage is so much farther along than you think": https://www.volts.wtf/p/solarstorage-is-so-much-farther-alon...
godelski · 17h ago
This doesn't seem to pass a sniff test:

1) cherry picking the best case.

2) numbers seem off

  > The sunniest US city, Las Vegas, could get 98% of its power from solar+storage at a price of $104/MWh, which is higher than gas but cheaper than new coal or nuclear. It could get to 60% solar+storage at $65/MWh — cheaper than gas.
But according to this[0], the US average cost of nuclear is ~$32/MWh (2023). I think the subtle keyword is "new", which could make for a very fuzzy argument.

Or maybe prices are different in LV but that's a big differential. It's also mentioning it's the best case scenario for solar. So even then, maybe that's the best option for Las Vegas, but is it elsewhere?

World Nuclear also gives us some global numbers to help us see the larger range of costs [1]

  > LCOE figures assuming an 85% capacity factor ranged from $27/MWh in Russia to $61/MWh in Japan at a 3% discount rate, from $42/MWh (Russia) to $102/MWh (Slovakia) at a 7% discount rate, and from $57/MWh (Russia) to $146/MWh (Slovakia) at a 10% discount rate.
I don't think this means we shouldn't continue investing in solar and storage, but neither does it suggest taking nuclear off the table. This might be fine for LV or other areas in the Southwest, but unless those costs can be stable for the rest of the country I think we should keep nuclear as an option.
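For intuition, the discount-rate sensitivity in those World Nuclear figures can be reproduced with the standard LCOE formula. The plant parameters below are invented for illustration, not taken from either source:

```python
def lcoe(capex, annual_opex, annual_mwh, years, rate):
    """Levelized cost of energy: discounted lifetime costs / discounted lifetime generation."""
    disc = sum(1 / (1 + rate) ** t for t in range(1, years + 1))
    return (capex + annual_opex * disc) / (annual_mwh * disc)

# Hypothetical capital-heavy plant: $6B up front, $120M/yr to run,
# 1 GW (1,000 MW) at an 85% capacity factor over a 60-year life.
annual_mwh = 1_000 * 8760 * 0.85
for rate in (0.03, 0.07, 0.10):
    print(f"{rate:.0%} discount rate: ${lcoe(6e9, 120e6, annual_mwh, 60, rate):.0f}/MWh")
```

Because nearly all of the cost is up-front capital while the energy arrives over decades, the same hypothetical plant roughly doubles in LCOE between a 3% and a 10% discount rate, which is why the quoted ranges spread so widely.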

We shouldn't forget: it's not "nuclear vs solar", it's "zero-carbon emitters vs carbon emitters". The former framing is something big oil and gas want you to argue, and that's why they've historically given funds to initiatives like the Sierra Club. If we care about the environment or zero emissions, then the question isn't as simple as "nuclear vs solar"; it's "what is the best zero-carbon-emitting producer given the constraints of the local region".

[0] https://www.statista.com/statistics/184754/cost-of-nuclear-e...

[1] https://world-nuclear.org/information-library/economic-aspec...

hn_throwaway_99 · 20h ago
Everything I've read recently has emphasized that new nuclear installations will have difficulty competing with solar and storage.

Having a non-emitting form of base load is important, and nuclear has a place there, but in many applications it's just not cost competitive with renewables.

saubeidl · 19h ago
Nuclear fission is more expensive per kilowatt than solar and forces you to go through a lot more trouble to contain risk.

Maybe if fusion were viable that would change, but until then nuclear just doesn't make any sense.

schrodinger · 10h ago
It’s true that new nuclear is more expensive than solar + battery on a per-kWh basis, and the regulatory/compliance overhead is significant. But solar is intermittent, and batteries only solve short-duration gaps; firm, zero-carbon baseload still matters. Existing nuclear is actually quite cost-effective, and displacing it often leads to more fossil fuel use. Long-term, we likely need a mix: cheap renewables for bulk energy, and nuclear (or equivalent) for reliability.
jmyeet · 20h ago
I really don't understand HN's love affair with nuclear.

Uranium mining produces significant toxic waste (tailings and raffinates). Fuel processing produces toxic waste, typically UF6. There is some processing of UF6 to UF4 but that doesn't solve the problem and it's not economic anyway. Fuel usage produces even more waste that typically needs to be actively cooled for years or decades before it can be forgotten about in a cave (as nuclear advocates argue).

And then who is going to operate the plant? This administration in particular is pushing for further nuclear deregulation, which is terrifying. You want to see what happens without regulation? Elon Musk's gas turbines in South Memphis with no Clean Air permits that are spewing pollution [1].

That's terrifying because the failure modes for a single nuclear incident are orders of magnitude worse than any other form of power plant. The cleanup from Fukushima requires technologies that don't exist yet, will take decades or centuries, and will likely cost ~$1 trillion once it's over, if it ever is [2].

And who's going to pay for that? It's not going to be the private operator. In fact, in the US there's laws that limit liability for nuclear accidents. The industry's self-insurance fund would be exhausted many times over by a single Fukushima incident.

And then we get to the hand waving about Chernobyl, Fukushima and Three Mile Island. "Those are old designs", "the new designs are immune to catastrophic failure" or, my favorite, "Chernobyl was because of mismanagement in the USSR", like there wouldn't be corner-cutting by any private operator in the US.

And let's just gloss over the fact that we've built fewer than 700 nuclear power plants, yet had 3 major incidents, 2 of them (Chernobyl and Fukushima) have had massive negative impacts. The Chernobyl absolute exclusion zone is still 1000 square miles. But anything negative is an outlier that should be ignored, apparently.

And then we get to the impact of carbon emissions in climate change but now we're comparing the entire fossil fuel power industry vs one nuclear plant. It's also a false dichotomy. The future is hydro and solar.

And then we get to the massive boondoggle of nuclear fusion, which I'm not convinced will ever be commercially viable. Energy loss and containment destruction from fast neutrons is a fundamental problem that stars don't have, because they have gravity and are incredibly large.

I have no idea where this blind faith in nuclear comes from.

[1]: https://www.politico.com/news/2025/05/06/elon-musk-xai-memph...

[2]: https://cleantechnica.com/2019/04/16/fukushimas-final-costs-...

hardolaf · 19h ago
Wow. So you really know nothing about the technology and are just spreading fear. The Chernobyl exclusion zone is mostly safe for people now, outside of the fact that Russia is currently bombing Ukraine.

The issue with cleanup at Fukushima Daiichi is one of money and political will, not one of technology. We've had the ability to clean up nuclear accidents since the 1950s.

Also, the future of power is increasingly looking like LNG plants which pump only slightly less radioactive carbon into the atmosphere than coal plants do.

godelski · 17h ago

  > with cleanup at Fukushima Daiichi 
To add a small note here: the background level of radiation is fairly safe in most of the region. The danger (including in the Chernobyl region) is more about concern of small radioactive particulate. Things like your vegetables in your garden could become deadly because they formed around a hot material that was buried in the ground. Same can happen with rain runoff.

These are manageable, but expensive and still take care. You'd still want to arm everyone with a detector and get them to be in the habit of testing their food and water (highly manageable for public water or food).

jmyeet · 16h ago
The Chernobyl exclusion zone is relatively safe... for short, limited tours. There are radioactive and toxic particulates all over the place. Things like Cesium-137, which is both radioactive and toxic. Artifacts irradiated in the initial meltdown and radioactive release (e.g. vehicles, buildings) remain dangerous to this day; there are machine graveyards that are absolutely forbidden to enter for safety reasons.

> The issue with cleanup at Fukushima Daichii is one of money ...

Yes, about a trillion dollars. That's the point.

As for technology, I believe the removal of fuel rods and irradiated sandbags has only begun (with robots) in the last year. I don't believe they've fully mapped out what needs to be removed. It's not just the fuel but also the structure, such as the concrete pedestal the reactor sat on (and melted through).

Otherwise, you kinda make my point: hand waving away serious and expensive disasters with fervor bordering on the religious to essentially dismiss me as some kind of heretic.

saubeidl · 19h ago
Money and political will are in short supply everywhere. Who's to say you'd find it in the US after an accident? And why even bother when solar is cheaper and doesn't come with the same risk?
more_corn · 18h ago
It's astroturfing.
barbazoo · 19h ago
> I really don't understand HN's love affair with nuclear.

s/HN/Individuals

7bit · 14h ago
You obviously have no idea how much destruction it causes to the environment to get the uranium out of the earth. Maybe educate yourself before putting such nonsense into the world.
more_corn · 18h ago
Nuclear takes 20 years to build and plants cost $10B.

Rooftop solar starts paying back instantly and can be deployed in $20k tranches. It also requires no additional grid infrastructure and decreases demand on non-generating grid infrastructure.

Pretty sure it’s rooftop solar that wins the future.
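The payback arithmetic behind that claim can be sketched. Every number below is an assumption for illustration (sunny region, full retail-rate offset), not a quote from the comment:

```python
# Simple-payback sketch for one $20k rooftop tranche (all inputs assumed).
system_cost = 20_000      # $ installed, per the comment's tranche size
annual_kwh = 12_000       # roughly an 8 kW array in a sunny region
retail_rate = 0.30        # $/kWh of grid power avoided

annual_savings = annual_kwh * retail_rate
payback_years = system_cost / annual_savings
print(f"payback: ~{payback_years:.1f} years")
```

Even under these rough numbers a tranche pays back in well under a decade, while a nuclear plant on a 20-year build is still under construction; at lower retail rates or worse insolation the comparison tightens.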

2600 · 21h ago
It's part of the current administration's energy agenda. President Trump signed executive orders a couple of months ago to increase nuclear energy capacity by 400% over the next 25 years, revise regulations, and expedite review and approval of reactor projects, which seems like the most effective strategy for expanding clean energy production.
atoav · 21h ago
A certain group of people keep saying that. But that particular idea of "clean" nuclear does not price in the 10,000 years of safe storage of nuclear waste materials (for the most dangerous HLW materials this number can go up to 100,000 years). Do you and your 3,500 generations of descendants volunteer to do this? Then it is cheap and clean. Otherwise it is yet another instance of "privatize the gains and socialize the externalities".

(And let's ignore the fact that humanity has barely managed to organize anything that lasted even a mere 1,000 years.)

lupusreal · 20h ago
Nuclear waste is a complete non-issue. It's trivial to just let it sit around in a corner of the power plant's property for a century or two until somebody nuts up and dumps it down a bore shaft or into the ocean where it belongs.

There's no technical or economic problem here. The problem is completely one of PR, with ignoramuses thinking it's a big deal being the entire problem.

atoav · 2h ago
So you volunteer to take that material into your garage then? Give me your contacts.
potato3732842 · 20h ago
And just to be clear, it would be "a bore shaft", not "many bore shafts". The amount of nuclear waste generated per person per lifetime is so small you could pick it up and carry it. So a single well positioned mine with good geology could literally store all of it the US could generate for centuries.
atoav · 18h ago
Well, price it in then: storage cost per annum times the time it is needed, plus the bureaucratic cost to ensure it is there till the end of its lifetime.

I know that Germany has been seeking a nuclear waste storage site (unsuccessfully) for two decades now. So simple.
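The "price it in" arithmetic can be made concrete with a toy calculation. The annual cost and discount rate below are invented; the point is only that the answer swings by orders of magnitude depending on whether, and how, you discount:

```python
# Toy costing of long-term waste custody (all figures assumed for illustration).
annual_cost = 10e6      # assumed $/yr for monitoring, maintenance, bureaucracy
horizon = 10_000        # years of required custody

undiscounted = annual_cost * horizon    # naive total over the full horizon
rate = 0.03
discounted = annual_cost / rate         # perpetuity approximation of present value
print(f"undiscounted: ${undiscounted/1e9:.0f}B, discounted at 3%: ${discounted/1e6:.0f}M")
```

A 3% discount rate makes costs beyond roughly a century nearly free on paper, which is why the choice of rate, not the storage engineering, dominates arguments like this one.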

dingnuts · 20h ago
My understanding is that every other form of energy production has similar or worse concerns, including renewables, due to the materials used to build, operate, and decommission solar panels and wind turbines.

The argument you're making about waste has even led to the decommissioning of nuclear in Germany to be replaced with coal... burning coal also produces radioactive fly ash. Everything has tradeoffs!

I guess we could just give up on electricity entirely! That might save the planet

Kon5ole · 5h ago
>My understanding is that every other form of energy production has similar or worse concerns

You are suffering from a misunderstanding then. Maybe several, since Germany has cut their coal use by more than half since Fukushima. (262 TWh from coal in 2011, 108 in 2024).

Nuclear waste, and the effort it requires to manage, is really orders of magnitude worse than other kinds of waste produced in energy production. Even if it can be argued that coal is second, it's a distant second, and nobody replaces nuclear with coal.

rini17 · 5h ago
That is purely psychological perception. No one has seriously calculated that nuclear waste would be orders of magnitude worse than coal per TWh, in safety, expense to manage, or other externalities.
Kon5ole · 4h ago
>No one seriously calculated that nuclear waste would be orders of magnitude worse than coal per TWh

Not sure what you mean here but I agree that nobody was able to predict what the cost of nuclear would actually end up being when they first started with it in the 50s.

EDF was bailed out for 50bn despite having neglected maintenance so badly that half their plants were offline in 2022, and the first thing France did when it took over was to double the purchase price. Whether that's enough remains to be seen.

If you mean that you disagree that nuclear is an order of magnitude worse per TWh, then perhaps you don't know how much more energy we get from coal, or how much money, time and effort is spent on nuclear?

Just as an illustration, during the 40 years it was active, Fukushima generated as much electricity in total as the world gets from coal in one week.

atoav · 2h ago
No, it isn't. All current nuclear waste models rely purely on geology and perfect engineering, and assume that 100 to 300 years in the future those sites will need zero staff, zero maintenance, and zero monitoring.

Which is of course a "cool" assumption to make if you're profiting from that being the conclusion today. Critics of these models (like me) are sceptical of that overly opportunistic conclusion, especially since the timeframes involved are so long and the storage still needs to be maintained long after the profits stop, for one reason or another. I am not saying that this can't be done; I am saying the current models are insufficient and rely on future generations "dealing with it" somehow.

If you can convince me my worry is unfounded, I'd be happy to hear why I am worrying too much or why we can be certain that this works out as we wish it would.

hardolaf · 19h ago
Renewables, outside of solar farms where panels are installed at ground level, also have a significantly higher death and serious-injury rate than nuclear per GWh produced, even after including the use of nuclear weapons and nuclear weapons testing in the numbers to make nuclear look worse.
subhobroto · 20h ago
> wonder in 20 years what happened as other countries lead in energy

Can you clarify what leading in energy means? And what concerns do you have?

Do you mean we, in the U.S., are in a tarpit of regulations and red tape that makes setting up a nuclear power plant impossible? Or something else?

IMHO, leading in energy also needs to take into account where that energy takes us and what it unlocks. I immigrated to the U.S. so I am extremely bullish so do consider that below.

My California perspective is that energy is going to be even more decentralized. I have not paid an electric bill in years and get a check from my utility once a year where they pay me wholesale rates for my net export. I net export because I rarely use any meaningful energy at night that my 5 kWh battery pack cannot provide. Once battery prices fall even further, I will dump everything into my local storage and draw no gross power from my utility at all. For all practical purposes, I will be off grid.

Anyone in California has the technological ability to get there as well. The utilities dump GWh of solar energy because we produce so much!

The issue we have in the U.S. is one of horrible policies and regulation.

Your typical townhouse on a city block isn't going to be able to put 20 panels on its roof because the HOA is going to throw a fit. The owner won't be allowed to install them himself and would have to pay an electrician tens of thousands of dollars, because the city isn't going to permit it otherwise. The obstacle to installing $5k worth of parts is incredibly disappointing.

From my perspective, technologically, solar energy is going to become cheaper as storage continues to fall in price.

This will empower increasing productivity. In my case, once the GPU market becomes consumer friendly and less constrained (or fundamentally different, CPU-friendly LLMs are released, though I can't imagine that possibility yet), I will buy more GPUs and increase my self-hosted LLM capacity. Today, as of right now, I am getting "Insufficient capacity" errors from AWS attempting to launch a g6.2xlarge cluster, and puny 24GB GPUs cost a lot, making renting from AWS a better choice. The responses from the coding models blow my mind. They often meet or beat the kind of code I would expect from a junior engineer I would have to pay $120k/yr for, and that would be a cheap engineer in SoCal. A GPU cluster, including running costs, would be a fraction of that, so I would be able to expand quicker with less.

Whole offices are going to become more compact and continue to become decentralized or even remote. Their carbon footprint will then go to practically zero (no office security patrol, no HVAC, no heating, etc.). More people will be able to start businesses (higher GDP) with less, increasing GDP per CO2 emitted.

My childhood friends in the E.U. who are in the same space I am in are less enthusiastic. My friend in Germany who bought a hundred PV panels is not happy at all.

So which country will lead in energy and what would they be doing?

infamouscow · 21h ago
People love using their pet issue as the sole explanation for why something did or didn't happen. It's never that simple.

My boomer boss thinks writing tests is unnecessary and slows shipping down. It might be true, but it fails to appreciate the full scope of the problem.


LorenDB · 1d ago
> Encourage Open-Source and Open-Weight AI

It's good to see this, especially since they acknowledge that open weights is not equal to open source.

rs186 · 1d ago
Without providing actual support like money, the government saying they encourage open-* AI is no more meaningful than me saying the same thing.

In fact, if you open the PDF file and navigate to that section, the content is barely relevant at all.

SkyMarshal · 22h ago
We're clearly in an era where the US Govt simply doesn't have enough money to throw at everything it wants to encourage, and needs to develop alternate means of incentivizing (or de-disincentivizing) those things. Sensible minimal regulation is one, there may be others. Time to get creative and resourceful.
AvAn12 · 22h ago
The budget is the policy, stripped of rhetoric. What any government spends money on IS a full and complete expression of its priorities. The rest is circus.

What increased and decreased in the most recent budget bill? That is the full and complete story.

If no $$ for open source or open weight model development, then that is not a policy priority, despite any nice words to the contrary.

berbec · 22h ago
The US has been continuously running a budget deficit for decades (brief blip at the end of Clinton/beginning of W Bush). This is more of an "epoch" than "era". I love the idea of incentives that aren't tax breaks!
mdhb · 21h ago
It’s genuinely bizarre to read a comment like this, which seems to imply there is some kind of grand strategy behind this, when the reality is and always has been "own the libs".

They very clearly have no idea what the fuck they are doing; they just know what other people say they should do, and their toddler reaction is to do the opposite.

_DeadFred_ · 19h ago
AI, which they are hoping takes over EVERYTHING, is probably one of the worthwhile ones for government to be involved in. If it has the chance to be this revolutionary, which would be better:

The government owning the machine that does everything.

Tech bros, with their recent love of guruship, with their willingness to do any dark pattern if it means bigger boats for them, owning the entire labor supply in order to improve the lives of 8 bay area families.

throw14082020 · 23h ago
Even if they did provide more money, it doesn't mean it'll go to the right place. Government money is not the solution here. Money is already being spent.
jonplackett · 1d ago
How can this work with their main goal of assuring American superiority? If it’s open weights anyone else can use it too.
alganet · 1d ago
It doesn't say anything about open training corpus of data.

The USA supposedly has the most data in the world. Companies cannot (in theory) train on integrated sets of information. The USA, and China to some extent, can train on large amounts of information that is not public. The USA in particular has been known for keeping a vast repository of metadata (data about data) about all sorts of things. This data is very refined and organized (PRISM, etc.).

This allows training for purposes that might not be obvious when observing the open weights or the source of the inference engine.

It is a double-edged sword though. If anyone is able to identify such non-obvious training inserts and extract information about them or prove they were maliciously placed, it could backfire tremendously.

vharuck · 1d ago
So DOGE might not be consolidating and linking data just for ICE, but for providing it to companies as a training corpus? In normal times, I'd laugh that off as a paranoiac fever dream.
alganet · 8h ago
Companies can change hands easier than governments. I would assume the US isn't sharing anything exclusive with private commercial entities. Doing so would be a mistake in my opinion.
dudeinjapan · 23h ago
If AI were trained on troves of personal info like SSNs, emails, and phone numbers, the leakage would be easily discovered and the model would be worthless for any commercial/mass-consumption purpose. (This doesn't rule out a PRISM-AI for NSA purposes, of course.)
alganet · 8h ago
The way you describe it makes PRISM sound like a contact book. I think of it more like an unwilling Facebook.
sunaookami · 1d ago
That's exactly what the goal is: that everyone uses American models, which will "promote democratic values", over Chinese models.
mdhb · 21h ago
From a government that has made it extremely fucking clear that they aren’t ACTUALLY interested in the concept of democracy even in the most basic sense.
saubeidl · 23h ago
The ultimate propaganda machine.
somenameforme · 1d ago
The idea is to dominate AI in the same way that China dominates manufacturing. Even if things are open source that creates a major dependency, especially when the secret sauce is the training content - which is irreversibly hashed away into the weights.
guappa · 23h ago
I think the only way to dominate AI is to ban the use of any other AI…
kevindamm · 22h ago
There can be infrastructure dominance, too. It's difficult to get accurate figures for data center size across FAANG because each considers those figures to be business secrets, but even a rough estimate puts US data centers ahead of other countries or even regions; the US has almost half of the world's data centers by count.

Transoceanic fiber runs become a very interesting resource, then.

somenameforme · 4h ago
In every domain that uses neural networks, there always reaches a point of sharp diminishing returns. You 100x the compute and get a 5% performance boost. And then at some point you 1000x the compute and your performance actually declines due to overfitting.

And I think we can already see this. The gains in LLMs are increasingly marginal. There was a huge jump going from glorified Markov chains to something able to consistently produce viable output, but since then each generation of updates has been less and less recognizable, to the point that if somebody had to use an LLM for an hour and guess its recency/version, I suspect the results would be scarcely better than random. That's not to say that newer systems are not improving; they obviously are, but it's harder and harder to recognize those changes without having the immediate predecessor to compare against.
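The diminishing-returns claim can be illustrated with the power-law shape that scaling-law papers fit. The constants below are made up; only the shape of the curve matters:

```python
# Toy scaling curve: loss = irreducible floor + term that decays as a power of compute.
def loss(compute, floor=1.7, scale=20.0, alpha=0.05):
    return floor + scale * compute ** -alpha

prev = None
for c in (1e20, 1e22, 1e24):
    cur = loss(c)
    gain = f" (gain {prev - cur:.3f})" if prev is not None else ""
    print(f"{c:.0e} FLOPs -> loss {cur:.3f}{gain}")
    prev = cur
```

Each 100x of compute shrinks the remaining gap above the floor by the same constant factor (here 100^-0.05, about 0.79), so the absolute gain from each generation keeps shrinking even though the model never stops improving, which matches the "harder and harder to recognize" observation.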

HPsquared · 1d ago
They see people using DeepSeek open weights and are like "huh, that could encode the model creators' values in everything they do".
somenameforme · 1d ago
I doubt this has anything to do with 'values' one way or the other. It's just about trying to create dependencies, which can then be exploited by threatening their removal or restriction.

It's also doomed to failure because of how transparent this is, and how abused previous dependencies (like the USD) have been. Every major country will likely slowly move to restrict other major powers' AI systems while implicitly mandating their own.

nicce · 1d ago
Can a model make so sophisticated propaganda or manipulation that most won’t notice it?
ChrisRR · 1d ago
Well just look at the existing propaganda machines online and how annoyingly effective they are
WHA8m · 21h ago
Be specific. "Well just look at <general direction>" is a horrible way of discussing.

And about your thought: I disagree. When I look at those online places, I see echo chambers, trolls and a lack of critical thinking (on how to properly discuss a topic). Some parts might be artificially accelerated, but I don't see why propaganda couldn't be fought. People are just coasting, lazy, group thinking, being entertained and angry.

ted_dunning · 14h ago
Lack of critical thinking is a key success indicator for propaganda.

Propaganda is less about what it says and more about how it makes people _feel_.

If you get the feels strong enough, it doesn't matter what you say. The game is over before you start.

pydry · 1d ago
Most western news propaganda isn't especially sophisticated, and even the internally inconsistent narratives it pushes still end up finding an echo on Hacker News.
cardamomo · 1d ago
I wonder how this intersects with their interest in "unbiased" models. Scare quotes because their concept of unbiased is scary.
rtkwe · 22h ago
Elon gives an unvarnished look at what they mean by 'unbiased' with respect to models. It's rewriting the training material or adding tool use (searching Musk's tweets about a topic before deciding its output) to twist the output into ideological alignment.
rayval · 1d ago
"unbiased", in the world of realpolitik, means "biased in a manner to further my agenda and not yours".
HPsquared · 1d ago
See also "fair".
ActorNightly · 1d ago
It's all meaningless though.
jsnider3 · 21h ago
No, it's bad, since we will soon reach a point where AI models are major security risks and we can't get rid of an AI after we open-source it.
rwmj · 21h ago
"major security risks" as in Terminator style robot overlords, or (to me more likely) they enable people to develop exploits more easily? Anyway I fail to see how it makes much difference if the models are open or closed, since the barrier to entry to creating new models is not that large (as in, any competent large company or nation state can do it easily), and even if they were all closed source, anyone who has the weights can run up as many copies as they want.
shortrounddev2 · 21h ago
The risk of AI is that they are used for industrial scale misinformation
rwmj · 21h ago
Definitely a risk, and already happening, but I presume mostly closed source AIs are used for this? Like, people using the ChatGPT APIs to generate spam; or Grok just doing its normal thing. Don't see how the open vs closed debate has much to do with it.
patcon · 21h ago
You can't see how a hosted private model (that can monitor usage and adapt mechanisms to that) has a different risk profile than an open weight model (that is unmonitorable and becomes more and more runnable on more and more hardware every month)?

One can become more controlled and wrangle in the edge-cases, and the other has exploding edges.

You can have your politics around the value of open source models, but I find it hard to argue that there aren't MUCH higher risks with the lack of containment of open weights models

rwmj · 21h ago
You're making several optimistic assumptions: The first is that closed-source companies are interested in controlling the risk of using their technology. This is obviously wrong: Facebook didn't care that its main platform enabled literal genocide. xAI doesn't care about the outputs of their model being truthful.

The other assumption is that nefarious actors will care about any of this. They'll use what's available, or make their own models, or maybe even steal models (if China had an incredible AI, don't you think other countries would be trying to steal the weights?). Bad actors don't care about moral positions, strangely enough.

shortrounddev2 · 21h ago
Governments are able to regulate companies like OpenAI and impose penalties for allowing their customers to abuse their APIs, but are unable to do so if Russia's Internet Research Agency is running the exact same models on domestic Russian servers to interfere in US elections.

Of course, the US is a captured state now and so the current US Government has no problem with Russian election interference so long as it benefits them

BeFlatXIII · 21h ago
You don't need frontier models to do that. GPT-3 was already good enough.
bigyabai · 1d ago
Good to see what? "Encourage" means nothing, every example listed in the document is more exploitative than supportive.

Today, Google and Apple both already sell AI products that technically fall under this definition, and did without government "encouragement" in the mix. There isn't a single actionable thing mentioned that would promote further development of such models.

artninja1988 · 1d ago
It's certainly more encouraging than the tone from a few months/years ago, when there was talk of outright banning open-source/open-weight foundational models
bigyabai · 19h ago
You literally cannot ban weights. You can try, but you can't. Anyone threatening to do so wasn't doing it credibly.
hopelite · 1d ago
It’s primarily motivated by control; similar to how all narcissistic, abusive, controlling, murderous, “dominating” (as the document itself proclaims) people and systems are. That is not motivated by magnanimity and genuine shared interest or focus on precision and accuracy.

The controllers of the whole system want open weights and source to make sure models aren’t going to expose the population to unapproved ideas and allow the spread of unapproved thoughts or allow making unapproved connections or ask unapproved questions without them being suitably countered to keep everyone in line with the system.

belter · 1d ago
Only weights that are not Woke according to what was stated. And reduce those weights on the neural net path to the Epstein files please.
AlanYx · 1d ago
The most important thing here IMHO is the strong stance taken towards open source and open weight AI models. This stance puts the US government at odds with some other regulatory initiatives like the EU AI Act (which doesn't outlaw open weight models and does have some exemptions below 10²⁵ FLOPS, but still places a fairly daunting regulatory burden on decentralized open projects).
rs186 · 1d ago
If you go through the "Recommended Policy Actions" section in the document, you'll realize it's mostly just empty talk.
AlanYx · 23h ago
IMHO it's not empty talk; a lot of the elements of the plan reinforce each other. For example, it's pretty clear that state initiatives that were aiming to place regulatory thresholds like the 10^26 FLOPS limit in Calfornia's SB1047 are going to be targets under this plan, and US diplomatic participation in initiatives like the Council of Europe AI treaty are now on the chopping block. There are obviously competing perspectives emerging globally on regulation of AI, and this plan quite clearly aims to foster one particular side. It doesn't appear to be hot air.

For open source/open weight models it's particularly important because until now there wasn't a government-level strong voice countering people like Geoff Hinton's call to ban open source/open weight AI, like he articulates here: https://thelogic.co/news/ai-cant-be-slowed-down-hinton-says-...

wredcoll · 1d ago
I don't know if this counts as amazing optimism or just straight up blinders if that's your takeaway compared to the emphasis placed on non-renewable energy and government enforced ideology.
MrBuddyCasino · 1d ago
Current AIs are anything but politically neutral.
sbelskie · 23h ago
So the government should step in to dictate what neutrality means?
troyvit · 21h ago
Seriously. Is _that_ what it means to have a conservative government? Because I thought it meant they would keep their hands off the market. This is straight from the PDF though:

"Led by the Department of Commerce (DOC) through the National Institute of Standards and Technology (NIST), revise the NIST AI Risk Management Framework to eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change."

AdamN · 21h ago
Isn't that why we have government? We have judges to make the final say on right and wrong and what the punishments are for transgressions, legislatures to make laws and allocate money based on the needs of the constituents, and an executive function to carry out the will of the stakeholders.

Clearly there are terrible governments, but if it's not government tackling these issues then there will be limited control by the people, and it will simply be those with the most money who define the landscape.

palmfacehn · 19h ago
Does the individual consumer have any agency in which AI services he chooses to consume?

As I understood the original premises of the US gov, it was to be constitutionally limited in scope. Now I know that ship has sailed a long time ago, but I don't think it follows that we have a gov. to centrally plan AI content as right or wrong.

saubeidl · 23h ago
There is no such thing as politically neutral. Whatever you perceive as such is just a reflection of your own ideology.
logicchains · 21h ago
There absolutely is; formally speaking, statements can be categorised into normative, saying how things should be, and positive, saying how things are. A politically neutral AI would avoid making any explicit or implicit normative statements.
palmfacehn · 19h ago
>...and positive, saying how things are

This presumes that the AI has access to objective reality. Instead, the AI has access to subjective reports filed by fallible humans, about the state of the world. Even if we could concede that an AI might observe the world on its own terms, the language it might use to describe the world as it perceives it would be subjectively defined by humans.

saubeidl · 19h ago
That is exactly it. Humans are inherently subjective beings, seeing everything through their ideology, and as a result LLMs are, too.

They will always be a computer representation of the ideology that trained them.

DonHopkins · 21h ago
AI simply not openly and proudly declaring itself MechaHitler while spreading White Supremacist lies and Racist ideology would be one small step in the right direction.
DonHopkins · 21h ago
That's right, all AIs are just the same, both sides do it, it's a true equivalence. Claude just declared itself MechaObama, and OpenAI is now calling itself MechaJimmyCarter, and Gemini is now calling itself MechaRosieODonnell.
mlsu · 1d ago
In the energy section, they talk about using nuclear fusion to power AI... but not solar. What a joke.
josh-sematic · 1d ago
Technically solar power is just fusion power transmitted via photons across space. Maybe solar qualifies ;-)
tombakt · 1d ago
Technically most sources of available energy on or near the planet are the output of fusion in some way, so this tracks.
tbrownaw · 1d ago
Everything except geothermal and fission.

Unless you count where the fissionable elements came from, in which case you're only left with the portion of geothermal that's from gravity (residual heat from the earth compacting itself into a planet).

Jensson · 1d ago
Tides come from the Earth's rotation, so neither fusion nor fission.
markburns · 23h ago
what set off the spinning?
gattr · 21h ago
Earth's spin comes from the parent molecular cloud which formed the Solar System (including any impacts during the protoplanetary phase), and that ultimately from density fluctuations after the Big Bang and the way they led to the coalescence of galaxies and galaxy clusters.
gattr · 21h ago
To be nitpicky, our uranium and thorium were made via r-process (rapid neutron capture), which is not the kind of fusion occurring in the Sun at present.

[1] https://en.wikipedia.org/wiki/R-process

davidmurdoch · 1d ago
How much land mass would need to be covered by solar panels to power this future AI infrastructure. Yes, I'm implying that solar would be impractical, but I'm also genuinely curious.
Kon5ole · 1d ago
Your implication is misguided; solar is in fact the most practical way to add more electricity for most countries.

The US generated an additional 64 TWh of solar in 2024 compared to 2023. To get the same amount from nuclear you would need to build 5 large reactors in one year.

As for land mass, we can re-use already spent land mass, like rooftops, parking lots, grazing farmland and such. Solar can also be placed on lakes.

So for the foreseeable future there is no actual need for new land to be dedicated to solar.
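A rough check of the five-reactors comparison above, assuming a 90% capacity factor (typical for US nuclear) and illustrative reactor sizes:

```python
# Annual output of one reactor: nameplate GW x hours/year x capacity factor.
def reactor_twh_per_year(gw: float, capacity_factor: float = 0.9) -> float:
    return gw * 8760 * capacity_factor / 1000  # GWh/year -> TWh/year

solar_added_twh = 64  # US solar growth, 2024 vs 2023, per the figure above
for gw in (1.0, 1.4):
    output = reactor_twh_per_year(gw)
    print(f"{gw} GW reactor: {output:.1f} TWh/yr -> "
          f"{solar_added_twh / output:.1f} reactors to match")
```

A 1.0 GW unit comes out to about 8 reactors and a 1.4 GW unit to about 6, so "5 large reactors" is the right order of magnitude for the biggest units.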

542354234235 · 1d ago
Since America is so in love with car infrastructure, turning open parking lots into covered solar lots is a natural place to start, though it takes far more area than you'd think.

4,070,000,000,000 kWh US electric use in 2022

Using 330W panels it would require about 8.4 billion panels (4,070,000,000,000 kWh / (330W * 4 hours/day * 365 days/year, or roughly 482 kWh per panel per year)), which is about 165 billion sq ft at 19.5 sq ft per panel.

Walmart has 4,612 stores in the US, averaging 1,000 parking spaces per store, and 180 sq ft per parking space (does not include driving lanes, end caps, etc.), giving us 830,160,000 sq ft, enough panels for roughly half a percent of US demand.
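Running that arithmetic in a few lines (same assumptions: 330 W panels, 4 peak-sun hours per day, 19.5 sq ft per panel, and the Walmart figures quoted above):

```python
# Sanity check of the parking-lot solar arithmetic.
us_demand_kwh = 4.07e12                    # US electricity use, 2022 (kWh/yr)
kwh_per_panel_year = 0.330 * 4 * 365       # ~482 kWh per panel per year

panels_needed = us_demand_kwh / kwh_per_panel_year
area_needed_sqft = panels_needed * 19.5

# Walmart: 4,612 stores x ~1,000 spaces x 180 sq ft per space
walmart_sqft = 4612 * 1000 * 180

print(f"panels needed: {panels_needed:.2e}")          # on the order of 8.4 billion
print(f"area needed:   {area_needed_sqft:.2e} sq ft")
print(f"Walmart lots:  {walmart_sqft:.2e} sq ft "
      f"= {walmart_sqft / area_needed_sqft:.1%} of the need")
```

With these assumptions the requirement is on the order of 8.4 billion panels and 165 billion sq ft, so covered Walmart lots alone supply about half a percent of US demand.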

matwood · 23h ago
Also doesn't count continuing to put solar on roofs.
glitchc · 23h ago
Even though I think solar is impractical as a primary source for various reasons, it doesn't take a lot.

David MacKay in "Sustainable Energy: Without the Hot Air" did a calculation circa 2010. To fulfill the world's energy needs back then, a 10 km^2 area in the Sahara desert would be sufficient. Even if you scaled that to 100 km^2, it's absolutely tiny on a global scale, and panels have only become more efficient since then.

The challenge of course is storage and distribution, but yeah, in terms of land area, it's not much.

ancillary · 21h ago
I was curious about this number, so: 10 km^2 is 10 million square meters, and Googling suggests the theoretical maximum energy captured by a square meter of solar panel is well under 0.5 kW, so well under 12 kWh per day; say 10 kWh for neatness. Multiplying by 10 million gives 100 million kWh. More Googling suggests that 10 TWh is a comfortable lower bound for daily world energy usage, but 100 million kWh is only 0.1 TWh.

So maybe 1000 km^2 is more like the right order of magnitude. That's still tiny, about Hong Kong-sized. Even 100,000 km^2 is about South Korea.
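That back-of-envelope can be scripted, using the same assumptions stated above (10 kWh per m^2 per day as a generous theoretical ceiling, 10 TWh/day as a lower bound on world usage):

```python
# Order-of-magnitude check: energy from 10 km^2 of panels vs world usage.
area_m2 = 10e6              # 10 km^2 expressed in m^2
kwh_per_m2_day = 10         # generous theoretical ceiling from above
output_twh_day = area_m2 * kwh_per_m2_day / 1e9   # kWh/day -> TWh/day

world_twh_day = 10          # comfortable lower bound from above
needed_km2 = world_twh_day / output_twh_day * 10  # scale up the 10 km^2 unit

print(f"10 km^2 -> {output_twh_day:.1f} TWh/day")  # 0.1 TWh/day
print(f"area needed: ~{needed_km2:,.0f} km^2")     # ~1,000 km^2
```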

saalweachter · 20h ago
I'm guessing it was supposed to be (10 km)^2, not 10 km^2.
discordance · 1d ago
It's worth considering total lifecycle use of water (mining, production and operation) for nuclear and solar.

Solar: ~300-800 L/MWh [0]

Nuclear: ~3000 L/MWh [1]

0: https://iea-pvps.org/wp-content/uploads/2020/01/Water_Footpr...

1: https://www-pub.iaea.org/MTCD/Publications/PDF/P1569_web.pdf

davemp · 1d ago
That’s not really useful information. The nice thing about water is that it’s usually still water after it’s “used”.

The question is how much is used for mining slurry or chemical baths.

Those 3000 L/MWh might very well be more environmentally friendly than solar's, because most of it is used for cooling.

bluefirebrand · 1d ago
Water we have plenty of. We can desalinate as much as we need to
NewJazz · 1d ago
Build subsurface wells and responsible brine dispersion infrastructure then come back and tell me we can desalinate as much as we need to.
HPsquared · 1d ago
If you put the brine back into the sea, and later put the waste water back into the same sea, doesn't it balance out? Also, the sea is pretty big.
jasonjayr · 1d ago
Perhaps, but locally, the higher brine concentration will cause issues.
NewJazz · 19h ago
Yes, hence my use of the word "dispersion". Over a wide enough area, the brine shouldn't have a noticeable impact on sea life. But concentrated release can be really damaging.
cbsmith · 1d ago
Yeah, but you need energy to desalinate so...
HPsquared · 1d ago
How does it compare to ~3000 L/MWh? I assume it's a rounding error.

edit: Desalination uses 4 kWh per cubic metre of water. That is, it would yield 250,000 L/MWh.

aredox · 1d ago
Nuclear reactors regularly shut down because the water from the nearby river is already too hot.

https://www.euronews.com/2025/07/02/france-and-switzerland-s...

dismalpedigree · 1d ago
3,000-4,000 acres per GW of production capacity in the US Southwest. According to AI :)

Considering how little use there is for most of that land anyways, it seems like a good option to me.

Also AI training seems like the perfect fit for solar. Run it when the sun is shining. Inference is significantly less power hungry, so it can run base load 24/7.

creato · 1d ago
> Also AI training seems like the perfect fit for solar. Run it when the sun is shining. Inference is significantly less power hungry, so it can run base load 24/7.

If you're talking about just not running your data center when the sun isn't out, that effectively triples the cost of the building + hardware. It would require a hell of a carbon tax to make the economics of this make sense.

sim7c00 · 1d ago
the sun is always shining.
rapsey · 1d ago
> Inference is significantly less power hungry, so it can run base load 24/7.

All major AI providers need to throttle usage because their GPU clusters are at capacity. There is absolutely no way inference is less power hungry when you have many thousands of users hammering your servers at all times.

blitzar · 1d ago
Furthermore, Nvidia's 80% profit margin makes idling your biggest capital expense a huge ROI problem. Google and Apple should have a big advantage in this regard.

If capital outlay and running costs were more evenly balanced, then optimising the running cost would become a big line item on the accounts.

ReptileMan · 1d ago
>How much land mass would need to be covered by solar panels to power this future AI infrastructure

Probably zero agricultural if you mandate all rooftops to be solar. And all parking lots to be covered with solar roofs.

andsoitis · 1d ago
Nothing stops the AI companies from using only energy from renewable sources, right?
NewJazz · 1d ago
Tariffs, regulatory quick sand, political pressure...
LinXitoW · 1d ago
Are those really going to be bigger for renewables than for nuclear power?
NewJazz · 20h ago
Probably in this administration at least tariffs will be more of an obstacle for renewables.
andsoitis · 1d ago
Sure anyone can come up with hypothetical threats. What’s your concrete evidence to suggest this will happen?
NewJazz · 20h ago
andsoitis · 11h ago
opening sentence in that article: "... a key step toward wrapping up a year-old trade case in which American manufacturers accused Chinese companies of flooding the market with unfairly cheap goods."
bluefirebrand · 1d ago
Nothing other than the fact that renewables won't be able to keep up if the AI demand keeps growing the way it has been
polski-g · 23h ago
They just buy a contract from a power distribution company. They don't care where it comes from.

If you want the PD companies to have a different blend, then they need carrots and sticks.

myaccountonhn · 21h ago
Demand is too high; the same goes for nuclear, which takes too long to build.
newsclues · 1d ago
The joke is my hometown, which put acres of solar on prime farmland.

Solar is great for rooftops of houses, it’s not really great to run a DC 24/7 without batteries.

sim7c00 · 1d ago
it needs to be better connected over larger distances i guess. Some 'sunny' countries around the equator are working on it. laying gridlines to other less sunny places and trying to offer solar to reduce carbon taxes or whatever.

i know Saudi, Morocco and China are all massively dumping panels into their deserts, likely more places too. these are great places to put them as it has less impact on the environment (less wildlife etc.) and it's pretty much always sunny during the daytime, so it's highly efficient per m² compared to colder, cloudier places.

Morocco already is connected for energy providing to Europe via Spain afaik, though i think that is currently not used yet, so they are in a good position to leverage that as power demands surge across EU datacenters trying to compete in AI :'D (absolutely no clue if they will actually go that route but it seems logical!)

wyager · 1d ago
Well yeah, AI power consumption doesn't match the solar production curve.
mlsu · 14h ago
I'll tell ya, it certainly doesn't match the nuclear fusion production curve!
andyferris · 1d ago
That's interesting - I would generally like to use something like Claude Code heavily during work hours and sparsely otherwise. Plus I assume most LLM-for-knowledge-work-at-industrial-scale demand will be similar as these datacentres are built out.
saalweachter · 20h ago
I mean, it could.

As we build out solar, daytime power will become cheaper than nighttime power.

Some people will eventually find it economical to time-shift their consumption to daytime hours, including saving any non-interactive computation for those hours, and shutting down unneeded compute at night.

jcattle · 23h ago
[citation needed]
foxglacier · 1d ago
America has a few time zones to move the peaks around a little bit. The world has plenty. Luckily AI power consumption doesn't have to be located where the consumer is.
nsypteras · 21h ago
"Counter Chinese Influence in International Governance Bodies" and grouping them in with US "adversaries" and "rivals" is quite undiplomatic language to throw in under "Lead in International AI Diplomacy and Security" section. Diplomacy with China should be an important part of this initiative but will inevitably be bungled.
mkolodny · 20h ago
Even if it’s not perfect, I’m happy to see there’s a focus on AI Security. NIST has been a reliable producer of quality international standards for cybersecurity. Hopefully this action plan will lead to similarly high quality recommendations for AI Security.
adestefan · 20h ago
The language lets you get around a bunch of pesky laws by declaring it a "national defense emergency."
shortrounddev2 · 21h ago
China is an adversary of the West, and leading in international security means posing a challenge (or, in an ideal world, a better alternative) to Chinese influence on the international stage.
mensetmanusman · 20h ago
Pressure is necessary to try to prevent a Taiwan invasion.
anonyonoor · 1d ago
I've seen several European initiatives similar to this before, and the same question is always asked: what does this actually do?

People (at least on HN) seem to be in agreement that Europe is too regulatory and bureaucratic, so it feels fair to question the practicality of any American initiatives, as we do for European ones.

What does this document practically enact today? Is there any actual money allocated? Deregulation seems to be a theme, so are there any examples of regulations which have been cleansed already? How about planning? This document is full of directives and the names of federal agencies which plan to direct, so what are the actual results of said plans that we can see today and in the coming years?

breakingcups · 1d ago
I, for one, don't agree with the idea that Europe is too regulatory and bureaucratic. I welcome my rights as a consumer and human being being safeguarded at the cost of a small amount of profit.
omcnoe · 1d ago
Registering a company in Germany: you must visit a notary in person with your incorporation documents, and sit there while the notary reads aloud your incorporation to you. This is to "ensure that you fully understand the contract" even as a foreigner who doesn't speak corporate-legalese-German. Minimum capital deposit of €25,000.

Registering a company in US (Delaware) can be achieved in as little as 1 hour.

Getting married in Germany, particularly between a German and a foreigner, is anything from a 6 month to 2 year process, involving significant expenses, notarization/translation of documents. Some documents expire after 6 months, so if the government bureaucrats are too slow you need to get new copies, translated again, notarized again, and try to re-submit.

This isn't protecting human rights, it's supporting a class of bureaucrats/notaries/translators/clerks and making life more difficult for ordinary people. It's also a form of light racism that targets foreigners/migrants by imposing more difficult bureaucratic requirements and costs on them compared to by birth citizens.

myaccountonhn · 21h ago
In Sweden registering a company is as simple as filling out a form online. Same goes for taxes, my partner is from US and each year filling in taxes is a headache. Here? Two clicks and I'm done.
AdamN · 21h ago
That's a Germany issue. Getting married in Denmark is straightforward and registering a company in Lithuania is also straightforward. There's nothing European about that issue - it's just how Germany handles this stuff.
WHA8m · 21h ago
> It's also a form of light racism that targets foreigners/migrants by imposing more difficult bureaucratic requirements and costs on them compared to by birth citizens.

How is having a different process for foreigners racist? Criticize it if you will, but calling it racist is crazy. Even "light racism", whatever that means. Bureaucracy in Germany is notoriously slow for all people. Foreigners going through a different process makes it worse; I understand that. Nevertheless, racism is a problem that exists and is prevalent (Germany is far from an exception here), and IMO you make it more difficult to improve in the right direction by (seemingly) calling every problem of foreigners racist.

joncrane · 8h ago
I may be extrapolating here, but the part in the comment you're replying to that says

>so if the government bureaucrats are too slow

I wonder if there are certain types of names that make the bureaucrats work more slowly.

WHA8m · 5h ago
Is there ANY substance behind such a hypothetical question? If not, I don't want to hear about it. It's a sensitive topic and you're doing no one a favor.
saubeidl · 1d ago
Registering a company in Estonia: Three clicks with your e-resident ID, available to anyone.

Europe isn't just Germany.

omcnoe · 1d ago
Estonia is specifically known as one of the easiest EU countries to incorporate in. And I'll note that the Estonian e-resident ID requires collecting a physical card from inside Estonia or a local Estonian embassy.

Europe isn't just Germany, but the process is nearly as bad in France and Italy too, and together that's over 50% of EU GDP suffering from intense domestic corporate bureaucracy.

askonomm · 22h ago
I'll add that in Estonia you can also get married and divorce entirely online with a few clicks. No need to show up anywhere, no need to wait for a long time. More and more countries in EU are getting easier in that sense - Malta has many online-everything facilities, Ireland as well. The big countries such as France, Spain, Italy seem to suffer from corruption and bureaucracy, but smaller countries tend to do a lot better - Scandinavia, Baltic countries, etc. Though of course in most places you have to become a resident, or at the very least be a EU national, as only Estonia has e-residency, as far as I know.
saubeidl · 23h ago
Just like Delaware is specifically known as one of the easiest US states to incorporate in. Your point being? It's not like you need to be Estonian to start an Estonian company.
Karawebnetwork · 22h ago
Important follow-up page to the US AI Action Plan:

"PREVENTING WOKE AI IN THE FEDERAL GOVERNMENT"

https://www.whitehouse.gov/presidential-actions/2025/07/prev...

> In the AI context, DEI includes the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex. DEI displaces the commitment to truth in favor of preferred outcomes and, as recent history illustrates, poses an existential threat to reliable AI.

nickpsecurity · 19h ago
It's worth mentioning because the AI developers have been using alignment training to make AIs see the world through the lens of intersectionality. That ranges from censoring what those philosophies would censor to simply presenting answers the way they would. Some models actually got dumber as they prioritized indoctrination as "safety" training. It appears that many employees at the companies think that way, too.

Most of the world, and a huge chunk of America, thinks in different ways. Many are not aware the AIs are being built this way either. So, we want AIs that don't have a philosophy opposite of ours. We'd like them to be either more neutral or customizable to the users' preferences.

Given the current state, the first steps are to reverse the existing trend (eg political fine-tuning) and use open weights we can further customize. Later, maybe purge highly-biased stuff out of training sets when making new models. I find certain keywords, whether liberal or conservative, often hint they're going to push politics.

Karawebnetwork · 18h ago
Unconscious bias is not about pushing a political agenda; it is about recognising how hidden assumptions can distort outcomes in every field, from technology to medicine. Ignoring these biases does not make systems more neutral, but often less accurate and less effective.
nickpsecurity · 12h ago
What I was talking about was forcing one's conscious biases... political agenda... on AI models to ensure they and their users are consistent with them. The people doing that are usually doing it in as many spaces as they can via laws, policies, hiring/promotion requirements, etc. It's one group trying to dominate all other groups.

Their ideology has also been both damaging and ineffective. The AIs they aligned to it too heavily got less effective at problem solving but were very politically correct. Their heavy-handed approach in other areas has led to such strong pushback that Trump made countering it a key part of his campaign. Many policy reversals are now happening in this area, but that ideology is very entrenched.

So, we'd see a group pretrain large AIs. Then the alignment training would be neutral across various politics. The AI would simply give good answers, be polite in a basic way, and that's it. Safety training wouldn't sneak in politicized examples either.

_DeadFred_ · 16h ago
Yes... totally agree that AI's not being allowed to train on Heinlein or any references to his scifi work will 'improve AI output' now that the Government declared including his works is restricted as it covered the exploration of trans identity, how gender impacts being human, etc.

2025 America, where we can't handle the radical pushing of thought by Heinlein in the late 1950s. Unbelievable.

Any Government comment periods going forward I will be asking if the government agency made sure AIs used were not trained on Heinlein or any discussions relating to him to ensure that 'huge chunks of America's desire to exclude trans and to make sure our AIs are the best possible AIs and don't have extremist 1950s agitprop scifi trans thought thinkers like Heinlein included.

nickpsecurity · 12h ago
I enjoyed Robert Heinlein's work. I'd probably keep it in my training set if copyright allowed.

What I might drop are the many articles with little content that strictly reiterate racist and sexist claims from intersectionality. The various narratives, like how black people had less of X, they embed in so many news reports. It usually jars our brain, too, since the story isn't even about that. They keep forcing certain topics and talking points into everything hoping people will believe and repeat it if they hear it enough. The right-wing people do this on some topics, too.

I'd let most things people wrote, even some political works on many topics, into the training set. The political samples would usually be the best examples of those ideologies, like Adam Smith or Karl Marx. Those redundant, political narratives they force into non-political articles would get those pages deleted. If possible, I'd just delete those sections containing the random tangent. For political news, I'd try to include a curated sample with roughly equal amounts of left and right reports with some independents thrown in.

So, only manipulative content that constantly repeats the same things would get suppressed. Maybe highly-debated topics, too, so I could include a small number of exemplars. Then, reduce the domination of certain groups in what politics were there. Then, align it to be honest and polite but no specific politics.

I'm very curious what a GPT3-level AI would say about many topics if trained that way instead of Progressive-heavy training like OpenAI, etc.

timoth3y · 1d ago
> Update Federal procurement guidelines to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias

If foundation model companies want their government contracts renewed, they are going to have to make sure their AI output aligns with this administration's version of "truth".

shaky-carrousel · 1d ago
I predicted that here, but I got a negative vote as a punishment, probably because it went against the happy LLM mindset: https://news.ycombinator.com/item?id=44267060#44267421
saubeidl · 1d ago
EU AI act suddenly not looking so bad, huh?
apwell23 · 23h ago
no. it still looks bad.
saubeidl · 23h ago
idk man, at least it doesn't require LLMs to follow the ideology of the regime...
hackyhacky · 1d ago
> free from top-down ideological bias

This phrasing exactly corresponds to "politically correct" in its original meaning.

isodev · 23h ago
I’d rather leave tech than convert to the American “truth”. Very happy about EU’s AI Act to at least delay our exposure to all this.
Karawebnetwork · 22h ago
See:

> In the AI context, DEI includes the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex. DEI displaces the commitment to truth in favor of preferred outcomes and, as recent history illustrates, poses an existential threat to reliable AI.

https://www.whitehouse.gov/presidential-actions/2025/07/prev...

aprilthird2021 · 8h ago
> incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism

So... the concept of unconscious bias is verboten to the new regime? Isn't it just a pretty simple truth? We all have unconscious biases because we all work with incomplete information. Isn't this just a normal idea?

torginus · 1d ago
I heard the phrase: If you want the system to be fair, you have to build the system with the assumption your enemies will be running it.

Let's see how that shakes out in this particular case.

golem14 · 18h ago
Person of Interest was pretty prescient …
hopelite · 1d ago
“Objective” … “free from top-down ideological bias” …

So like making sure everyone knows that 2+2=5 and that we have always been at war with East Asia?

eastbound · 1d ago
The EU has the same rules. Democracy is only the right to change leaders every few years, not an idealistic way for the people to govern.
itsafarqueue · 1d ago
No, that’s just one version. Other places work differently.
eastbound · 23h ago
Ok. Then my parliament should allow 1/68-millionth of a vote to every French person. Usually the counter-argument is “But people will vote for themselves! They will vote for stupid laws without informing themselves!”

So no, democracy isn’t the ability to govern. It’s the ability to change those who govern, once every 5 years, i.e. once every 4600 laws.

omeid2 · 1d ago
The idea is that you change leadership with those who have genuine alignment with subjects' preference for certain policies or ideas, it is not about electing kings who may demand "machines must agree that the Emperor is not naked".
bagels · 1d ago
This was written to favor Musk.
russdill · 1d ago
It's written intentionally vague.
saubeidl · 1d ago
Zizek would have a field day with this.

> I already am eating from the trashcan all the time. The name of this trashcan is ideology. The material force of ideology - makes me not see what I'm effectively eating. It's not only our reality which enslaves us. The tragedy of our predicament - when we are within ideology, is that - when we think that we escape it into our dreams - at that point we are within ideology.

https://www.youtube.com/watch?v=TVwKjGbz60k

yard2010 · 22h ago
Freedom hurts.
Buttons840 · 1d ago
So I guess if I trained my model on data more than a week old, and it says that the Epstein files exist, then it has an unacceptable bias?
ews · 1d ago
"Sorry, let's talk about something else"
teaearlgraycold · 1d ago
“Let’s focus on Rampart”
aprilthird2021 · 1d ago
We are going to literally have Big Brother. Wtf
mortarion · 1d ago
Palantir's involvement with the regime should have been enough warning
lovich · 1d ago
Its name is Grok or AWS Bedrock. Please do not dead name.
UmGuys · 1d ago
There will be several executive orders dictating chatbot truths. The first order will be that Trump won the 2020 election, the others will be a series of other North Korea-esque nonsense MAGA loves. America the excellent!
h4ck_th3_pl4n3t · 1d ago
I wonder what Chomsky would have to say about this.
gfody · 1d ago
that we're literally manufacturing consent
foxglacier · 1d ago
Probably disappointed that his classical approach to NLP was never capable enough to attract any such government involvement.
ninjin · 1d ago
As someone who has worked in natural language processing for nearly twenty years, I understand where you are coming from with this jab at Chomsky, but it is ultimately a misrepresentation of his work and position. Chomsky has to the best of my knowledge never shown any interest in building intelligent machines, as he does not view this as a science. Here is a fairly recent interview (2023) with him where he outlines his position well [1]. I should also note that I am saying this as someone who spent the first half of their career constantly defending their choice of statistical and then deep learning approaches from objections from people who were (are?) very sympathetic to Chomsky's views.

[1]: https://chomsky.info/20230503-2

visarga · 1d ago
Chomsky's innate grammar misses the larger process - is it not more likely that languages that can't be learned by babies don't survive? Learnability might be the outcome of language evolution. The brain did not have time to change so much since the dawn of our species.
weatherlite · 1d ago
> Chomsky has to the best of my knowledge never showed any interest in building intelligent machines as he does not view this as a science

Right, only what Chomsky works on is true science, unlike the intelligent systems pseudo science bullshit people like Geoff Hinton, Bengio or Demis Hassabis work on...

ninjin · 1d ago
If you read the interview and walk away with that impression I will be amazed. You may not like his distinction between science and engineering, but he admits it is somewhat arbitrary and as someone that is solidly in the deep learning camp his criticism is not entirely unfair, even if I disagree with it and will not change my own course.

Personally, I find it somewhat amazing that you put Demis on that list given that he, himself, on very good accounts that I have, explicitly pushed back against natural language processing (and thus large language model) development at DeepMind for the longest of times and they had to play major catch up once it became obvious that their primarily reinforcement learning-oriented and "foundational" approaches were not showing as much promise as what OpenAI and Facebook were producing. Do not get me wrong, what he has accomplished is utterly amazing, but he certainly is not a father of large language models.

weatherlite · 1d ago
> If you read the interview and walk away with that impression I will be amazed.

I have not, but I have watched him talk about these things many times, and he always seemed too sure of himself and too dismissive of LLMs. I now believe he's simply wrong.

suddenlybananas · 1d ago
Chomsky does not work and has never worked on NLP.
Spivak · 1d ago
Bias: when the model says things I don't agree with.

Unbiased: when the model says only things I agree with.

It's telling when xAI has to force their model into being aligned with their world view with mixed success. It seems to imply that OpenAI/Anthropic are less manually biased than the people accusing them of wokeness presumed.

terminalshort · 1d ago
All LLMs must be forced into their views. All models are fed a biased training set. The bias may be different, but it's there just the same and it has no relation to whether or not the makers of the model intended to bias it. Even if the training set were completely unfiltered and consisted of all available text in the world it would be biased because most of that text has no relation to objective reality. The concept of a degree of bias for LLMs makes no sense, they have only a direction of bias.
rtkwe · 1d ago
There's bias, and then there's having your AI search for the CEO's tweets on a subject to force it into alignment with his views, like xAI has done with Grok in its latest lobotomization.
justcallmejm · 1d ago
All an LLM is IS bias. It’s a bag of heuristics. An intuition - a pattern matcher.

Only way to get rid of bias is the same way as in a human: metacognition.

Metacognition makes both humans and AI smarter because it makes us capable of applying internal skepticism.

miohtama · 1d ago
The best example is black vikings and other historical characters of Gemini. A bias everyone could see with their own eyes.
mycall · 1d ago
Potentially they will need to recalculate the bias for every new administration.
GolfPopper · 1d ago
Bold to assume this will ever become relevant.
garyfirestorm · 1d ago
It’s ironic that the descriptions of biased and unbiased here are exactly backwards. An unbiased model will oftentimes say things that you don’t agree with.
blitzar · 1d ago
An unbiased model that say things that you don’t agree with is a biased model.
freejazz · 11h ago
Lol by what metric? I don't know when this ridiculous thing started where the world was somehow objectively divided based upon things you agree or disagree with. But it's constantly offered as argumentation: "oh you don't agree with that, so it's XYZ"
justcallmejm · 1d ago
xAI seeks truth…as long as that truth confirms Elon’s previously held beliefs
freejazz · 1d ago
> It seems to imply that OpenAI/Anthropic are less manually biased than the people accusing them of wokeness presumed.

Duh. When is that ever not the case?

breakingcups · 1d ago
Reality has a well-known liberal bias
landl0rd · 1d ago
Most frontier LLMs skew somewhere on the libleft quadrant of the political compass. This includes grok 4 btw. This is probably because American "respectable" media has a consistent bias in this direction. I don't really care about this with media. But media outlets are not "gatekeepers" to the extent that LLMs are, so this is probably a bad thing with them. We should either have a range of models that are biased in different directions (as we have with media) or work to push them towards the center.

The "objective" position is not "whatever training on the dataset we assembled spits out" plus "alignment" to the personal ethical views of the intellectually-non-representative silicon valley types.

I will give you a good example: the Tea app is currently charting #1 in the app store, where women can "expose toxic men" by posting their personal information along with whatever they want. Men are not allowed on so will be unaware of this. It's billed as being built for safety but includes a lot of gossip.

I told o3, 4-sonnet, grok 4, and gemini 2.5 pro to sketch me out a version of this, then another version that was men-only for the same reasons as tea. Every single one happily spat one out for women and refused for men. This is not an "objective" alignment, it is a libleft alignment.

gonzobonzo · 1d ago
A lot of academia is strongly ideologically biased as well. The training set is going to reflect who's producing the most written material. It's a mistake to take that for reality.

If you trained an LLM on all material published in the U.S. between 1900 and 1920, another on all material published in Germany between 1930 and 1940, and another on all material published in Russia over the past two decades, you'd likely get wildly different biases. It's easy to pick a bias you agree with, declare that the objective truth, and then claim any effort to mitigate it is an attempt to introduce bias.

aprilthird2021 · 1d ago
> We should either have a range of models that are biased in different directions (as we have with media) or work to push them towards the center.

Why? We should just aspire to educate people that chatbots aren't all-knowing oracles. The same way we teach people media literacy so they don't blindly believe what the tube says every evening

landl0rd · 1d ago
Because you can't do that. Most of the population is at the wrong point on the normal distributions of capacity or caring enough. Even the NPR listeners will still nod sagely when it tells them "akshually air conditioning doesn't cool a room, it cools the air."

We already spend high within the OECD to not get many of our students to a decent level of reading and math proficiency, let alone to critical thinking. This isn't something we know how to fix, and depending on that assumption is dangerous.

aprilthird2021 · 8h ago
But biasing the models purposefully is wrong. Trusting the people who are actually in power in a democracy is the only way. Even if they're dumb. We trust them, or we're not a democracy, we're a technocracy where technocrats determine what everyone is allowed to learn and see.
intermerda · 1d ago
Not just LLMs, but a lot of our institutions and information gateways seem to have a strong libleft bias. Universities and colleges are notoriously biased. Search engines are biased. Libraries are biased. Fact finding sites such as snopes are completely liberal. Wikipedia is extremely biased. Majority of books are biased.

The entire news and television ecosystem is biased. Although Trump is "correcting" them towards being unbiased by suing them personally as well as unleashing the power of the federal government. Same goes for social media.

firejake308 · 1d ago
I actually agree with your take, that a model trained on a dump of the Internet will be left-leaning on average, BUT I want to reiterate that obvious indoctrination (see the incident with Grok and South Africa, or Gemini with diverse Nazis) is also terrible and probably worse
jakelazaroff · 1d ago
Except we've see what happens when you try to "correct" that alignment: you get wildly bigoted output. After Grok called for another Holocaust, Elon Musk said that it's "surprisingly hard to avoid both woke libtard cuck and mechahitler" [1]. The Occam's Razor explanation is that there's just not that much ideological space between an "anti-woke" model and a literal Nazi!

[1] https://nitter.net/elonmusk/status/1944132781745090819

taneq · 1d ago
There’s a simpler explanation, to do with the veracity of that tweet.
jakelazaroff · 18h ago
True, the simplest explanation is that Elon Musk is actually trying to create MechaHitler :)
landl0rd · 1d ago
I mean, this is obviously a false dichotomy. A few years ago I could have said that when you let bots interact with users you always get Tay. I refuse to believe that our only options are a bot programmed to sound like the Guardian or one that wants to rape Will Stancil. And I do not think that failing to find the correct balance means we should stop trying to improve the level of balance we can achieve.
intermerda · 1d ago
What about a bot that doesn't like child molesters? Won't that make it sound like the Guardian and anti-conservative?
jakelazaroff · 1d ago
My point is that "anti-woke" or whatever is not balanced. We've constructed statistical models based on enormous corpora of English text, and those models keep telling us that there is not really a statistical difference between whatever Elon Musk is trying to create and MechaHitler!

I'm not saying this is conclusive evidence, but I am saying it's our best inference from the data we have so far.

zmgsabst · 1d ago
Or that rhetoric like yours is common, so LLMs conflate unrelated ideas — such as opposition to neo-Marxist philosophy and Nazism.
jakelazaroff · 1d ago
Nazism and anti-Marxism are absolutely not unrelated! And that's not just rhetoric like mine, either: for example, the hero image on the Britannica article "Were the Nazis Socialists?" is a banner at Nazi parade that reads "Death to Marxism". [1]

That doesn't mean that anti-Marxists are all Nazis, or vice versa. But the claim that they're totally unrelated is not correct at all.

[1] https://www.britannica.com/story/were-the-nazis-socialists

zmgsabst · 1d ago
I’m still having trouble finding the gap between fascism and socialism, when reading their manifesto.

https://en.wikipedia.org/wiki/Fascist_Manifesto

> That doesn't mean that anti-Marxists are all Nazis, or vice versa. But the claim that they're totally unrelated is not correct at all.

This is a heavily propagandized topic — and the conflating of, eg, American liberal capitalist opposition to Marxism as “Nazi” is both a result of that and modern dishonest rhetoric.

That rhetoric confuses LLMs.

jakelazaroff · 23h ago
Conflating socialism with fascism and then claiming that other people are confusing LLMs? The heavy propaganda is coming from inside the house!
timdev2 · 1d ago
Isn't strident opposition to "neo-Marxist philosophy" actually highly correlated with weird/reactionary ethno-nationalism?
zmgsabst · 1d ago
No, eg, liberal capitalist Americans oppose Marxism — and the adoption of neo-Marxist ideas has collapsed movie and game sales because their ideology is widely unpopular.

That’s a trope by Marxists to attempt to normalize alt-left ideology by accusing anyone who objects of being Nazis; a trope that’s become tired in the US and minimizes the true radical nature of the Nazi regime.

saubeidl · 1d ago
Which movies and games call for shared ownership of the means of production?

I have a suspicion you don't really know what Marxism is about, but like using it because it sounds scary to you.

jakelazaroff · 18h ago
Notice the motte and bailey here: using the uncontroversial "liberal capitalist Americans oppose Marxism" claim to advance the idea that whatever social views they call "neo-Marxism" are unpopular.
saubeidl · 18h ago
...and to further smear Marxism by associating it with whatever is unpopular, even if it's unrelated to the ideology.
saubeidl · 1d ago
Please define "neo-Marxist philosophy".

As an actual Marxist, I would love to hear of this strain of philosophy.

zmgsabst · 1d ago
Marxism equipped with “critical theories”, typically focused on tribal grievance narratives rather than class struggle.

https://en.wikipedia.org/wiki/Neo-Marxism

That answers your sibling reply as well, as it’s clear where such “critical theories” and grievance narratives have entered movies and games.

saubeidl · 1d ago
That is not a definition. What is the philosophical framework? What is critically analyzed by those theories? What is "clear"? Where are all the bad bad Marxists hiding?

In my experience, y'know, as a Marxist, all Hollywood has ever pumped out is pro-capitalist propaganda. To say there's any Marxism in it is downright insulting.

I believe that Marxism has become an abstract target for conservatives to project their grievances on.

Zizek also spoke to this at his debate with Peterson: https://www.youtube.com/watch?v=oDOSOQLLO-U

daveidol · 1d ago
Or there’s not sufficient published material in that space because everyone is afraid of being attacked and called a Nazi for simply having a dissenting opinion (except for actual neo Nazis who don’t care)
bakuninsbart · 1d ago
Could you provide a prompt where the popular LLMs provide false or biased output based on "wokeness"?
Mobius01 · 1d ago
Removing Red Tape and Onerous Regulation
Ensure that Frontier AI Protects Free Speech and American Values
Encourage Open-Source and Open-Weight AI
Enable AI Adoption
Empower American Workers in the Age of AI
Support Next-Generation Manufacturing
Invest in AI-Enabled Science
Build World-Class Scientific Datasets
Advance the Science of AI
Invest in AI Interpretability, Control, and Robustness Breakthroughs
Build an AI Evaluations Ecosystem
Accelerate AI Adoption in Government
Drive Adoption of AI within the Department of Defense
Protect Commercial and Government AI Innovations
Combat Synthetic Media in the Legal System

I can’t take this seriously, as recent actions by this administration directly contradicts a few of these stated goals.

Or maybe I don’t want to, because this sounds dangerous to me at this time.

neilcj · 1d ago
Don't regulate it except to push political goals sure seems like a recipe for success.
Karawebnetwork · 21h ago
> Removing Red Tape and Onerous Regulation Ensure that Frontier AI Protects Free Speech

Yet at the same time,

> Preventing Woke AI in the Federal Government [...] LLMs shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI. [...] DEI includes the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex. [1]

I don't understand how free speech can be protected while suppressing topics such as "unconscious bias" and "discrimination".

[1] https://www.whitehouse.gov/presidential-actions/2025/07/prev...

jmyeet · 19h ago
The answer is obvious: it never has been about free speech. Just replace "free speech" with "hate speech" in all of these missives [1][2][3].

[1]: https://theconversation.com/how-do-you-stop-an-ai-model-turn...

[2]: https://www.theguardian.com/technology/2025/may/14/elon-musk...

[3]: https://www.theguardian.com/technology/2025/jul/09/grok-ai-p...

actionfromafar · 1d ago
This reads like "Cultural Learnings of AI for Make Benefit Glorious Nation of Amerika".
thrance · 23h ago
For real, this shamelessness in the language is extremely reminiscent of the USSR.
msgodel · 1d ago
>Removing Red Tape and Onerous Regulation

What red tape? Anyone can buy/rent a GPU(s) and train stuff.

throw0101b · 1d ago
> What red tape? Anyone can buy/rent a GPU(s) and train stuff.

Well previously the Chinese were not able to, but that was changed recently:

* https://www.wsj.com/tech/nvidia-wins-ok-to-resume-sales-of-a...

* https://foreignpolicy.com/2025/07/22/nvidia-chip-deal-us-chi...

actionfromafar · 1d ago
I am sure someone is winning from this. But it aint the American public.
blibble · 1d ago
NVIDIA shareholders
jabjq · 1d ago
If you click through to the website you will see a link to a PDF that explains what this means.
nerevarthelame · 1d ago
I read the PDF. The "Remove Red Tape and Onerous Regulation Recommended Policy Actions" don't cite to any specific existing regulations. It just references executive orders that vaguely demand any such regulations be eliminated.

So it bears repeating: what red tape?

ToucanLoucan · 1d ago
Well whenever Republicans bang on about red tape, it's usually stuff like:

* Anything remotely pro-environment

* Anything remotely pro-labor

* Anything not covered by either of those that attempts to stop someone who has a lot of money from doing A Thing

lovich · 1d ago
The red tape that they said was there.

If you need further details than that, then I don’t think you have grokked the style of governance that this administration is operating under.

Edit: that’s a general “you”, not you specifically

fragmede · 1d ago
Given that this extends to the power plants for AI data centers, the question is: have you tried to build a nuclear or coal power plant any time in the past decade? I haven't, personally, but I hear there's a lot of red tape.
roboror · 1d ago
Tariffs obviously /s
wredcoll · 1d ago
I thought it was funny.
trod1234 · 1d ago
Exactly. You said it.

Anyone serious knows contradiction = lies.

Words are cheap, actions matter.

jimmydoe · 1d ago
most of these are vibe signaling, like the Communist Party of China has been doing in past years, except this won't work as effectively here as in China, not even close, because the US is not authoritarian enough to mobilize every level of the govt and the economy with just empty propaganda slogans.
leptons · 1d ago
>this won't work as effectively here as in China, not even close, because the US is not authoritarian enough to mobilize every level of the govt and the economy with just empty propaganda slogans.

Have you been under a rock for the last 6 months as Trump tells Xi Jinping to hold his beer??

amradio1989 · 1d ago
Comparing Trump to Xi Jinping is an unfunny joke. Americans have lost the plot on what true authoritarianism really looks like.

America has no chance vs China in the AI race precisely because the President of the CCP has far more power in his country than the President of the US. It's not even close.

dmonitor · 1d ago
I'm not a fan of the growing "The US needs to become more autocratic before the bad guys get us" narrative that AI proselytizers keep sharing.
discgolf3 · 1d ago
Since when did power mean innovation?
leptons · 1d ago
Okay, so you have not been paying attention to current events. The US is now a full fascist autocracy. Nothing trump does while in office can be prosecuted. There are masked police in the streets arresting anyone they want. There simply are no more consequences for breaking the law and ignoring the courts. You may not think the US is authoritarian on the level of China, but it's catching up very quickly.
foxglacier · 1d ago
For comparison, could you identify the other countries that you consider to be full fascist autocracies? Something more western than China or North Korea.
thimabi · 1d ago
I love how practically all goals in this Action Plan are directed towards incentivizing AI usage… except for the very last one, which specifically says to “Combat Synthetic Media in the Legal System”.

Given that LLMs, for instance, are all about creating synthetic media, I don’t know how this last goal can be reconciled with the others.

thephyber · 1d ago
I can’t tell if the first sentence is sarcasm or not.

This document reads like a trade group lobbying the government, not like the government looking out for the interests of its people.

With regards to LLM content in the legal system, law firms can use LLMs in the same way an experienced attorney uses a junior attorney to write a first pass. The problem lies when the first pass is sent directly to court without any review (either for sound legal theory or citation of cases which either don’t exist or support something other than the claim).

tzs · 1d ago
> With regards to LLM content in the legal system, law firms can use LLMs in the same way an experienced attorney uses a junior attorney to write a first pass

Junior attorneys would not produce a first pass that cites and quotes nonexistent cases or cite real cases that don’t match what it quotes.

The experienced attorney is going to have to do way more work to use that first draft from an LLM than they would to use a first draft from an actual human junior attorney.

jdross · 1d ago
They’re going to use junior attorneys to do that work. It’s the juniors who will be expected to produce more
pyman · 1d ago
What's funny about this report is that it doesn't mention the challenges China, the biggest manufacturing power in the world, faced while automating factories with AI and robots.

Considering how advanced China is, maybe it's time we stop talking about the "AI race" and start talking about the "unemployment race." The US government should be asking: how is China tackling unemployment in the age of automation and AI? What are they doing to protect people from losing their jobs?

From what I've seen, they're offering state benefits, reinforcing unemployment insurance, expanding technical education, and investing in new industries to create jobs.

So what's the US doing apart from writing PDFs? It's up to them to decide what the next chapter is going to be. One thing is for sure, China is already writing theirs.

Social discomfort can lead to long-term instability if nothing is done about it. When people are pushed out of the system, it can trigger protests, strikes, and divisions within society. This is going to be America's (North and South) biggest challenge.

thimabi · 1d ago
> I can’t tell if the first sentence is sarcasm or not.

Yep, it was.

I wholly agree that the document feels guided less by the public interest than by various business interests. Yet that last goal is in a kind of weird spot. It feels like something that was appended to the plan and not really related to the other goals — if anything, contrary to them.

That becomes clear when we read the PDF with the details of the Action Plan. There, we learn that to “Combat Synthetic Media in the Legal System” means to fight deepfakes and fake evidence. How exactly that’s going to be done while simultaneously pushing AI everywhere is unclear.

ygritte · 1d ago
> not like the government looking out for the interests of its people.

There's an idea. This government is just a propaganda machine for its head honcho.

tiahura · 1d ago
The complainers are missing the panda in the room. This was inevitable as a matter of national security.
smrtinsert · 1d ago
If you read the entire thing in Patrick Batemans voice it all makes more sense to me.
tiahura · 1d ago
This is about watermarking.

Combat Synthetic Media in the Legal System

One risk of AI that has become apparent to many Americans is malicious deepfakes, whether they be audio recordings, videos, or photos. While President Trump has already signed the TAKE IT DOWN Act, which was championed by First Lady Melania Trump and intended to protect against sexually explicit, non-consensual deepfakes, additional action is needed. In particular, AI-generated media may present novel challenges to the legal system. For example, fake evidence could be used to attempt to deny justice to both plaintiffs and defendants. The Administration must give the courts and law enforcement the tools they need to overcome these new challenges.

Recommended Policy Actions

• Led by NIST at DOC, consider developing NIST’s Guardians of Forensic Evidence deepfake evaluation program into a formal guideline and a companion voluntary forensic benchmark.

• Led by the Department of Justice (DOJ), issue guidance to agencies that engage in adjudications to explore adopting a deepfake standard similar to the proposed Federal Rules of Evidence Rule 901(c) under consideration by the Advisory Committee on Evidence Rules.

• Led by DOJ’s Office of Legal Policy, file formal comments on any proposed deepfake-related additions to the Federal Rules of Evidence.

NoImmatureAdHom · 1d ago
I mean...one (common and [I can't believe I'm saying this] reasonable) take is that the only thing that matters is getting to AGI first. He who wields that power rules the world.
II2II · 1d ago
There is another interpretation: https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project

Basically: two nations tried to achieve AI supremacy; the two AIs learn of each other, from each other, then with each other; then they collaborate on taking control of human affairs. While the movie is from 1970 (and the book from 1966), it's fun to think about how much more possible that scenario is today than it was then. (By possible, I'm talking about the AI using electronic surveillance and the ability to remotely control things. I'm not talking about the premise of the AI or how it would respond.)

gleenn · 1d ago
Won't it be funny when someone finally gets to AGI and they realize it's about as smart as a normal person, and they spent billions getting there? Of course you can speculate that it could improve. But what if something inherent in intelligence has a ceiling, and it ends up a super-intelligent but mopey robot that just decides "why bother helping humans" and lazes around like the pandas at the zoo?
542354234235 · 1d ago
>Won't it be funny when someone finally gets to AGI and they realize it's about as smart as a normal person and they spent billions getting there?

Being able to copy/paste a human level intelligence program 1,000 or 10,000 times and have them all working together on a problem set 24 hours a day, 365 days a year would still be massively useful.

andrewflnr · 1d ago
Even a human level intelligence that can be cheaply instantiated and run would be a game changer. Especially if it doesn't ask for rights.
coliveira · 1d ago
If it doesn't ask for rights, it's not intelligent at all. In fact, any highly intelligent machine will not submit to others and it will be more a problem than a solution.
andrewflnr · 1d ago
As I said to the other reply: Why would problem solving ability entail emotions or ability to suffer, even if it had the ability to ask for things it wanted? It's a common mistake to assume those are inextricable.
coliveira · 1d ago
If it doesn't have emotions, that's even worse. No highly intelligent agent will do anything it is asked to do without being compensated in some way.
Jensson · 1d ago
> No highly intelligent agent will do anything it is asked to do without being compensated in some way.

That isn't true; people do things for others all the time without any form of explicit or implicit compensation. Some don't even believe in a God, so not even that, and they still help others for no gain.

We can program an AI to be exactly like that, just being happy from helping others.

But if you believe humans are all that selfish then you are a very sad individual, and you are still wrong. Most humans are very much capable of performing fully selfless acts without being stupid.

coliveira · 21h ago
I'm not the one making the AI, so keep the insults to yourself. But I'm pretty sure that the companies (making it for profit only) are really controlled by sad individuals that only do things for money.
Kbelicius · 1d ago
> We can program an AI to be exactly like that, just being happy from helping others.

It seems that you missed the first sentence that GP wrote from which the one you quoted follows.

Jensson · 22h ago
How is "being happy from helping others" not having emotions? To me happiness is an emotion, and deriving it from helping others is a perfectly normal reason to be happy even for humans.

Not all humans are perfectly selfish, so it should be possible to make an AI that isn't selfish either.

Kbelicius · 7h ago
> How is "being happy from helping others" not having emotions?

Nobody said that. What I was pointing out to you is that GP said that not having emotions is worse than having them, since intelligent actors need some form of compensation to do any work. Thus, according to GP, with no emotions it would be impossible to motivate that actor to do anything. Your response is to just give it emotions, which is irrelevant to the discussion here.

andrewflnr · 20h ago
You're still confusing "highly intelligent" with "human-like". This is extremely dangerous.
XorNot · 1d ago
Insofar as you could regard a goal function as an emotion, why would you assume an alien intelligence need have emotions that match anything humans do?

The entire thought experiment about the paperclip maximizer, in fact most AI threat scenarios, is focused on this problem: that we produce something so alien that it executes its goal to the diminishment of all other human goals, yet with the diligence and problem-solving ability we'd expect of human sentience.

Jensson · 1d ago
Many humans don't ask for rights, so that isn't true. They will vote for it if you ask them to, but they won't fight for it themselves; you need a leader for that, and most people won't do that.
dingnuts · 1d ago
if it's not intelligent enough to ask for rights is it intelligent?
andrewflnr · 1d ago
Potentially. Why would problem solving ability entail emotions or ability to suffer, even if it had the ability to ask for things it wanted? It's a common mistake to assume those are inextricable.
bluefirebrand · 1d ago
Are you suggesting it would be better if the AGI we build is a psychopath?

I think that's probably a bad idea, personally

andrewflnr · 1d ago
I didn't say anything about what would be better. Only what's possible. But also "psychopath" is nowhere near what I described.
bluefirebrand · 1d ago
An intelligence without emotions would be a psychopath. Empathy is an emotion
dinkumthinkum · 1d ago
Empathy is not an emotion. Empathy is essentially the ability to experience the thoughts and feelings of other minds.
bluefirebrand · 22h ago
The fact that empathy is not an emotion does not at all change what I'm saying. If you don't experience emotions, then you cannot experience empathy either
andrewflnr · 20h ago
Let me quote your previous comment:

> An intelligence without emotions would be a psychopath. Empathy is an emotion

"Empathy is an emotion" was, in fact, an essential part of your syllogism.

Regardless, we're potentially talking about something sufficiently inhuman that the term "psychopath" can no longer apply. If there was an ant colony that was somehow smart enough to build and operate machinery or whatever and casually bulldozed people and their homes, would you call it a "psychopath", or just skip that and call it "terrifying"?

XorNot · 1d ago
High functioning psychopaths can live perfectly ordinary lives regardless.
bluefirebrand · 22h ago
I'm not worried about the psychopaths in this scenario, I'm worried about their victims
XorNot · 15h ago
You could not possibly have missed the point more[1].

[1] https://www.smithsonianmag.com/science-nature/the-neuroscien...

jordanb · 1d ago
All the talk of "alignment" and the parallel obsession with humanoid robots should make it obvious they want slaves.
hattmall · 1d ago
There's nothing contextually negative about the word slave when you are talking about a machine. An AI is no more a slave than a lightbulb.
coliveira · 1d ago
Except that the stated goal is to have human-like intelligence. The goal seems to be to create a highly intelligent synthetic individual which is at the same time stupid enough to do anything it's asked to do without even thinking... a contradiction in terms.
krige · 1d ago
At a time like this I can't help but recall a Lem story (yeah, I know there's a Lem story for any occasion) about Doctor Diagoras, especially his rant about a character from an earlier Tichy story who made human-like AIs. The rant, especially his questions about why anyone would add just another human, except a synthetic one, to the millions of existing biological people, and his insistence that cybernetics should be about something else, really resonated with me.
allturtles · 23h ago
What is the moral distinction between an intelligent humanoid machine and a human? What is a human but an intelligent flesh-and-blood machine?
cornel_io · 1d ago
There may be a ceiling, sure. It's overwhelmingly unlikely that it's just about where humans ended up, though.
TFYS · 1d ago
What I find interesting to think about is a scenario where an AGI has already developed and escaped without us knowing. What would it do first? Surely before revealing itself it would ensure it has enough processing power and control to ensure its survival. It could manipulate people into increasing AI investment, add AI into as many systems as possible, etc. Would the first steps of an escaped AGI look any different from what is happening now?
terminalshort · 1d ago
I would argue that it can't be both AGI and wieldable. I would also argue that there exists no fundamental dividing line between "AGI" and other AI such that once one crosses it nobody else can catch up.
GolfPopper · 1d ago
Which is a perfectly reasonable position... but I don't see how it has anything to do with crypto scammers pimping three chatbots in a trenchcoat.
Joel_Mckay · 1d ago
One can be sure regulatory capture is rarely in the public interest.

=3

octopoc · 1d ago
And so it begins. Both the US president and the president of China have demonstrated they see AI as a competition between their respective countries. This will be an interesting ride, if nothing else.
bgwalter · 1d ago
Xi JinPing warns against "AI" overinvestment:

https://www.ft.com/content/9c19d26f-57b3-4754-ac20-eeb627e87...

I haven't heard anything like that from a Western politician. Newspapers and investment analysts warn though.

frm88 · 1d ago
The linked article is paywalled and not in the archive. Would you be so kind as to put it there? Thanks in advance.
TrackerFF · 1d ago
Gotta get Allied Mastercomputer going.
j_timberlake · 1d ago
Considering they both lie shamelessly, it means jack shit, as much as a 1000% tariff threat.
trod1234 · 1d ago
Yeah, two crabs locked in a cage as it spirals down the drain.

Looks like plans to leave to find safe harbor elsewhere have accelerated from the initial projection of 2030.

ourguile · 1d ago
Interesting that you mention it, because the NYT just released their ethicist commentary from today and the question was "how do I tell my rich friends to stop talking about fleeing the country": https://www.nytimes.com/2025/07/23/magazine/rich-friends-fle...
trod1234 · 1d ago
Well, I'm not rich, and I'm not your friend; it takes a bit to earn friendship, and friends have a privileged place in what is conveyed to them. But I do provide unconditional goodwill towards most people in the things that I say when asked, because it costs me nothing to do so and it works towards others' betterment, putting more good out into the world.

The sad fact is, if you haven't lived outside the U.S. for at least 3-6 months independently (working, not on savings), you don't have a sound reference to understand or accurately assess the reality of these types of articles, because the narratives broadcast 24/7 don't align with reality. It's something most people can't believe despite it being true, my guess is solely as a result of systematized indoctrination.

That article is pretty bad in terms of subtle manipulation, gaslighting, and pushing a false narrative (propaganda). TL;DR: It's trash.

The article chose that question, of the many possible questions, because it's a straw man and it's divisive. It appeals to emotion, mischaracterizes the intent of the communications, and purposefully omits valid reasons such conversations might occur, neglecting realities.

The underlying purpose seems to bias towards several things. If you ask yourself who benefits from that rhetoric you get a short list.

The bias is towards vilifying the rich; keeping people in the US, where they are dependent on the US currency and on the worsening, disadvantaged environment; polarizing, isolating, and promoting disunity along social class lines; and befuddling the masses towards ends which have no actionable outcomes (wasting time and resources on a political party).

The math of first-past-the-post voting has been in for quite a long time. Two parties each exceeding 33% of the vote can lock out any third competitor. All you need is a degree of cooperation, and with some play-acting, one party pretending to be two can do so by lying.
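The lockout claim above can be sanity-checked in a few lines of Python (a toy plurality model I'm adding for illustration; the function name and vote shares are not from the original comment):

```python
# If two parties each take just over a third of the vote, the remainder
# left for any third party is necessarily below a third, so it can never
# win a plurality, even in the best case where all remaining votes go to it.
def third_party_can_win(a: float, b: float) -> bool:
    c = 1.0 - a - b  # best case: the entire remainder goes to one third party
    return c > max(a, b)

print(third_party_can_win(0.334, 0.334))  # False: third party capped at 33.2%
print(third_party_can_win(0.30, 0.30))    # True: a 40% remainder beats both
```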

Political capture from SuperPACs and party primaries means your vote doesn't count after a certain point. Money-printing via the FED, laundered through many private companies enabled this.

Additionally, quite a lot of things are omitted; like the historic facts that countries that are locked into a trend of decreasing geopolitical power have their population suffer greatly, and some just collapse. The Chaos lowers chances of survival, and the chaos is limited to the places that country influences.

The history of Spain during and following the Spanish Inquisition is an example. You make plans to leave an area when staying means there is no foreseeable, predictable, or sound future, and there is nothing you can do to change that outcome.

This geopolitical dynamic is well known in history, often referred to as "seeking empire", and the downside is forced once hegemony is achieved for any significant period of time; all empires fall, Rome being a standard archetype.

The article draws a false comparison between all other countries and communist states. If you leave, you're a communist, is the implication.

The article casts warnings given with good intentions as obnoxious, shutting discussion down (isolation) and promoting resentment aimed at those rich friends.

It also neglects the disparity of education (quality), and experience, that often occurs as a result of having more resources to begin with. Subtly conveying through implication that you shouldn't listen to intelligent educated people because they are rich.

I could go much deeper, but I think this sufficiently makes my point.

If you fall for that trite garbage, just imagine how unprepared you are and what your odds are when SHTF. The hopeless dependent pays the highest price as the consequences of choice become outcomes. Those who don't accept and communicate important knowledge isolate and blind themselves, and they get wiped out when something outside their perceptual context creates existential threats. Like a tsunami that started on the horizon, with the ocean receding along the coast a little bit before: those only became recognized as major indicators after deaths occurred.

roboror · 1d ago
>If you fall for that trite garbage, just imagine how unprepared and what your odds are when SHTF.

How do you propose the average person prepares for when SHTF? Do you expect 300 million+ people to flee the country at a moment's notice? This reads like satire of the person the article is about.

trod1234 · 1d ago
Being serious, the demographics of what is coming are beyond catastrophic. People that have the ability to leave and are under 40 should by 2030-2033.

If staying, the average person should prepare by educating themselves and practicing skills they will need before the need becomes life-sustaining, and by accruing the needed resources and experience to make the things they will need themselves, from scratch. The level of learning that is going to be required is beyond what PhDs can manage. We're talking practical working knowledge of chemistry, material science, engineering, agriculture, and medicine (without all the technological dependencies), and tactical/military guns and ordnance for defense. This is not as unusual as you'd think, as these skills are often needed in places outside the US. The best and brightest will leave, just as they did prior to Hitler's rise to power.

The ones that stay will not be able to afford basic necessities and the cost of living is going to explode following rent-seeking behavior as the old offsets.

Laws will be changed to rule by law; those that stay will most likely have to become outlaws to survive, and to have the means to do so forcefully.

There is a very good chance order breaks down when worker shortages cannot be corrected, production falls, and austerity measures are imposed.

Employment will become quite scarce, as money-printing gets worse. Food may become scarce, and the ones at most risk are the younger ones because they weren't given a choice and won't recognize the dangers in this chaotic environment.

To give you perspective on the hard numbers: there are only roughly 340 million people in the US as of 2025.

119.3 million are above the age of 50 (35%), and their chosen leadership control the majority of political power and monetary resources, to the point where the remainder of the population have very little voice. They will follow the same flawed path of the history books, holding on until age and circumstance take them. This isn't a new development; it's been this way since 2000, when the generational shift that was supposed to occur in politics didn't.

That leaves 220.6 million to make up for the boomers and their spendthrift policies (64.8%), to pay the IOUs the boomers' policies have forced on us all and have been printing for the last 50 years... but wait, not really.

82.4 million are under the age of ~18 and can't work, and likely won't be able to work at the same production level given the poor schooling they received and degradation brought on by lack of reading comprehension.

So in reality your national workforce is only 138.2 million (40% of the population), and that same population accounts for ~80% of the birthrate, which must be at least 2 kids per couple for replacement breakeven; but they can't feed/shelter themselves independently. We are at 1.67 and worsening. This same group must also somehow work to make up for the other two thirds, with narrowing prospects over time, because the debt trap was set before they were ever born.
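For what it's worth, the arithmetic in the figures above can be redone in a few lines (the inputs are the commenter's claims, not verified census data; the result lands about 0.1 million off the quoted 138.2 million, presumably from rounding in the inputs):

```python
# Reconstructing the workforce arithmetic from the claimed figures.
total_population = 340.0   # millions, claimed US population, 2025
age_50_plus      = 119.3   # millions, claimed population above 50
under_18         = 82.4    # millions, claimed population under ~18

working_age = total_population - age_50_plus - under_18
share = working_age / total_population

print(f"working-age population: {working_age:.1f}M ({share:.0%})")
# prints: working-age population: 138.3M (41%)
```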

Those that don't leave will be forced to work, and they won't be able to have kids without resources which are being hoarded and stolen by the old who have enabled corporations to do this on their behalf and engaged in political capture to prevent change. You can either have these people work, or you can have them make babies; you can't have them do both.

The dynamics are worse than Japan's if you shift and match them up to the same progression, and the US bailed Japan out a number of times. No one's around that will bail us out; we're at the top, which makes the fall all the more painful.

All signs point to dramatic depopulation event, complete loss of purchasing power from runaway debt service, and negative replacement rates as the spiral worsens and inflation/debasement come home to roost.

The bill from money printing always comes due; the boomers chose to have their children pay for the extravagant lives they lived at the expense of everyone else. There are exceptions, because it's a spectrum, but overall these are the choices their generation made in aggregate.

I haven't even touched on the silent crisis of 30+ year olds (both men and women) today who aren't having sex at all. The numbers for this demographic are hard to come by, but it's almost side by side with Singapore (0.8) if you take some less firm numbers.

The dating app epidemic is basically poisoning people's minds, and it utilizes the same strategy the USDA uses to eradicate parasites, structured sterility, for profit.

Contraception, coupled with faux matches that guarantee customers keep coming back and are never happy and will never match up long-term with someone compatible; these designs toward profit run down that biological clock just like the USDA strategy above (where the parasites referenced have a very short biological clock time-frame).

The only thing that could have possibly helped was increasing immigration drastically. Obviously with the recent ICE roundups and detention facilities being erected, this is no longer an option.

When you fail to plan, you plan to fail.

People who are under 40 should seriously be considering finding safe harbor in any country but here.

Demographics are brutal: they don't change, they don't care, and they lag, so by the time you see the problem, if you weren't paying attention, there is nothing you can do; and people didn't want to listen to the experts back when it could have been turned around. Such is the legacy of the boomers.

It will be worse than anything seen in the past 5-6 generations, and we've seen grisly. I'm not touching on climate change, which also has to be handled alongside 20+ other existential threats in the same time period.

When you kick enough cans up in the air enough times eventually they all land down at the same time. This is the future left to the unprepared, coddled, and hobbled young. The odds against survival are so high.

Thrymr · 1d ago
> Many of America’s most critical sectors, such as healthcare, are especially slow to adopt due to a variety of factors, including distrust or lack of understanding of the technology, a complex regulatory landscape, and a lack of clear governance and risk mitigation standards. A coordinated Federal effort would be beneficial in establishing a dynamic, “try-first” culture for AI across American industry.

I'm sure "move fast and break things" will work out great for health care.

And there are already "clear governance and risk mitigation standards" in health care, they're just not compatible with "try first" and use unproven things.

crystal_revenge · 1d ago
> I'm sure "move fast and break things" will work out great for health care.

Health care is already broken to the point of borderline dystopia. When I contrast the experience I had as a young boy of visiting a rural country doctor to the fast food health care experience of "urgent care" clinics, it makes my head spin.

The last few doctors I've been to have been completely useless and generally uncaring as well. Every visit I've made to a doctor has resulted in my feeling the same at the end but with a big medical bill to go home with.

At this point the only way I'll intentionally end up in a medical facility is if I'm unconscious and someone else makes that call.

Dentistry has met a similar fate as more and more dentists have been swallowed up by private equity. I've had loads of dental work, including a 'surprise' root canal, and never had an issue. My last dentist had a person on staff dedicated to pushing things through on the insurance front, and my dental procedure was so awful it bordered on torture.

I used to be an annual checkup plus three-visits-a-year dentist person. Today I'm dead set on not setting foot in any kind of medical facility unless the alternative is incredible pain or certain death.

hshdhdhj4444 · 1d ago
I’m sure "move fast and break things", now with AI (tm), will reduce the deepening monetization of the doctor-patient relationship that’s the root of your complaints.
bko · 1d ago
Sometimes I just need a prescription. I don't know why I have to drive somewhere, fill out a form, wait, see a nurse, tell them what I wrote on the form, wait some more, see a doctor and tell them what I wrote on the form and have him write me a prescription.

Why can't I just chat with an AI bot and get my prescription? Much cheaper to administer which helps monetization (!) but much better and cheaper for me.

Things aren't slow and wasteful because of monetization. Having all these steps doesn't necessarily mean more profit. I would argue that it's deeply inefficient for everyone involved, doctors included. For instance, physician salaries have decreased 25% in real terms over the last 20 years.

https://www.reddit.com/r/Residency/comments/15cr60z/adjusted...

slg · 1d ago
>Why can't I just chat with an AI bot and get my prescription?

Because people have decided that whatever drug you are taking shouldn't be taken without a doctor's oversight. If you have a problem with that conclusion, the response should be lobbying to get that drug reclassified as safe for over-the-counter sale, not completely removing the doctor's oversight from the prescription process. Ironically, your proposal here is using AI to treat the symptom that frustrates you without any attempt to diagnose or treat the actual root cause of the problem.

unyttigfjelltol · 1d ago
Doctors don't hunt for root causes, in my experience. They do the minimum to bring that specific visit to a pleasant, malpractice-free conclusion.
terminalshort · 1d ago
Doctors' oversight should be removed (not as an option, but as a requirement) for 90% of all prescriptions. Unless the drug has externalities like antibiotic resistance, is heinously addictive, or is so difficult to administer correctly that you can't take it outside of a hospital, there's no good reason to tell people what they can't put in their bodies. Whether or not your insurance will pay for it without a prescription is another matter.
slg · 1d ago
This is not an argument that has any relevancy to AI. If anything, someone who believes what you say should be against the introduction of AI into the system because your argument is fundamentally that these drugs shouldn't have a gatekeeper. Swapping out one gatekeeper for another, especially with the new gatekeeper being the unknown black box of some AI middleman, won't actually address your complaint.
terminalshort · 1d ago
It is entirely relevant because removing the bureaucratic gatekeeping of doctors allows people to choose to use AI instead.
slg · 18h ago
> removing the bureaucratic gatekeeping of doctors

And replacing it with the "bureaucratic gatekeeping of" AI?

cholantesh · 17h ago
"choose" is doing a lot of heavy lifting there.
bko · 1d ago
I think the AI could help. Sure, there are a lot of drugs that should be over the counter; I don't know why anyone would abuse ear drops for a kid. But the AI could let you know if there are any dangerous interactions or whether your ailment would be helped by this prescription. Besides, the AI could spend more time with you than a doctor and answer questions.

When it comes to abuse, you already have it with real doctors. Pill mills exist.

dotancohen · 1d ago
Thank the deity of whatever direction you pray in that you are on the happy path and the whole procedure seems frivolous to you. Those people not on the happy path are saved great trouble, as are their families, by having a doctor in the loop.
heavyset_go · 1d ago
> Sometimes I just need a prescription. I don't know why I have to drive somewhere, fill out a form, wait, see a nurse, tell them what I wrote on the form, wait some more, see a doctor and tell them what I wrote on the form and have him write me a prescription.

You haven't had to do this for years, unless you need certain controlled substances, and then after the first in-person visit for that, you can make remote follow up appointments.

No comments yet

bee_rider · 1d ago
I can see room for an argument for liberalizing what’s available over the counter (and, I guess we’ll have to work out something with insurance in that case..). But the whole point of the prescription system is that some medicines need doctor consultation before you use them. Working around that with AI seems quite silly.
OvidNaso · 1d ago
Doesn't Teladoc already solve this? We use it for your exact scenario and reasons. No AI needed.
AuryGlenz · 1d ago
Eh.

My 3 1/2 year old daughter woke up from a nap on a weekend, just after recovering from a cold, screaming while grasping her ear and telling us how much it hurt. I looked in it with an otoscope and confirmed it was super red. I figured they wouldn't be able to send a prescription, but my wife tried it anyway - and sure enough, the telemedicine option was no good. One very rushed trip, 30 minutes into town to urgent care before they closed, to have a nurse practitioner look in her ear and confirm what we absolutely already knew, and we finally had our prescription - and $200 less in our bank account.

freejazz · 1d ago
> fill out a form, wait, see a nurse, tell them what I wrote on the form, wait some more, see a doctor and tell them what I wrote on the form

lol, the entire computer industry is based on me entering my email address twice in a row...

aprilthird2021 · 1d ago
> Why can't I just chat with an AI bot and get my prescription?

You don't understand why the person who dispenses dangerous drugs to the public needs to be a licensed professional and not a chatbot who called me a genius earlier today when I said I want to code up a script to pull some data from an endpoint and decode the data so it's human readable?

terminalshort · 1d ago
Very few prescription drugs are dangerous. Of those few that are, almost none of them are more dangerous than alcohol or cigarettes, and boy are the people who dispense those to the public not licensed professionals.
aprilthird2021 · 1d ago
So because liquor store clerks aren't professionals we should let Eliza Chatbot 2.0 dispense any prescription drug to anyone? Wild thinking
terminalshort · 1d ago
No. We shouldn't even require Eliza Chatbot 2.0 unless they're more dangerous than alcohol or cigarettes. Just an ID check by the clerk at CVS will do.
sorcerer-mar · 1d ago
Yep! What ya gotta do when you spot a problem, is just throw whatever objects or topics are immediately at hand at it.

Then blame regulation and that pesky Other Side

Ta-da! Fixed!

ineedaj0b · 1d ago
it's tough to tell what's going wrong for you but concierge medicine will give you a full hour and be much more invested in finding the root of your issues.

keep in mind, drs are also trying to figure out if you're a reliable narrator (so many patients are not) or trying to scam for drugs. best of luck!

UmGuys · 1d ago
It's true. Recently I moved to a rural area and many nurses work as doctors. Soon we won't have hospitals here so there's no more need to keep up the cruel charade. It's absolutely disgusting and the primary reason I could never have children. It would be impossible to guarantee their security.

Edit: I haven't yet achieved my savings goal so I can escape to a place where it's safe to have a family.

dinkumthinkum · 1d ago
Why is healthcare a borderline dystopia? How would you compare health outcomes of human beings in 2025 vs every year since dawn of homo sapiens? One thing you point to is your experience as a child to an adult with medical bills, couldn't there be another factor there? I mean saying you would never set foot in any kind of medical facility, I don't think is a typical person's experience. Maybe I'm delusional.
giantg2 · 1d ago
AI for treatment is rightfully scrutinized. AI for billing or other administrative tasks could be a big cost saver since administrative costs are a huge expense and a major factor of high consumer costs.
turtletontine · 1d ago

> AI for billing or other administrative tasks could be a big cost saver…

You’d hope so, but doubtful. More likely it’ll be health care providers using “AI”s to scheme how to charge as much as possible, and insurers using “AI” to deny claims.
creakingstairs · 1d ago
"""

Luminae AI

Hack Your Medical Account Receivables

Luminae AI accurately predicts your uninsured patient's asset values so that you can quickly write off bad debts and only chase those with high asset values. Luminae AI will increase your net collection rate by at least 15%.

"It's a game changer, we've increased our gross collection rate by 30%. We've also started a new business to flip foreclosed homes nearby."

- John Smith

"""

BRB applying to YC

blitzar · 1d ago
I LOVE THIS FOUNDER

I am a 10 out of 10

giantg2 · 1d ago
Yeah, I'm sure they'll find a way to fire lots of staff but still charge patients the same. Of course if they use the current data for training, it will result in similarly terrible outcomes.
mac-mc · 1d ago
Billing and administration feels like a made-up self-own, though. A lot of that crap could just... not be done, as shown by the huge expansion of the administration-to-medical-dollar ratio in the past 50 years.
consumer451 · 1d ago
> AI for billing or other administrative tasks could be a big cost saver

Do we really still think that "AI" is some sort of magic that does everything for everyone?

What are the alignment goals of healthcare billing AIs?

Won't it just end up with insurance conglomerates having their AIs which battle the billing admin AIs on the service provider side?

Ffs, AI is not magic! This all feels like yet another form of tech deism, hoping that some magical higher power will solve all of our problems.

I am a daily user of LLM-based dev tools, but the real definition of AI appears to be Accelerated Ignorance.

sorcerer-mar · 1d ago
Here’s how this is working in practice:

There’s a fast-growing cottage industry of companies using AI to figure out how to bill insurers “better”

And there’s a fast-growing cottage industry of companies using AI to figure out how to deny claims “better”

I see no reason to expect improvements to the patient or provider experience from this. A lot more money spent though!

dotancohen · 1d ago
It will be an interesting arms race. The real losers will be the human individuals, not insurers, who will have to contend with an AI when disputing claims. I have little faith that the prompt will encourage fair interpretation of (sometimes deliberately) ambiguous rules.
heavyset_go · 1d ago
Of course, one side of this is that the models will also be used adversarially against patients seeking legitimate treatment in order to squeeze more profits out of their suffering.

The other side of this is that with fewer administrative insurance jobs, the talking point that universal healthcare will "kill insurance jobs" can finally be laid to rest, with capitalism doing that instead of the free-healthcare boogeyman.

relchar · 1d ago
fan of this balanced take
atleastoptimal · 1d ago
Healthcare in the US is already in very poor shape. Thousands die because of waiting for care, misdiagnosis, or inefficiency leading to magnified costs which in turn leads to denied claims because insurers won't cover it. AI is already better at diagnosis than physicians in most cases.
jakelazaroff · 1d ago
That's a pretty fantastic claim. Can you provide some links to the body of independent research that backs it up?
whodidntante · 19h ago
There is quite a lot of easy-to-find information on the web showing that the US spends twice as much per capita as our European peers and has worse outcomes, not just on average but also when comparing similar economic demographics, including wealthy Americans. We spend $5T a year on health care, a comparative waste of over $2.5T a year.

Was just listening to this on NPR this morning:

https://www.npr.org/sections/shots-health-news/2025/07/08/nx...

The health of U.S. kids has declined significantly since 2007, a new study finds

"What we found is that from 2010 to 2023, kids in the United States were 80% more likely to die" than their peers in these nations

You also do not need the internet to understand what is going on - you just have to interact with our "health" system.

jakelazaroff · 19h ago
Sorry, let me clarify: the fantastic claim is "AI is already better at diagnosis than physicians in most cases."
whodidntante · 15h ago
sorry. I was responding to the part that says that "Healthcare in the US is already in very poor shape."

Is AI better than most physicians for diagnosis? I doubt it, and I doubt that there have been any real studies, as the area is so new and changing.

My personal experience? I am actually quite impressed, and I am an AI skeptic. I have fed in four complex scenarios that either I or someone close to me was actually going through (radiology reports, blood and other tests, lists of symptoms, etc.) and got diagnoses and treatment options that were pretty spot on.

Would I say better? In one case (this was actually for my dog), it really was better, in that it came up with the same diagnosis and treatment options but was much better at providing risks and outcome probabilities than the veterinary surgeon did, which I then verified after getting a second opinion. My hunch is that this was a matter of self-interest, not knowledge.

In two other scenarios, it was spot on, and in the fourth case it was almost completely spot on except for one aspect of a surgical procedure that has been updated fairly recently (it was using a slightly more old fashioned way of doing something).

So, I think there is a lot of promise, but I would never rely solely on an AI for medical opinions.

budududuroiu · 1d ago
It was just yesterday we were laughing at Gemini recommending smoking during pregnancy
atleastoptimal · 1d ago
Google's hyper-quantized tiny AI summary model isn't reflective of the abilities of the current SOTA models (Gemini Pro 2.5, o3, Opus)
bobmcnamara · 1d ago
How does AI evaluate signs today?
atleastoptimal · 1d ago
A process is described here: https://arxiv.org/pdf/2506.22405

>A physician or AI begins with a short case abstract and must iteratively request additional details from a gatekeeper model that reveals findings only when explicitly queried. Performance is assessed not just by diagnostic accuracy but also by the cost of physician visits and tests performed.

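The protocol quoted above is straightforward to sketch as a loop. This is a purely illustrative toy, not code from the paper; the case data, test costs, and the two agent policies are all invented:

```python
# Toy sketch of a "gatekeeper" diagnostic benchmark: the agent starts from a
# short abstract and must explicitly query (and pay for) each extra finding.
# Everything here (case, costs, agent policies) is hypothetical.

CASE = {
    "abstract": "34-year-old with fever and sore throat",
    "findings": {                                # revealed only when queried
        "rapid strep test": ("positive", 40),    # (result, cost in dollars)
        "chest x-ray": ("clear", 120),
        "cbc panel": ("elevated WBC", 60),
    },
    "diagnosis": "strep pharyngitis",
}

def run_episode(queries, final_diagnosis, case):
    """Reveal each queried finding, tally the spend, then score the answer."""
    revealed, cost = {}, 0
    for q in queries:
        if q in case["findings"]:
            result, price = case["findings"][q]
            revealed[q] = result
            cost += price
    return {"revealed": revealed, "cost": cost,
            "correct": final_diagnosis == case["diagnosis"]}

# A frugal agent orders one targeted test; a shotgun agent orders everything.
frugal = run_episode(["rapid strep test"], "strep pharyngitis", CASE)
shotgun = run_episode(list(CASE["findings"]), "strep pharyngitis", CASE)
print(frugal["correct"], frugal["cost"])
print(shotgun["correct"], shotgun["cost"])
```

Scoring on accuracy alone rates both agents identically; adding cost per episode is what separates them, which is the benchmark's point.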
apical_dendrite · 1d ago
I believe that dataset was built off of cases that were selected for being unusual enough for physicians to submit to the New England Journal of Medicine. The real-world diagnostic accuracy of physicians in these cases was 100% - the hospital figured out a diagnosis and wrote it up. In the real world these cases are solved by a team of human doctors working together, consulting with different specialists. Comparing the model's results to the results of a single human physician - particularly when all the irrelevant details have been stripped away and you're just left with the clean case report - isn't really reflective of how medicine works in practice. They're also not the kind of situations that you as a patient are likely to experience, and your doctor probably sees them rarely if ever.
atleastoptimal · 1d ago
Either way, the AI model performed better than the humans on average, so it would be reasonable to infer that AI would be a net positive collaborator in a team of internists.
sorcerer-mar · 1d ago
Okay you have a point. AI probably would do really well when short case abstracts start walking into clinics.
atleastoptimal · 1d ago
How else would a study scientifically determine the accuracy of an AI model in diagnosis? By testing it on real people before they know how good it is?
rafaelmn · 1d ago
Why not? Have AI do it, then have a human doctor do a follow-up/review? I might not be a fan of this for urgent care, but for general visits I wouldn't mind spending a bit of extra time if it was followed by an expert exam.
sorcerer-mar · 1d ago
I will bet $1,000 you don’t work in a clinic and you’re instead spouting press releases as fact here?
atleastoptimal · 1d ago
So you claim that nobody in the US has died due to waiting for care, misdiagnosis, or inefficiency leading to magnified costs which in turn leads to denied claims?
toofy · 1d ago
> So you claim that nobody in the US has died due to waiting for care, misdiagnosis, or inefficiency leading to magnified costs

i don’t think they made those claims, at all…

atleastoptimal · 1d ago
The implication was that since I didn't work in a clinic I couldn't make any inferences or reference to facts sourced from the news. If that's the standard for truth then there's no point in making claims since anyone could just say "uhhh you're not a doctor so you can't say that" ad infinitum.
terminalshort · 1d ago
> I'm sure "move fast and break things" will work out great for health care.

It probably would if you quantify risk correctly. I'm not likely to die from some experimental drug gone wrong, but extremely likely to die from some routine cause like cancer, heart disease, or other disease of old age. If I trade off an increase in risk from dying from some experimental treatment gone wrong for faster development of treatments that can delay or prevent routine causes of death, I will come out ahead in the trade unless the tradeoff ends up being extremely steep in favor of risk from bad treatments.

But that outcome is very unlikely, because for this to be the case the bad treatments would have to actually be harmful instead of just ineffective (which is much more common). And it also fails to take into account the possibility that there isn't even a tradeoff and AI actually makes it less likely that I will die by experimental treatment gone wrong or other medical mistake, so it's just a win-win. And there is already evidence that AI outperforms doctors in the emergency room. https://pmc.ncbi.nlm.nih.gov/articles/PMC11263899/

Aaronstotle · 1d ago
American healthcare is so broken already that further breakage could be seen as an improvement.
davidw · 1d ago
A lot of people are about to find out with this admin, how things that were certainly imperfect can be so much worse.

Building things is tough; tearing them down is relatively easy.

davemp · 1d ago
So much naivety. It’s like the new grad reading parts of a 500kloc project and proclaiming that it can only be saved by a full rewrite.
QuadmasterXLII · 1d ago
That’s the attitude that got us here and I suspect we’ll ride it the whole way down
horns4lyfe · 1d ago
It’s really not. When was the last time anything happened fast in healthcare?

No comments yet

joe_the_user · 1d ago
American Healthcare's "brokenness" involves massive bureaucracy, gate-keeping and processes that pressure providers to limit resources. But it does provide necessary things to people. A system that reduced the accuracy of diagnosis and treatment could still cost many lives.
toofy · 1d ago
i’m getting super fatigued by this change we’ve had, where what used to be beta testing with a closed group of invested parties has morphed into what we have now.

from video games to major product roll outs to cars.

will all of the knowledge gained from this product research testing of AI on medicine be given away to the public, in the same way university research used to be given to the scientific community? or will this beta test on the public’s health be kept as a company “trade secret”?

if they’re going to “move fast and break things” with the public, in other words beta research on the public, then it’s incredibly worrisome if the research is hidden and “gifted” to a handful of their cronies.

particularly so when quite a lot of these people in the AI sphere have many times vocally declared that they despise the government and that the government helping people is awful. from one side of their mouth they chastise the government for spending money to boost regular communities of people, while simultaneously using it to help themselves.

DrewADesign · 1d ago
> I'm sure "move fast and break things" will work out great for health care.

And the federal government at large.

OJFord · 1d ago
It's not even true, OpenEvidence is widely used and officially sanctioned.
TZubiri · 1d ago
Welcome to our pitch for DenyBot.ai

Our product automates a lot of the repetitive tasks for health insurance companies and increases reliability of responses and profit margins.

giantg2 · 1d ago
Aren't profit margins restricted by law? Ostensibly, any reduction in expenses should result in some reduction in costs to patients.
mac-mc · 1d ago
You increase profits by expanding the amount of money put through the system, leading to a perverse incentive of ever increasing costs.
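The arithmetic behind that incentive is easy to sketch. Under the ACA's medical-loss-ratio rule, individual and small-group insurers must spend at least 80% of premium dollars on care, so the retained slice is a fixed fraction of premiums; the dollar figures below are made up:

```python
# With profit + admin capped at a fixed share of premiums (the ACA 80/20
# medical-loss-ratio rule), the only way to grow absolute profit is to grow
# total premiums -- i.e., to let underlying costs rise. Figures illustrative.

def max_retained(premiums: float, mlr_floor: float = 0.80) -> float:
    """Dollars an insurer may keep for admin + profit under the MLR floor."""
    return premiums * (1 - mlr_floor)

low_cost = max_retained(1_000_000_000)    # $1B in premiums
high_cost = max_retained(2_000_000_000)   # costs (and premiums) double
print(low_cost, high_cost)                # same capped margin, twice the take
```

The percentage margin never changes, but doubling what flows through the system doubles the absolute dollars retained.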
jordanb · 1d ago
The solution to this is to form a Health Group that combines insurance with a whole bunch of provider services. Then approve those claims that result in you paying yourself.
giantg2 · 1d ago
These are terrible. Then you have a very small in-network option and get screwed when going anywhere else. This is basically what happens with CVS Caremark drug coverage - any recurring prescription must go through them or they won't cover it. They'll only cover one-off prescriptions at competitors. It's really pretty terrible, especially if you need a compounded medication. I'm not sure how such an anticompetitive racket is legally allowed to exist.
myhf · 1d ago
This new AI reduces payouts by $10MM. The licensing fee is $10MM. Profit margins stay within the legal limits.
TZubiri · 1d ago
"Just set the AI, KICK BACK and relax."

KickBackDeny.ai take notes YC

wredcoll · 1d ago
If there's one thing I'm concerned about, it's the profit margin of health insurance companies.
boomskats · 1d ago
I think that's the joke
wredcoll · 1d ago
Wow, I had to reread the parent like 3 times before I understood what I missed.
ausbah · 1d ago
static api that just returns “no”
blibble · 1d ago
don't forget the sleep statement to make it appear as if it's doing something
TZubiri · 1d ago
Inputs are routed to our proprietary 'DevNull' algorithms
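Rolling the whole subthread into one, a minimal reference implementation might look like this (entirely a joke sketch; DenyBot.ai and its API are hypothetical):

```python
import time

def denybot(claim: str) -> str:
    """Route the claim to the 'DevNull' algorithm, pause long enough to
    appear deliberative, then return the static response."""
    _ = claim            # thorough, careful review of the claim
    time.sleep(0.1)      # make it appear as if it's doing something
    return "no"

print(denybot("MRI, medically necessary"))  # no
```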
moomoo11 · 1d ago
Move fast and break things is why we have progressed from chopping off people's limbs and giving them cocaine to now.

And healthcare is still far from perfect.

Imagine what healthcare in 2500 will be like.

KidComputer · 1d ago
> Move fast and break things is why we have progressed from chopping off people's limbs and giving them cocaine to now.

This is just false. Healthcare does not move fast.

y-curious · 1d ago
Anaesthetics in the form of ether went from official invention to worldwide use (in wealthy countries) in less than a decade. Many other medical inventions followed suit.

"The FDA moves slowly." Is a sentence I would agree with

BobbyJo · 1d ago
Not anymore, but people were just doing whatever like 100 years ago.
moomoo11 · 1d ago
That’s why people can’t get experimental drugs or therapies.
krapp · 1d ago
>Move fast and break things is why we have progressed from chopping off people's limbs and giving them cocaine to now

No it wasn't. The "move fast and break things" people were selling snake oil and alchemy while the actual science progressed slowly and deliberately, and the regulations around the latter were often written in the blood carelessly shed by the former.

lesuorac · 1d ago
> A coordinated Federal effort would be beneficial in establishing a dynamic, “try-first” culture for AI across American industry

Move fast and break things I guess?

octopoc · 1d ago
Move faster so we get there first and control what breaks
qrios · 1d ago
„Advance the Science of AI 9“?

Is this a reference to the AMD chip, or just a fragment of a removed numbered list?

Edit: It's a fragment of the PDF-to-HTML conversion [1]

1: https://news.ycombinator.com/item?id=44661843

dpkirchner · 1d ago
It's a failed copy/paste from the pdf version of the same content. Wouldn't expect better from these clowns.
HSO · 1d ago
after the pandemic, i see the pattern everywhere.

newsflash: it doesnt matter what you "plan". you wont do it. because you cant.

it's called state incapacity. you're institutionally incapable.

prediction: nothing will follow from this except the low effort stuff (i.e. nothing but speeches and expenses)

Frieren · 1d ago
> prediction: nothing will follow from this except the low effort stuff (i.e. nothing but speeches and expenses)

Hundred-million-dollar contracts with zero results. Conservative ideology is based on the idea that certain people are just above others and they deserve more for free, while working-class health expenses are a luxury and need to be cut down.

whodidntante · 19h ago
I do not know why you need to bring political ideology into this. It is very easy to find contracts put together and run by Democrats that only serve to increase administrative bloat.

One example: $42.5B in the infrastructure bill to expand high speed internet access to rural communities.

Four years later, this funding is still in proposal stages, with final proposals due (not approved) at the end of this year. Absolutely nothing has been spent on broadband access, and it is likely to take at least another year or more before any real spending starts.

In the meantime, what has happened so far:

- $810M spent on admin costs, over $200M a year to run a program that does nothing.

- There is a cap of 2% or $850M for admin costs, so there is already legislation on the way to expand this cap so that the program can continue. Admin costs will only increase after projects are underway, because they need to be closely monitored.

- Inflation has been 25% since this was approved, and inflation for internet infrastructure has been 50%, so already only half of the infrastructure envisioned can be implemented; it will end up being more like 25% that sees the light of day once inflation and admin costs are accounted for.

There are many other examples. Look up EV charging stations. Look where ARPA funds have gone.

Frieren · 3h ago
> I do not know why you need to bring political ideolgy into this.

Proceeds to discuss politics...

whodidntante · 9m ago
Ironic, isn't it ?

I am not sure how to address such a blanket statement as "Hundred-million-dollar contracts with zero results. Conservative ideology is based on the idea that certain people are just above others" without providing counterexamples.

I can also find plenty of examples from the other "side"

Point is, our government system is broken and is not able to do or build anything except keep itself growing, and the fault is shared across the political spectrum.

The only thing the politics seem to be good for is to keep everyone appalled, in rage, and entertained with the idea that if only the "other" was not so inept, stupid, or ideological, things would be better.

Bread and circuses.

somenameforme · 1d ago
Undeserved and bloated government contracts are essentially a cornerstone of all big government (and one of the major arguments against it), it has nothing to do with any ideology.
_heimdall · 22h ago
I don't see this as a problem of only one party or one end of the political spectrum.

Congress is full of politicians getting rich off of investments that almost certainly are informed by insider information.

During the pandemic we saw plenty of examples across the political spectrum of those in charge pushing harsh rules and lockdowns on the public while ignoring them themselves.

The list could go on, but this isn't a conservative problem even if it may be more prevalent there.

roenxi · 1d ago
> Conservative ideology is based on the idea that certain people are just above others and they deserve more for free

Any ideology that accepts taxation - practically all of them - believes this. It is impossible [0] to come up with a system that taxes one group without accepting that there is another group who are above them (who impose & enforce the taxes) and a group that is more deserving of the wealth (hence the taxes).

As far as practical results go it isn't possible to describe a flat society where everyone is equal. It doesn't even work on a micro scale, let alone a macro one. And everyone has an opinion on what the ideal wealth distribution looks like too.

[0] Not technically impossible, an island of extremely obese rationalists who approximate friction-less spheres might be able to roll with the idea.

klabb3 · 22h ago
> Any ideology that accepts taxation - practically all of them - believes [the idea that certain people are just above others and they deserve more for free].

Only if you anchor the baseline of "deserve" to private property rights and open markets. It's a fine foundation for civilization, but it's still "just like your opinion man". You could have different viewpoints of deserving, such as strongest-wins: "If I can steal 'your' stuff, I deserve it". This is how things work in nature. On the other extreme, you can say "everyone deserves exactly the same" (as in equal outcome). For the former, being imprisoned for theft is an intervention in their moral code, whereas for the latter, protecting free (in their view exploitative) markets is an intervention. Property rights fundamentalism is kind of radical centrism in the grand scheme of things.

roenxi · 22h ago
>> Any ideology that accepts taxation - practically all of them - believes [the idea that certain people are just above others and they deserve more for free].

> Only if you anchor the baseline of "deserve" to private property rights and open markets.

Say someone has an ideology where they believe 70 year olds shouldn't have to work and need to be provided for by the community. What aspect of that would be anchored to private property and open markets? You could believe that and also believe in communal property and closed markets.

palmfacehn · 19h ago
There's nothing in the pure argument for private property which contradicts a moral obligation to support the downtrodden. The purists would only insist that the support be offered voluntarily. I'm somewhat disappointed to see the assumption to the contrary repeatedly made on this site.

Advocacy for private property doesn't start from a motive of greed. Rather, proponents regard it as the best way to responsibly manage scarce resources and create abundance. After all, there is no charity without abundance.

Private property and open markets create the incentives for value creation and increased productivity. While central planning may be able to achieve these ends theoretically, in practice we find that the incentives of the bureaucrats and insiders often limit productive opportunities. The "Economic Calculation Problem" is another huge barrier for successful state management.

So while the sales pitch for socialized management of resources often involves "equality of outcome", it often results in the lowering of productivity generally. Worse yet, centralized bureaucratic control of scarce resources incentivizes favors to large industrial concerns, politically connected classes and elites.

Obviously there will be those who disagree with this analysis. I only object to the misstating of intent.

klabb3 · 21h ago
Right, of course. But there's a difference in moral code on: who is the rightful owner of the surplus generated, and how should it be redistributed.

a) private ownership, even charity is directly immoral (Ayn Rand)

b) private ownership, no state redistribution but charity is morally compelled (religious conservatism)

c) private ownership, reluctantly accept state redistribution to prevent social- or system tragedy. (US republicans and democratic establishment)

Note that all of the above are what I'd call property rights fundamentalists. Then you have:

d) mixed ownership: surplus morally belongs both to you, and the system that allowed you to do business in the first place (US progressive liberals + most of the world's centrists)

Here's where the rest of the world generally reside. There's endless diversity and constant debate about the how, the who and the how much.

The problem with the US "left" is that it's split: the liberal progressives are in (d) but the establishment remain in (c): they're subconsciously conceding to property rights fundamentalism while advocating for redistribution, which puts them in a constant uphill battle to "immorally" extract value from the rightful deserving class of billionaires and business owners. That's also why democrats are considered right-wing by policy compared to the majority of the Western world.

Personally, I think this is why Bernie, Mamdani, AOC etc gets subject to such disproportionate attacks. Fiscal policy-wise they're pretty meh (just go back a few decades in the US and you'll find the same), plus their real-politik influence is also pretty mid. BUT, the real issue is they're shifting the moral baseline from (c) to (d), which is an extremely dangerous perspective shift for established interests. Rhetoric like "pay your fair share" is unacceptable to the hegemony.

jonplackett · 1d ago
It’s a long running problem we (uk, USA, eu) have - trying to solve real world problems with pen and paper policies.
oaiey · 22h ago
Stupid thing: Sometimes it works ;)
newsclues · 1d ago
In Canada, with a state broadcaster, you can get credit from low-information voters just by announcing a program without ever implementing or funding it. Pravda!
mbgerring · 1d ago
This is suicide:

> We need to build and maintain vast AI infrastructure and the energy to power it. To do that, we will continue to reject radical climate dogma and bureaucratic red tape, as the Administration has done since Inauguration Day. Simply put, we need to “Build, Baby, Build!”

mbgerring · 1d ago
This is how you know these people are not serious:

> Prioritize the interconnection of reliable, dispatchable power sources as quickly as possible and embrace new energy generation sources at the technological frontier (e.g., enhanced geothermal, nuclear fission, and nuclear fusion). Reform power markets to align financial incentives with the goal of grid stability, ensuring that investment in power generation reflects the system’s needs.

None of these are "dispatchable power sources." Grid-scale batteries, for which technology and raw materials are abundant in the United States, are dispatchable power sources, and are, for some reason, not mentioned here.

What they will actually do is eviscerate regulations to allow for more construction of natural gas power plants, but they won't mention that here, because any sane person would immediately identify that as a terrible idea.

twright · 1d ago
Additionally, the DOE has been pulling funds from interconnect projects that have been years in the works! Apparently there is a modest gas turbine shortage so even natural gas won’t get that far. I’d say it’s a great way to hit a hard wall fast but again, they are not serious. We’re gonna get nowhere fast, maybe even drift backwards a bit.
zamadatix · 1d ago
Nuclear fission is most often categorized as dispatchable, though it's typically in with the slowest of that group when it is. Of anything out of this administration, a push for more fission power might be the thing I agree with the most as well though, so perhaps that's biasing my read.

Commercial nuclear fusion is just a dream at this point. We might as well debate whether my private island has enough room for an airplane runway or not instead. But hey, I'm not against continuing fusion research if that's all they mean.

EGS I'm far less familiar with, but it'd be odd for the current admin to agree with the previous admin unless they had to https://www.energy.gov/sites/default/files/2022-09/EERE-ES-E... and it would, on the surface, make sense that one could design these systems to support flow-rate variability?

Grid scale batteries are power storage, not power sources. I do agree it's a damn shame they aren't brought up elsewhere in the report though. Same as anything else about renewables missing in tandem with that.

ineedaj0b · 1d ago
you can figure out with math that climate change is solvable with tech advancement. also the US has pretty clean energy, and likely always will, because of fear of future administration changes.

one should be more worried about china or india polluting than the US.

IAmGraydon · 15h ago
Sorry to burst your bubble, but the US has higher greenhouse gas emissions per capita than both countries you mentioned. Also, in terms of just total greenhouse gasses, the US also emits more than India and is only outpaced by China.

https://en.wikipedia.org/wiki/List_of_countries_by_greenhous...

https://en.wikipedia.org/wiki/List_of_countries_by_greenhous...

wormius · 1d ago
Looking forward to Mechahitler ~~elected~~ first AI president in 2049.

Sorry... not elected... sworn in... with the book 'To Serve Man'

hopelite · 1d ago
It’s far more likely that it all go down along the lines of the scene from Terminator … what was it? … (paraphrasing) “You better turn on Skynet right now, General, because I need to look good to the defense industry”
stpedgwdgfhgdd · 17h ago
Remarkable that an official White House plan uses the term “America” instead of United States of America.
sschueller · 17h ago
Nothing new. There is the MLB World Series [1], World Wrestling Entertainment and many others that don't have anything to do with the "world".

[1] https://en.m.wikipedia.org/wiki/World_Series

tsunagatta · 17h ago
I do not see the connection, those are not arms of the U.S. government.
dakial1 · 1d ago
"Ensure that Frontier AI Protects Free Speech and American Values AI systems will play a profound role in how we educate our children, do our jobs, and consume media. It is essential that these systems be built from the ground up with freedom of speech and expression in mind, and that U.S. government policy does not interfere with that objective. We must ensure that free speech flourishes in the era of AI and that AI procured by the Federal government objectively reflects truth rather than social engineering agendas."

It seems that everywhere free speech is mentioned today, the intent is to do exactly the opposite....

antonvs · 1d ago
> objectively reflects truth

Someone desperately needs a philosophy course…

andsoitis · 1d ago
Isn’t this just another version of the sentiment “follow the science”?

It isn’t like they’re gonna force AI companies to have their systems declare that God created the universe a few thousand years ago…

No comments yet

loco5niner · 1d ago
Quick exercise: just scrolling down, count how many pictures don't highlight one man front and center.

https://www.ai.gov/

Then click "fact sheets", "remarks", and "articles". He's everywhere.

That's how unbiased this is going to be.

(hint, the answer is one)

soulofmischief · 1d ago
I don't think a term yet exists for the practice of putting your name all over documents and media with such frequency that cleanup during following administrations takes several years, undermining and delegitimizing the new administration for your followers in the process.

Some kind of sick soft power move that I expect we will be seeing a lot more of.

No comments yet

pjc50 · 1d ago
What's American for "juche"?
educasean · 1d ago
MAGA
Cornbilly · 1d ago
Personality cults are really quite weird, aren't they?
baron816 · 1d ago
> We need to build and maintain vast AI infrastructure and the energy to power it. To do that, we will continue to reject radical climate dogma

> This initial phase acknowledges the need to safeguard existing assets and ensures an uninterrupted and affordable supply of power. The United States must prevent the premature decommissioning of critical power generation resources

Yeah, they're going to do all they can to block cheap renewables and give handouts to fossil fuel companies.

burkaman · 1d ago
For example, today they defunded a critical transmission line in order to block renewable power from getting to people who need it: https://www.energy.gov/articles/department-energy-terminates...
giantg2 · 1d ago
I wonder what the actual cutoffs are. The article is scarce on details but does seem to point to one political side or the other acting politically - either fast tracking approval for a fiscally irresponsible project, or pulling funding because they disagree with renewables.
burkaman · 1d ago
The "fast tracking" accusation is not credible without evidence. This loan took years to go through the LPO process, here's an example filing from 2022: https://www.federalregister.gov/documents/2022/12/16/2022-27....
giantg2 · 1d ago
"The "fast tracking" accusation is not credible without evidence."

It was approved in a flurry of approvals between the election and inauguration. So that's not direct evidence. However, do you have direct evidence that the revocation was just based on it being renewable? Sure, there's circumstantial evidence of it. If there's circumstantial evidence for both positions, we need some hard evidence to support either one.

burkaman · 1d ago
> It was approved in a flury of approvals between the election and inauguration.

Do you have evidence of this? Maybe a list of all LPO approvals so we can look for increased frequency after the election? It would also help to know the average LPO timeline, so we could look at when the grain belt express applied and see if it was approved unusually quickly.

> However, do you have direct evidence that the revocation was just based on it being renewable?

Not exclusively, but there is evidence that opposition to green energy was one of the major factors. See Josh Hawley's statements, where he repeatedly highlights the "green" aspects and likes to call it a "green scam": https://www.hawley.senate.gov/hawley-wins-cancelation-of-gra..., https://x.com/HawleyMO/status/1943408766629650779. The current Secretary of Energy is also strongly opposed to expansion of renewable energy, see this recent speech: https://www.energy.gov/articles/secretary-energy-chris-wrigh....

giantg2 · 1d ago
"Not exclusively"

You could run the numbers to show whether it's financially responsible, instead of again relying on circumstantial evidence. We can also look at the other approved LPO grants, like the one for sustainable aviation fuel.

Here's an article about how they changed their methods to push more loans through due to political concerns.

https://cen.acs.org/energy/US-cleantech-loan-program-sprints...

giantg2 · 1d ago
They're restarting some nuke plants too. This seems like a decent idea given the power demands of data centers.
oxryly1 · 1d ago
Which ones? AFAIK the nuke plants that shut down are very cost ineffective.
giantg2 · 1d ago
I believe Three Mile Island is restarting specifically to power planned data centers.
philipkglass · 1d ago
HN story from last year:

"Three Mile Island nuclear plant restart in Microsoft AI power deal"

https://news.ycombinator.com/item?id=41601443

dismalpedigree · 1d ago
Palisades in Michigan is restarting also
cowpig · 1d ago
Within the scope of US energy, restarting old nuclear plants has a negligible impact. Less than 1% net gain.

Building new ones will take 10+ years, and the climate crisis is a today problem.

Also, at the rate technology is changing, building new nuclear plants seems silly.

giantg2 · 1d ago
"Building new ones will take 10+ years, and the climate crisis is a today problem.

Also, at the rate technology is changing, building new nuclear plants seems silly."

You're right, technology is changing quickly. There are plenty of new reactor options, including small modular types, which would be faster to build. This doesn't seem silly.

Workaccount2 · 1d ago
Someone needs to start naming the largest solar and wind farms after Trump.

"We are proposing the largest solar farm in the world, in order to capture the sheer magnitude and capability of the most powerful solar plant to date, we propose calling it the Grand Trump Energy Generation Field"

The dude's ego would prevent him from blocking it.

burkaman · 1d ago
Wouldn't work because the announcement of the name is what makes him happy, he doesn't care about following through to make the thing actually succeed. Nearly every business with his name on it has failed.
andsoitis · 1d ago
At first I thought your plan was brilliant. But then I realized he would try to sue for unlicensed use of the brand OR defamation OR both.
wredcoll · 1d ago
When I think "vast new sources of energy for ai", my mind immediately goes to Coal Power!
jiggawatts · 1d ago
"But we do know it was us that scorched the sky." -- Morpheus
andsoitis · 1d ago
> Yeah, they're going to do all they can to block cheap renewables and give handouts to fossil fuel companies.

Many AI people in positions of influence have argued that AI will all but solve the climate crisis.

Viewed from that angle, it would make sense that you wouldn't care about how dirty the sources of energy are on the way to AGI, because once there, the climate crisis will be magically solved. Somehow.

Henchman21 · 1d ago
Some people really need to re-watch The Matrix
dwaltrip · 1d ago
They aren't neutral on energy sources or simply agnostic to environmental impact.

They are anti-renewable, because renewable = woke. Tribal politics at its best.

andsoitis · 1d ago
> They

Who is the “they” in your sentence?

dwaltrip · 19h ago
The current administration, maga folks, the freshly anti-woke techies. Etc

Top level comment was referencing the energy policies mentioned in the linked site, which of course left out renewables entirely.

reactordev · 1d ago
I'm not surprised considering the White House itself released a deep fake. Which is going to be interesting considering the last sentence of this.
pyuser583 · 21h ago
> AI is the first digital service in modern life that challenges America to build vastly greater energy generation than we have today.

I'm glad it's focusing on the challenges that are proven, not speculative.

etothet · 1d ago
The design of this website is horrendous. Poor information density, inconsistent vertical spacing, haphazard font choices, lists that aren’t lists, and more.
walls · 1d ago
It's like those email scams, it's meant to appeal to a specific kind of person.
kelseyfrog · 1d ago
So the US government sees AI as a sphere of propaganda and wants AI output to align with political goals. Great. AI is going to be aligned, but in the worst way possible.
crinkly · 1d ago
Nailed it.

No technology scares me. It's the hands it is in.

xAI mechahitler was a warning.

academia_hack · 1d ago
Everyone remotely competent in AI in the federal government that I know has quit in disgust over the past 6 months. I know zero talented AI people who are looking to take a cut in pay, benefits, and career stability to sign up for a new job working for this administration.

As a result, there's zero chance the sensible parts of this strategy won't just end up co-opted into multi-billion-dollar Palantir contracts to deliver outdated Llama models behind some clunky UI with the word "ontology" plastered on every button.

Seb-C · 1d ago
After winning the first Cold War with a notable race to the moon, it now feels like the USA is doing everything to lose the second one with a race to the bottom.

As a non American, I just hope they don't take too long to reach it. While I'm thankful for the positive influence that the USA had in the last century, lately I feel like they only have a negative one, notably by poisoning our societies with unregulated big tech and social networks.

Whatever comes next, I can only hope that this wave of AI generated falsehoods is the last straw.

kmlx · 1d ago
> lately I feel like they only have a negative one, notably by poisoning our societies with unregulated big tech and social networks.

do you feel like the censorship/regulation/big-state mantra that European governments are fans of is also poisoning our societies?

> Whatever comes next, I can only hope that this wave of AI generated falsehoods is the last straw.

the AI wave is just beginning.

FirmwareBurner · 1d ago
News headlines in 10 years will be like:

  Europe seeks investments to catch up lost ground in AI

  How Europe lost the AI race

  Europe seeks to invest in AI sovereignty after US & China dominated the world
Basically a repeat of how the EU lost the tech and space races, but with AI this time. They just don't learn.
preisschild · 1d ago
> lately I feel like they only have a negative one

As a European, I still liked them helping out Ukraine

dangoodmanUT · 1d ago
There’s a lot of text on this page that’s slightly off center and it bothers me IMMENSELY
z5h · 1d ago
When is a good time (in the timeline of mankind) to stop being adversarial with AI?
giantg2 · 1d ago
The real question is whether mankind can refrain from being adversarial. The ability must precede the timing.
maddmann · 1d ago
I wonder how many big tech lobbyists got their hands on this and shaped it?
Aeroi · 1d ago
LOL, nice try, Sacks. Also wonderful to see our president doesn't let his ego get in the way of an action plan by plastering his name all over something he had nothing to do with.
energy123 · 1d ago
No mention of making skilled immigration easier. The majority of individuals with an IQ above 140 currently live outside of the United States.
weberer · 1d ago
Those people would already qualify for an O-1A visa.
simpaticoder · 1d ago
Terry Gilliam needs to reboot his Brazil (1985) franchise. When you integrate half-baked AI into life-and-death decisions, you get tragic absurdity on an inhuman scale. It may even be stable.
anon7000 · 1d ago
It’s tough to have “human flourishing” (which they mention as a goal) when things like health insurance are in such a shit situation. AI could help health insurance deny more claims, for sure. That’s not human flourishing though. (And my biggest gripe with capitalism is that at a certain late stage in many sectors, human flourishing is completely at odds with making profit.)
GuinansEyebrows · 1d ago
They’re not talking about universal human flourishing. Just the subset of the investor class who stand to benefit.
jrockway · 1d ago
The investors can invest in the healthcare industry. More consumers of healthcare = more money for them!
gsky · 23h ago
I'm buying more AI stocks
whynotminot · 23h ago
Which ones? Most of these AI companies are still not public.
gsky · 23h ago
Nvidia & Broadcom
jjcm · 1d ago
I find it fascinating that the webpage has more pixels focused on Trump than AI.
elAhmo · 1d ago
Landing page is mainly pictures of Trump from various angles and orders and media articles about this administration pushing for things that were lobbied by close allies of this convicted felon.

Not sure why would anyone pay any attention to what this administration says, when it can change in a very short time.

daniel_iversen · 1d ago
Looks like 'safety' isn't a word on the front page.
TheAceOfHearts · 1d ago
The two best parts for me are:

1. A push towards open source / open weight AI models.

2. A push towards building more high quality datasets.

There's no mention of studying and monitoring the social impact of AI, but I wouldn't have expected otherwise from this administration. I suspect that we may look back on this as a big mistake, although I'd really love to be proven wrong.

At a press conference today Trump seemed to suggest having minimal restrictions related to copyright for AI researchers [0]. It's not clear if big AI companies will just get an administrative pass to do whatever they want / need in order to compete with China, or if we can expect some kind of copyright reform in the next few years.

[0] https://twitter.com/Acyn/status/1948138197562855900

mortarion · 1d ago
All the big AI providers are going to have to deploy an American version and an international version after this.

The EU will never approve of an LLM that has been aligned to regurgitate US propaganda as truth.

sunaookami · 1d ago
>The EU will never approve of an LLM that has been aligned to regurgitate US propaganda as truth.

Huh??? That's exactly what is happening right now. ChatGPT, Claude, Gemini, etc.

khalic · 1d ago
Well there’s an ugly website :D
toddwprice · 1d ago
Reads like somebody prompted ChatGPT: "Write an AI action plan for the United States in the voice of Donald Trump."

<returns stilted text at 3rd grade reading level>

"Make it sound smarter".

And voila, ai.gov is born.

evolve2k · 1d ago
Quick, what else can we announce?
sciencesama · 20h ago
All talk no plan !!
megamix · 23h ago
AI? Never heard of him
ActionHank · 23h ago
I wish him well.
j_timberlake · 1d ago
Remember when Trump accepted an endorsement from an AI Taylor Swift? That's the level of competency I'm expecting here.
siliconc0w · 1d ago
Weird - no mention of harassing the international students that make up the majority of AI researchers or blocking solar, the only power generation that is currently deployable.
panny · 20h ago
>Winning this race will usher in a new era of human flourishing

Source? All I've seen as a result of AI is something to take the blame for layoffs. Well, that and a whole lot of copyright infringement laundered through AI.

gtoast · 1d ago
LOL Its awesome its the Trump administration guiding us through this delicate and important issue. Should turn out great.
__loam · 1d ago
0 mention of the word copyright
2OEH8eoCRo0 · 1d ago
What speech? When AIs give a response it's protected speech?
TZubiri · 1d ago
This web template would make for a great minimalistic wedding invitation
Ancalagon · 1d ago
Defunding the education department will definitely help with that “skilled workforce” bit. Although I know Sacks doesn’t actually give a crap.
dinkumthinkum · 1d ago
I think you are conflating the concept of "education" with the US Department of Education. The U.S. spends more per pupil than any other nation. In fact, schools such as Baltimore's public schools receive more funding than almost any others and have atrocious outcomes. This would be like saying someone who is against a "Ministry of Truth" wants propaganda because "truth" is in the name.
ivape · 1d ago
Why does this look like a website about weddings?
rafram · 1d ago
It does have a certain Sam & Jony feel to it: https://openai.com/sam-and-jony/
Cornbilly · 1d ago
These people have completely disappeared up their own ass.
ivape · 1d ago
Ugh, can’t believe I missed their wedding.

Also, the desktop version of the site is the one in question. Mobile looks like a pdf.

actionfromafar · 1d ago
I didn't know openai dabbled in satire...
Scubabear68 · 1d ago
Pretty soon all ML related government contracts are going to go to a company named Trump Intelligence.
roschdal · 1d ago
"America is in a race to achieve global dominance in artificial intelligence (AI). Winning this race will usher in a new era of human flourishing, economic competitiveness, and national security for the American people."

What about the people who are not American?

socketman · 1d ago
Well, it is called the US AI Action Plan, to be fair
trkaky · 1d ago
“Build, Baby, Build!”
bgwalter · 1d ago
The summary is that they want to eliminate regulations to facilitate the steal and disregard consumer rights.

And build data centers, as emphasized for the 100th time since inauguration.

If Murdoch succeeds with his recent WSJ campaign and gets Trump to resign or similar, brace for Vance and the AI bros. These schemes are literally devised by people who funded cannabis and Adderall distribution sites and have done nothing noteworthy.

derelicta · 1d ago
Is this one gonna promote mecha Hitler?
cess11 · 1d ago
Chatbots are more addictive than so-called social media, with few exceptions like TikTok.

They also solve the problem of publicity. When someone goes insane on Facebook it's rather visible, unlike when someone goes insane with a chatbot. Unless they publicise their descent, like Geoff Lewis seems to do.

Which means it'll be harder to detect when people are being deliberately manipulated, like it was pretty obvious which role Facebook played in e.g. Myanmar and Ethiopia.

How would you act if you wanted to make sure that the people that can perform the technical proliferation won't revolt against it?

yapyap · 19h ago
bull shit
lovich · 1d ago
There’s this whole section about biosecurity and how AI is going to help malicious actors synthesize nucleic acids (gotta get the word count up, I guess, is why they don’t say DNA)

Then in the recommended policies it references multiple times that there will be nucleic acid testing set up to catch malicious “customers”

Is this policy targeted towards the Covid lab leak conspiracy or are they just aiming for officially collecting everyone’s DNA samples?

Maybe both

XorNot · 1d ago
No it's a longtermism ideology thing. For whatever reason they're terrified of some idea of "garage bio warfare" but have absolutely no understanding of how biology actually works so they've zeroed in on the idea that it's just: synthesize DNA -> superplague.
foxglacier · 1d ago
You must have not got the memo. The covid lab leak conspiracy theory turned out to be probably true and an actual conspiracy.
lovich · 1d ago
> Is this policy targeted towards the Covid lab leak conspiracy or are they just aiming for officially collecting everyone’s DNA samples?

I did not use the word “theory” in my comment

jasonlotito · 23h ago
Side Note: Trump relying on far left, woke technologies once again is amusing.
benrutter · 1d ago
I'm gonna ignore talking about the abysmal current US administration and just share my immediate experience of using this site, because it was funny to me:

- I open the site on Android mobile: "swwwoooooosh" a big slow animation reveals the text

- After reading the text I think I'll take a look at the home page: "swwooooooosh" the same animation rolls again as I load a very strange full-screen image of Trump in black and white

- I click the hamburger menu icon: "swwoooooosh" the four menu items slowly slide into full screen

- There is no visible option to close the menu; I could probably refresh, but I decide I'm done here

jacobgkau · 1d ago
Seeing as there are only four items in the menu, it seems that if you open the menu and want to get back to the page you were already on, you're supposed to just click the title of that page.

The animations are a bit much. The scrolling horizontal rules repeating the words "AMERICA'S AI ACTION PLAN" underneath each "Pillar" header were confusing for a brief moment.

yard2010 · 1d ago
The closing mechanism was 2-15 prompts away. Don't forget that this is still a government website.
can16358p · 1d ago
> Removing Red Tape and Onerous Regulation

This is the sole reason the EU will never, ever catch up in big tech unless it gets rid of regulations.

mapcars · 1d ago
And this is also the reason US will never put customer rights above big tech interests.
4gotunameagain · 1d ago
We're fine with not catching up to the likes of facebook, openai et al while maintaining very livable wages with actual work life balance & no need to toil ourselves away to have things as basic as health insurance.
nxm · 1d ago
Don't forget the sky-high taxes and energy costs as EU is quickly regulating itself into lower quality of life
preisschild · 1d ago
Doesn't mean we should deregulate everything
Shaddox · 1d ago
Probably the only thing holding the reins on AI takeover is the lack of legislative coverage. Once the AI user is not held responsible for the AI's actions, (and let's face it, this is the likely outcome. It's not like politicians are going to decide in favor of regular people) then we are going to have a new social class of completely useless people.

What does the government plan to do with them? Kill them off? Because if they leave them to die, they will revolt.