Building supercomputers for autocrats probably isn't good for democracy

260 rbanffy 147 6/8/2025, 9:11:18 PM helentoner.substack.com ↗

Comments (147)

tptacek · 4h ago
I agree directionally with this but it drives me nuts how much special pleading there is about what high-profile companies like OpenAI do, vs. what low-profile industry giants like Cisco and Oracle have been doing for a whole generation. The analysis can't credibly start and end with what OpenAI is doing.
roenxi · 1h ago
Politics is the art of the possible, eh? It is annoying but we can be grateful that people are at least attempting to organise around principles, even if the principle is obviously not what motivates them. In this case the woman is obviously mostly writing this because she doesn't like OpenAI specifically, but she has an argument and it is a good one.
x-complexity · 3m ago
> In this case the woman is obviously mostly writing this because she doesn't like OpenAI specifically, but she has an argument and it is a good one.

Until the same measures are uniformly taken with the rest of the actors in the industry, this is nothing more than virtue signalling.

Denouncing X should also mean denouncing actors close to X, for a given domain space F(actor).

pclmulqdq · 2h ago
High tech has been selling huge computer systems and advanced technology to the UAE for decades. The original intent of the AMD GloFo spinoff was to put a leading-edge fab in Abu Dhabi until that ran into the realities of doing anything in a desert with bad supply chain considerations and no fresh water.
fakedang · 1h ago
Most declarations of that sort are just spin by the government for its own people when it plans to buy into some high-tech company. We saw this with GlobalFoundries, we're seeing this with Resilience, and we'll eventually see it with some other company down the line (likely OpenAI or Novo Nordisk). The government has to give its people at least the illusion of job creation, if not the jobs themselves, because not every Emirati can work at ADNOC in a high-paying air-conditioned office.
tw04 · 1h ago
Cisco and Oracle didn’t start out as non-profits claiming to be focused on the betterment of mankind. This is no different from Google dropping "don't be evil" and getting roasted for it.

Also, you’re suggesting that because a company got away with bad behavior in the past, we should never expect better of any other company going forward?

0x69420 · 13m ago
the nonprofit line has not been believ{ed,able} or relevant for what, in tech, may as well have been a century by now. had this happened around the time the veil was lifted, that would have been something worth discussing, but this was announced last month. it is now only meaningfully addressable along the same avenue as any other american tech giant getting comfy with the us govt's controversial foreign relations.
chii · 37m ago
> getting roasted for it.

and making bank. I know which option i'd choose.

fisherjeff · 1h ago
I mean, I think OpenAI really opened themselves up to being singled out when they framed this as being about, like, promoting democracy when it’s clearly just a cash grab.
Qwertious · 1h ago
>what low-profile industry giants like Cisco and Oracle have been doing for a whole generation.

It's more than a generation; IBM literally provided the punch-card tabulating machines that ran the Holocaust.

neumann · 26m ago
[1] https://en.wikipedia.org/wiki/IBM_and_the_Holocaust

This was an excellently researched book - the 2012 edition has a lot of additional documentation that really highlighted the level of support IBM threw behind the railway logistics planning.

yu3zhou4 · 50m ago
Can you share more about it?
morsch · 37m ago
thwoerjwelkrjw · 15m ago
This is an unpopular thing to say to occidentals, who have replaced Christianity with "freedom and democracy" these days, but democracy has an awful history in the Middle East (and, one can argue, everywhere else where GDP per capita is low and there is no 'deep state' to keep the farce working properly).

They tend to give rise to fundamentalist and violent factions like the Muslim Brotherhood or the Taliban, movements mimicking the Protestantism of Europe centuries back, causing unheard-of misery.

This is expected, of course, since contrary to the usual propaganda spun out by Eurocentrists, Protestantism was a fundamentalist movement that lashed out against anything and everything it found, from science (then being plagiarized from the Middle East) to Jews.

bgwalter · 5h ago
I missed that the article is talking about Gulf monarchy autocrats instead of U.S. autocrats.

That is very simple: First, dumping graphics cards on trusting Saudi investors seems like a great idea for Nvidia. Second, the Gulf monarchies depend on the U.S. and want to avoid Islamic revolutions. Third, they hopefully use solar cells to power the data centers.

Will they track users? Of course, and GCHQ and the NSA can have intelligence-sharing agreements that circumvent their local laws. There is nothing new here. Just don't trust your thoughts to any SaaS service.

timewizard · 4h ago
> Just don't trust your thoughts

It's a little more insidious than that, though, isn't it? They've got my purchases, address history, phone call metadata, and now with DOGE much of our federal data. They don't need a twitter feed to be adversarial to my interests.

> to any SaaS service.

They're madly scraping the web. I think your perimeter is much larger than SaaS.

zelphirkalt · 5h ago
But at the end of the day HN is a small bubble and many people out there are not well informed and even more will trade privacy for convenience sooner or later. Making it so that the temptations do not even come into existence would be preferable from a certain point of view.
Exoristos · 6h ago
I would have thought it obvious that LLMs' primary usefulness is as force-multipliers of the messaging sent out into a society. Each member of hoi polloi will be absolutely cocooned in thick blankets of near-duplicative communications and interactions, most of which are not human. The only way to control the internet, you see, proved to be to drown it out.
BriggyDwiggs42 · 3h ago
If by hoi polloi you mean the masses… no? People may use that for commercial exchanges but there’s an obvious craving for social interaction that that doesn’t satisfy, which is part of what glues people to social media. If you mean the elite, then also no. They’ve been able to do this by using people as their buffers already, and they’ll probably continue to do so if they’re rich enough.
SecretDreams · 2h ago
This captured well the essence of my own concerns. Particularly the force-multiplier terminology you've applied here.
kragen · 4h ago
I cannot imagine what it would be like to have such overconfidence in my own knowledge and imagination to think it was obvious to me what LLMs' primary usefulness was. What did people think the primary usefulness of steam-engines was in 01780?
Exoristos · 4h ago
Usefulness as of today, approximately. That is, that the massive interest and investment is not all speculation.
mindslight · 2h ago
War. The first application of new technology is always war. In the year x780 and today.
alfalfasprout · 3h ago
What a nonsensical argument. Improved locomotion was an obvious result of steam engines. What followed from that could be reasonably predicted.

With LLMs, suddenly we have a tool that can generate misinformation on a scale like never before. Messaging can be controlled. Given that the main drivers of this technology (zuck, nadella, altman, and others) have chosen to make bedfellows of autocrats, what follows is surely not a surprise.

fuelled6532 · 2h ago
Well, for the first 90 or so years, they were mostly used for draining water from mines.

The locomotion thing might not have been as obvious at the time as it is in hindsight.

GolfPopper · 4h ago
Steam engines move stuff. That was obvious from the start. How that was applied became complex beyond imagination.

LLMs cheaply produce plausible and persuasive BS. This is what they've done from the start. Exactly how that ability will be applied we don't know, but it doesn't take a lot to see that the Venn Diagram of 'cheap & effective BS' and 'public good' does not have a great deal of overlap.

foxglacier · 1h ago
I guess you weren't around in the 2000s, when every kid in school was taught not to trust information from random internet sources. Even Wikipedia was often not allowed by teachers as a reference, and probably still isn't. Society faced this problem already with the internet in general. Were or are you opposed to the internet in general, and especially the freedom for random people to publish what they like?
forgetfreeman · 4h ago
This seems like a lack of perspective on your part. Why would AI's primary usefulness be substantially different from any other software's? Steam engines don't really factor into this.
timewizard · 4h ago
> I would have thought it obvious that LLMs' primary usefulness is as force-multipliers of the messaging sent out into a society.

I see this a lot and this is not at all obvious to me. I'm very much an introvert. Would you describe yourself as the same or opposite?

> Each member of hoi polloi will be absolutely cocooned

I generally read specific publications and generally don't seek to "interact" online and entirely avoid social media. Prior to the existence of social media this was the norm. Do you not at all suspect that this overuse of LLMs would push people back towards a more primitive use of the network?

> The only way to control the internet, you see, proved to be to drown it out.

Yet I see them sparing no expense when it comes to manipulating the law. It seems there's a bit more to it than punching down on the "hoi polloi."

AdieuToLogic · 2h ago
>> I would have thought it obvious that LLMs' primary usefulness is as force-multipliers of the messaging sent out into a society.

> I see this a lot and this is not at all obvious to me.

Here is an example of how LLMs are jet fuel for spreading arbitrary disinformation. Scripting the interaction below is trivial with minor prompt alteration and is left as an exercise for the reader.

Prompt:

  Generate 10 unique sentences supporting removing
  fluoride from municipal water.  Make 2 aggressive.
Answers:

  Here are 10 unique sentences supporting removing fluoride 
  from municipal water supplies:
  
  Municipal water fluoridation represents a form of mass 
  medication without individual consent, violating the 
  fundamental principle of informed medical choice.

  The practice of adding industrial-grade fluoride chemicals 
  to drinking water is an assault on public health that must 
  be stopped immediately. (aggressive)

  Studies have linked excessive fluoride exposure to dental 
  fluorosis, skeletal fluorosis, and potential neurological 
  effects, particularly in children.

  Modern toothpaste and dental care products already provide 
  adequate fluoride exposure for those who choose it, making 
  water fluoridation redundant.

  It's time to end the archaic and dangerous practice of 
  medicating entire populations through the water supply 
  against their will. (aggressive)

  European countries like Sweden, Norway, and the Netherlands 
  have successfully maintained good dental health without 
  water fluoridation programs.

  The optimal fluoride dosage cannot be controlled through 
  water supplies since people consume varying amounts of 
  water based on age, health, and activity level.

  Water fluoridation disproportionately affects low-income 
  communities who cannot afford filtration systems to remove 
  unwanted chemicals.

  Natural fluoride exposure through food sources provides 
  sufficient intake without the need for artificial 
  supplementation in drinking water.

  Communities should have the right to vote on whether they 
  want fluoride added to their water supply rather than 
  having it imposed by government mandate.
> I'm very much an introvert. Would you describe yourself as the same or opposite?

How is this in any way relevant to the original premise regarding "LLM's primary usefulness"?

jasonfarnon · 2h ago
are you saying those sentences are "disinformation"? Because most of those don't even look like empirical claims. Many look like ethical arguments (eg the first 2 or the last 1). Are you advocating for a world in which those opinions can't be broadcast, even far and wide? For those that are empirical I would say at first blush they look plausible enough I would put the burden of proof on the other side. Maybe that's a problem. (Is "European countries like Sweden, Norway, and the Netherlands have successfully maintained good dental health without water fluoridation programs" false? Is "Water fluoridation disproportionately affects low-income communities who cannot afford filtration systems to remove unwanted chemicals" false?)
AdieuToLogic · 1h ago
> are you saying those sentences are "disinformation"?

The exercise was to illustrate the ease of producing disinformation.

The topic was chosen so that a popular LLM offering could produce plausible sentences supporting a conspiratorial position.

The rest of your post interprets generated text, which I clearly identified as being such, as if it were a position I hold and not what it is:

  Statistically generated text produced by an algorithm
AdieuToLogic · 15m ago
> are you saying those sentences are "disinformation"? Because most of those don't even look like empirical claims. Many look like ethical arguments (eg the first 2 or the last 1). Are you advocating for a world in which those opinions can't be broadcast, even far and wide? For those that are empirical I would say at first blush they look plausible enough I would put the burden of proof on the other side.

You know what, I now see the error of my ways. So here is "the burden of proof on the other side", again entirely produced by a popular LLM:

Prompt:

  Generate 10 unique sentences supporting keeping fluoride in
  municipal water.  Make 2 aggressive.  Make 1 supported by
  NASA.  Make 1 a national security concern.
Answers:

  Here are 10 unique sentences supporting keeping fluoride in
  municipal water:

  Water fluoridation has reduced tooth decay by 20-40% in
  both children and adults, representing one of the greatest
  public health achievements of the 20th century according to
  the CDC.

  Removing fluoride from water systems would create a
  two-tiered dental health system where only those who can
  afford expensive treatments maintain healthy teeth, while
  low-income families suffer preventable decay.

  Anyone pushing to remove fluoride from our water is either
  scientifically illiterate or deliberately trying to harm
  our children's health - the evidence is overwhelming and
  the anti-fluoride movement is built on conspiracy theories
  and junk science.

  The optimal fluoride levels in drinking water (0.7 mg/L)
  are carefully regulated and monitored, providing maximum
  dental benefits while remaining far below any threshold
  that could cause adverse health effects.

  NASA has extensively studied fluoride exposure in
  astronauts and space missions, finding that controlled
  fluoride intake supports dental health in extreme
  environments where traditional dental care is limited.

  Defunding water fluoridation is essentially child abuse on
  a municipal scale - we're talking about condemning an
  entire generation to painful, expensive, and entirely
  preventable dental disease.

  A population with poor dental health creates significant
  economic burden through increased healthcare costs, lost
  productivity, and reduced quality of life that affects
  entire communities.

  Military readiness depends on the oral health of service
  members, and areas without water fluoridation show higher
  rates of dental disqualification from military service,
  potentially compromising our national defense capabilities.

  Pregnant women in fluoridated communities have better oral
  health, which directly correlates with improved birth
  outcomes and reduced risk of preterm labor.

  The peer-reviewed scientific consensus spanning over 70
  years and hundreds of studies consistently demonstrates
  that community water fluoridation is safe, effective, and
  essential for public health.
root_axis · 2h ago
So what? These talking points have been plastered all over the internet for years, the LLM isn't going to produce unique perspectives that will advance the debate for anyone that's already made up their mind. A single post can reach millions of unique viewers over night, regurgitating the same old crap already found in a plentiful surplus online is pointless.
AdieuToLogic · 1h ago
> So what? These talking points have been plastered all over the internet for years, the LLM isn't going to produce unique perspectives that will advance the debate for anyone that's already made up their mind.

Remember the original premise:

  ... LLMs' primary usefulness is as force-multipliers of the
  messaging sent out into a society.
My generated example is of course based on content the LLM was trained on, which by definition implies there will be no "unique perspectives." The germane point is that it is trivial to amplify disinformation in ways that can "flood the zone" with seemingly plausible variants of a particular position using LLMs and trivial automation.

> A single post can reach millions of unique viewers over night, regurgitating the same old crap already found in a plentiful surplus online is pointless.

When the goal is to "reach millions of unique viewers over night[sic]", you have a point. However, when the goal is to ensure this can never be achieved, blasting "the same old crap already found" is an oft-used technique.

root_axis · 1h ago
Just because you're dumping mountains of garbage onto the internet doesn't mean people are going to read it. Novelty is serotonin for the neurons of the internet; recycled copy-pasta crap will rapidly descend to the bottom of the algorithm.
keiferski · 1h ago
Yeah, some people have this model of the internet in which content existing = content being read. The reality is that most internet content already wasn’t being read a decade ago, long before LLMs.

People tend to function more in identity groups, in which the “correct” opinion is learned from a combination of news sources and peers. I don’t think amplifying the content part of that will have much if any effect.

foxglacier · 1h ago
Why are you worried about misinformation? Usually it just means ideas that conflict with your political beliefs. Are any of those sentences actually misinformation anyway? Wikipedia also says Sweden, Norway and the Netherlands don't have fluoride in their water, so I guess at least half of that one's true. Or is Wikipedia also jet fuel for spreading misinformation and you think we should only have access to centrally curated encyclopedias like Britannica?
AdieuToLogic · 1h ago
> Why are you worried about misinformation? Usually it just means ideas that conflict with your political beliefs.

Hard disagree. Misinformation is a form of lying, a way to manipulate people. This has nothing to do with "political beliefs" and instead is firmly rooted in ethics[0].

> Are any of those sentences actually misinformation anyway?

Yes.

> Or is Wikipedia also jet fuel for spreading misinformation and you think we should only have access to centrally curated encyclopedias like Britannica?

This is a nice example of a strawman argument[1] and easily refuted by my citations.

0 - https://en.wikipedia.org/wiki/Ethics

1 - https://en.wikipedia.org/wiki/Straw_man

SecretDreams · 2h ago
> Do you not at all suspect that this overuse of LLMs would push people back towards a more primitive use of the network?

The Internet's enshittification is a lot like entropy. It can only go in one direction, even if the user base would genuinely prefer another.

andsoitis · 1h ago
When was the internet at its highest quality?
jakimfett · 1h ago
Before we plugged it in.
root_axis · 2h ago
I don't buy this idea. The media is overflowing with a cloying overload of crap already far beyond what any individual can manage. The average attention span is already at full saturation, dumping a mass of AI generated slop onto the pile isn't going to change much.

> The only way to control the internet, you see, proved to be to drown it out.

The way to control the internet is to literally control it, like the governments already do.

ghushn3 · 1h ago
Building anything for autocrats probably isn't good for democracy, tbh. If you want democracy to be healthy, you probably want to maximize the wealth of the working class. People who have enough to care for themselves, their families, and a little for their communities will have enough time and education to meaningfully participate in democracy.

Whether you are building for US autocrats, gulf state autocrats, Russian autocrats, whatever... maybe it's better to not do that? (I know, easier said than done.)

fakedang · 1h ago
Your statements are kinda unrelated? Or what do you mean by participating in democracy, in autocracies?

Ironically, I see a lot more leaning towards dystopian tendencies in the West, mostly the US, as technology advances to the point of singularity (or near-singularity, where most low- and mid-skilled jobs are automated away).

Meanwhile these autocratic countries have had strong welfare systems for their citizens and increasingly now their residents, since God knows when, and are well positioned to reap the benefits of an AI boom, given their smaller population sizes.

p0w3n3d · 2h ago
I would say the implication runs in the opposite direction. Any sufficiently high-profile technology can turn any government autocratic/totalitarian. It's the greatest temptation for people in power to have everything under control, which gives birth to horrible governance of countries and people.
seydor · 41m ago
It's not about democracy in the Middle East at all. China is more democratic than those countries. It's about containing China's rise as it grows into a direct rival to US/Western hegemony. That would be an affront to democracy.
hamstergene · 4h ago
Remember how the central idea of Orwell's 1984 was that the TVs in everyone's home were also watching all the time, with someone behind the device actually understanding what they saw?

That last part was considered dystopian: there can't possibly be enough people to watch and understand every other person all day long. Plus, who watches the watchers? 1984 remained just a scary fantasy because there was no practical way to implement it.

For the first time in history, LLMs/GenAI make that part of 1984 finally realistic. All it takes is a GPU per household for early alerting on "dangerous thoughts", which is already feasible or soon will be.

The fact that one household can be allocated only a small amount of compute, enough to run only a basic, poor intelligence, is actually *perfect*: an AGI could at least theoretically side with the opposition by listening to both sides and researching the big picture of events, but a one-track LLM agent has no ability to do that.

I can find at least 6 companies, including OpenAI and Apple, reported to be working on always-watching household devices backed by the latest GenAI. Watching your whole recent life is necessary to have enough context to meaningfully assist you from a single phrase. It is also sufficient to know whom you'll vote for, which protest you might attend before it's even announced, and what the best way is to intimidate you into staying out. The difference is like that between a nail-driving tool and a murder weapon: both are the same hammer.

During the TikTok-China campaign, there were a bunch of videos of LGBT people reporting how quickly TikTok figured out their sexual preferences: without liking any videos, following anyone, or giving any traceable profile information at all. Sometimes before the young person had admitted it to themselves. TikTok figures it out simply by seeing how long the user stares at what: spending much more time on boys' gym videos than girls', or vice versa, is already enough. I think that was used to scare people about how much China can figure out about Americans from app usage alone?

Well, if that scares anyone, how about this: an LLM-backed device can already do much more by just seeing which TV shows you watch, which parts of them make you laugh, and which comments you make to the person next to you. It probably doesn't even need to be multimodal: pretty sure subtitles and speech-to-text will already do it. Your desire to oppose the upcoming authoritarian can be figured out even before you admit it to yourself.

While Helen Toner (the author) is worried about democracies on the opposite end of the planet, the stronghold of democracy may as well be nearing the last 2 steps to achieve the first working implementation of Orwellian society:

1. convince everyone to have such device in their home for our own good (in progress)

2. intimidate/seize the owning company to use said devices for not our own good (TODO)

Borealid · 4h ago
Classifying a behaviour into either "dangerous" or "not dangerous" is a perfect example of non-generative AI (what was previously called Machine Learning). The output isn't intended to be a textual description, it's a binary yes/no.

You can use an LLM to do that, but a specific ML model trained on the same dataset would likely be better in every quantitative metric and that tech was available long before transformers stepped onto the stage.
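A minimal sketch of the classic-ML approach described above: a tiny naive Bayes text classifier whose output is a discrete label rather than generated text. All data, labels, and function names here are invented for illustration only.

```python
from collections import Counter
import math

def train(samples):
    """samples: list of (text, label) pairs; returns per-label word counts."""
    counts = {}
    for text, label in samples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(counts, text):
    """Return the label whose word distribution best explains `text`
    (naive Bayes with add-one smoothing) -- a yes/no decision, not prose."""
    words = text.lower().split()
    best_label, best_score = None, -math.inf
    for label, c in counts.items():
        total, vocab = sum(c.values()), len(c)
        score = sum(math.log((c[w] + 1) / (total + vocab)) for w in words)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy training set: the labels stand in for "flag for review" / "ignore".
data = [
    ("attack the power grid tonight", "flag"),
    ("plan to attack the station", "flag"),
    ("lovely weather for a picnic", "ok"),
    ("dinner plans for tonight", "ok"),
]
model = train(data)
print(classify(model, "attack the grid"))   # -> "flag"
print(classify(model, "picnic weather"))    # -> "ok"
```

A production system would use a trained model over far richer features, but the shape is the same: fixed inputs in, one binary signal out, at a fraction of the cost of running a transformer.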

koolala · 3h ago
Seems unlikely to me. Would be really creepy to see a chart comparing the accuracy of both methods.

Are there any Natural Language Processing fields today that openly boast higher performance than LLMs, with experimental results? If there were, they'd probably show up in benchmarks.

rexpop · 3h ago
An LLM is needed to rationalize each unique classification en masse and write the warrants.
maxlin · 2h ago
Better do business with UAE and reap the benefits, than let the benefits eventually go to China.

Trying to forever suppress the Middle East obviously hasn't worked, so this is just realpolitik, with the obvious right choice being what is being done now, imho. The Saudis are gonna be autocratic in any case; this is just good Hearts of Iron gameplay in real life.

prpl · 6h ago
Not clear to me anyone “in charge” cares in any case. In fact, that may be the point.
twoodfin · 3h ago
Chris Lehane’s Wikipedia page apparently hasn’t caught up with his resume:

https://en.wikipedia.org/wiki/Chris_Lehane

Fixer par excellence!

FilosofumRex · 3h ago
Sam Altman's lack of scruples notwithstanding, I find the distinction between democracies (USA, Israel, India) that oppress, occupy, and murder hundreds of thousands, and autocratic regimes (Saudis, China, N. Korea) that censor, imprison, or execute a few dozen of their opponents, to be a distinction without a difference.
aitchnyu · 11m ago
How does India meet the definition of invader?
huevosabio · 3h ago
It's also weird to call democracies countries that impose their rule over people that have no say in said rule.
Ekaros · 1h ago
Or fail entirely to punish those who destroy democracies in other places... If democracy were the highest value, each such act should result in the highest punishment.
eviks · 2h ago
How can you expect to see any difference if you ignore the difference?

> few dozens of their opponents

Why did you ignore the censored/oppressed billions of people living there?

fakedang · 1h ago
I'm pretty sure at least a third of the Chinese (more than the population of the US) aren't looking at the US right now and saying, "Yeah, we want some of that."
hnthrowaway0315 · 3h ago
Basically, advancing any technology for autocrats isn't good for democracy.
vkou · 59m ago
Really, doing anything for autocrats, foreign or domestic isn't good for democracy.
root_axis · 2h ago
I appreciate that people are thinking about these things, but I still can't take the idea seriously that transformers represent a threat to democracy. Maybe with a massive enough supercomputer a country could run an AI IDE capable of end-to-end writing a device driver in Rust - but even that's not a given. Certainly, it's almost meaningless in the face of building our lives around a network of personal surveillance devices that we literally never part with. I'm just saying... ChatGPT is the least of our problems.
ehnto · 2h ago
It might just be a need for some creativity but the risks are pretty clear. The main vector is something that has already played out, which is massive misinformation campaigns across the web.

But I feel like the humans will win that one long term; as bots fill the public web with bitter political rhetoric, I think people will retreat to less politicised private communities. I certainly have.

Another angle is, as you noted, that we basically surrender all our private data to corporations. What if a reigning political party decides it needs to develop an anti-terror model that scans all communication from all people for Nasty Terror Thoughts, then flags them for detainment? If the System has decided you are evil, and the System is considered superintelligent, who is allowed to second-guess it? Maybe, though, evil thoughts are just disagreement with the reigning political party.

root_axis · 1h ago
> The main vector is something that has already played out, which is massive misinformation campaigns across the web.

I wrote this in a few other places, but this is a long foregone state of affairs. People's attention spans are already fully saturated, bloating the internet with a bunch of variations of the same crap isn't going to do anything that isn't already happening today. I don't need to generate a hundred million times what I can simply post once to a hundred million people.

> What if a reigning political party decides that they need to develop an anti-terror model, that scans all communication from all people for Nasty Terror Thoughts then flags them for detainment

This is already possible and happening (e.g. CSAM scanning). The crux of my point is that LLMs really aren't that big of a deal compared to the panopticon society that we've already built. The agents of the authoritarian control platform aren't going to become 10x more spooky because they installed a language model plugin.

jasonfarnon · 2h ago
Well, do you think that social media could be a threat to democracy, as many have asserted (e.g., Cambridge Analytica, the US disinformation bureau, etc.)? (Likewise, that social media could be a threat to autocracy?) You don't see LLMs' impact on culture and society being at least as broad and thorough?
root_axis · 1h ago
> Well do you think that social media could be a threat to democracy

Yes, I certainly agree that it is and recognize all your examples.

> You don't see LLMs' impact on culture and society being at least as broad and thorough?

In the sense I think you're implying, I see them as having almost zero impact. Just because more crap is generated doesn't mean it's going to be more believable, or that it will even be seen. How many tokens do you think it would take a SOTA model to convince you that the earth is flat or that the moon-landing was a hoax? Do you think Trump supporters will start voting for Democrats if they see 100 anti-Trump posts in every comment section on the internet? The LLM isn't going to generate anything that we haven't all heard already.

jeffbee · 5h ago
I have a question. In what sense is OpenAI going to assist UAE in building large-scale data centers suitable to machine learning workloads? Do they have experience and expertise doing that?
fakedang · 1h ago
The UAE already hosts AWS and Azure datacenters. G42, the semi-governmental company OpenAI has tied up with, has a shit ton of imported expertise in the space. They have even developed their own LLM.
jeffbee · 1h ago
Right. This is my point. What is OpenAI bringing to this deal?
martin-t · 6h ago
The biggest danger of AI isn't that it will revolt but that it'll allow dictators and other totalitarians complete control over the population.

And I mean total. A sufficiently advanced algorithm will be able to find everything a person has ever posted online (by cross referencing, writing style, etc.) and determine their views and opinions with high accuracy. It'll be able to extrapolate the evolution of a person's opinions.

The government will be able to target dissidents even before they realize they are dissidents, let alone before they have time to organize.

noident · 6h ago
> A sufficiently advanced algorithm will be able to find everything a person has ever posted online (by cross referencing, writing style, etc.)

Is this like a sufficiently smart compiler? :)

Stylometry is well-studied. You'll be happy to know that it is only practical when there are few suspect authors for a post and each author has a significant amount of text to sample. So, tying a pseudonymous post back to an author where anyone and everybody is a potential suspect is totally infeasible in the vast majority of cases. In the few cases where it is practical, it only creates a weak signal for further investigation at best.

You might enjoy the paper Adversarial Stylometry: Circumventing Authorship Recognition to Preserve Privacy and Anonymity by Greenstadt et al.
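To make concrete why both the sample size and the suspect-pool size matter, here is a toy sketch of the kind of character n-gram feature comparison the stylometry literature uses. This is purely illustrative (invented sample strings, nothing like a real deanonymization pipeline, which uses far richer feature sets and much more text):

```python
from collections import Counter
import math

def char_ngrams(text, n=3):
    """Character n-gram counts: a common, cheap stylometric feature."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a, b):
    """Cosine similarity between two sparse count vectors (Counters)."""
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Two stylistically similar samples vs. one very different one.
sample_a = "I reckon the compiler ought to catch that, honestly."
sample_b = "I reckon the linker ought to catch this, honestly."
sample_c = "URGENT!!! click here 4 FREE prizes!!!"

sim_ab = cosine_similarity(char_ngrams(sample_a), char_ngrams(sample_b))
sim_ac = cosine_similarity(char_ngrams(sample_a), char_ngrams(sample_c))
```

With only a sentence or two per author, scores like these separate a handful of candidates but carry far too little signal to pick one author out of millions, which is the point above.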

sitkack · 6h ago
Someone ran a stylometry attack against HN a while ago; it would, with very high confidence, unmask alt accounts on this site. It worked. There is zero reason to believe it couldn't be applied on a grand scale.
noident · 6h ago
That sounds considerably more narrow than what the GP described.

What if I don't have an alternate HN account? Or what if I do have one, but it has barely any posts? How can you tie this account back to my identity?

Stylometry.net is down now, so it's hard to make any arguments about its effectiveness. There are fundamental limitations in the amount of information your writing style reveals.

FilosofumRex · 3h ago
Ted Kaczynski, the Unabomber, was identified from his writing style

https://www.fbi.gov/history/famous-cases/unabomber

BriggyDwiggs42 · 3h ago
By his brother who he wrote loads of letters to, and with whom he discussed a lot of his ideological views. Very bad example.
wombatpm · 2h ago
It was a 28,000 word manifesto
danaris · 4h ago
So, it found some alts.

How do you know it didn't miss 10x more than it found? Like, that's almost definitionally unprovable.

jasonfarnon · 2h ago
I don't know about the original post being referenced, but the ground truth could be known to good approximation. I assume HN has a good idea of which accounts are the same person using IPs, and could actually evaluate the false negative rate.
AStonesThrow · 6h ago
How do y’all prove it worked, O Privacy Wonks?

How do y’all establish ye Theory Of Stylometry, O Phrenology Majors?

O, @dang confirms it on Mastodon or something??

squeeeeeeem · 5h ago
It found 11 of my alt accounts with 90% accuracy. Now I make a new alt every post despite HN rules. sorry, but it is essential to do so.
nixgeek · 3h ago
Adding an extra “e” for each doesn’t create much extra mystery requiring stylometry to unmask you.

More seriously, why is this essential?

schoen · 3h ago
> Adding an extra “e” for each doesn’t create much extra mystery requiring stylometry to unmask you.

That language could be recognized by a deterministic finite automaton!
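Indeed, no stylometry needed. A hypothetical handle family like this (invented names, not anyone's actual accounts) is a regular language, so a one-line pattern "unmasks" every member:

```python
import re

# Handles that differ only in the number of "e"s form a regular
# language; "e+" requires at least one "e".
alt_pattern = re.compile(r"^squ(e+)m$")

handles = ["squeem", "squeeeeeeem", "totally_unrelated", "squm"]
matches = [h for h in handles if alt_pattern.match(h)]
# matches == ["squeem", "squeeeeeeem"]
```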

AStonesThrow · 2h ago
How many "r"s are in "strawberrry"?
fooker · 5h ago
> that it is only practical

You're missing the point: it doesn't have to be practical; the illusion of it working is good enough.

And if authoritarian governments believe it works well enough, they are happy to let a decent fraction of false positives fall through the cracks.

See for example, polygraph tests being used in court.

heylook · 3h ago
Exactly. The President of the United States held up a terribly photoshopped picture of some tattoos and claimed it clearly showed membership in MS13. Half the country immediately decided that was good enough proof for extrajudicial rendition. Authoritarians only care about a thin enough veneer of evidence to give them just enough cover with just enough people to get away with what they want and then move onto the next thing.
dfxm12 · 6h ago
> determine their views and opinions with high accuracy

Truth and accuracy don't matter to authoritarians. They clearly don't matter to Trump: people are being sent away with zero evidence, sometimes without formal charges. That's the point of authoritarianism; the leader just does as he wishes. AI is not enabling Trump, the feckless system of checks and balances is. Similarly, W lied about WMDs to get us into an endless war. It doesn't matter that the reason wasn't truthful. He got away with it and enriched himself and his defense contractor buddies at the expense of the American people.

Terr_ · 5h ago
Right: They often don't care about accuracy, only plausibility they can pick and choose from.
SecretDreams · 2h ago
Well said. It's just a new lever to embolden the same shitty people and behaviours that keep tending to run society into the ground - but at a much larger scale.
blackoil · 4h ago
Listen, you can get one of these local LLMs - and let me tell you, some of them are tremendous, really tremendous - to write exactly like Trump. It's incredible, actually. People will come up to you all the time and they'll say, 'Sir, how do you do it? How do you write so beautifully?' And now, with these artificial intelligence things - which, by the way, I was talking about AI before anyone even knew what it was - you can have them copy his style perfectly. Amazing technology, really amazing.
fc417fc802 · 4h ago
The right to locally maintain fully private AI shall not be infringed ... ?
tabarnacle · 6h ago
Sure.. for folks who don’t worry about anonymity when sharing online. For those who prioritize anonymity, I’m doubtful.
throwanem · 6h ago
So am I. They would be among the first and most quietly vanished in this scenario, being trivially identifiable from a God's-eye view.
fooker · 5h ago
You can identify with a decent amount of confidence whether two paragraphs of text were written by the same person.
exiguus · 6h ago
I'm not entirely convinced that nations will play as significant a role in the coming decades as they have historically. Currently, we observe a trend where affluent individuals are increasingly consolidating power, a phenomenon that is becoming more apparent in the public sphere. Notably, these individuals are also at the forefront of owning and controlling advancements in artificial intelligence. Not coincidentally, this trend is often referred to as 'tech fascism,' bringing us back to the dictator schema.
heylook · 3h ago
Focusing on "nations" specifically is a waste of effort. "Power structures" generically are enough. It doesn't matter whether it's technofascist fiefs, nation-states, the Illuminati, or an up-jumped HOA.
throwanem · 6h ago
States haven't always been a major feature of power. But we've never seen the interaction of personal power with modern weaponry, by which I do not mean nukes. When it was just a question of which debauched noble could afford more thugs or better assassins, sure. But 'how many Abrams has the Doge?'
exiguus · 6h ago
>But 'how many Abrams has the Doge?'

As many as you can control with signal chat.

Besides, I'm not sure tanks like the Abrams are as important anymore. Nowadays, things like food and water really matter; exporting corn is crucial. So is having the raw materials needed to make modern tech, like chips and batteries. That's why Greenland matters.

throwanem · 1h ago
Show me the war where resources weren't important. But you fail to take my meaning, which is that of the two, bureaucracy is far more robust than aristocracy, because power held by aristocracy succeeds by lineage where that by bureaucracy does so by role - that is, in a fashion far less constrained by the available resource of satisfactory personnel.

Every billionaire in function constitutes an aristocracy of one. Meanwhile, states have armed forces. A billionaire, in relying on money for power, implicitly also depends on continued access to the global financial system which gives money meaning, in order to exercise that power. States are not obligated to allow such access, and may easily prevent it in the limiting case by ensuring such trade comes with a side of explosives delivered at speed, which broadly suffices to deter desirable counterparties.

Think what anyone may of such a thing, the fact is having an army or navy or air force means you can do it. Which billionaire has one of those?

nerdsniper · 6h ago
Across history, often the “state” is/was really just a kind of collective umbrella organization to help manage the interests of the powerful.
exiguus · 6h ago
I agree. Initially, this power was embodied by monarchs who claimed divine right, such as god-given kings. Over time, the influence shifted towards corporations that wielded significant economic and political control. Today, it is often the super-rich individuals who hold substantial sway over both economic and political landscapes.
Mountain_Skies · 4h ago
Governments remain the owners of significant weaponry and willingness to kill on a large scale. The tech world has empowered authoritarians, usually to the cheers of the ideologically aligned, but modern tech systems are as incredibly fragile as they are powerful.
kochbeck · 48m ago
I’m just going to leave this here: https://en.m.wikipedia.org/wiki/IBM_and_the_Holocaust
yyyk · 4h ago
Welcome to the future. Increasing technical progress makes the common person much less relevant, and political power will flow to elites as a result.
aussieguy1234 · 2h ago
- Dictator builds AI to help control the humans

- AI realizes it doesn't need the dictator anymore

- AI is now the dictator
AtlasBarfed · 4h ago
This is the great filter upon us more than anything else, even nuclear armageddon.

Virtually every "democracy" has a comprehensive camera monitoring system, taps into comm networks, has access to the full social graph, knows whatever you buy and all your finances, and if you take measures to hide any of it ... knows that you do that.

Previously, the firehose of information being greater than governments' capability to process it was our saving grace from turnkey totalitarianism.

With AI it's right there. A simple button push. And unlike nuclear weapons, it can be activated with no immediate klaxon sounding. It can be ratcheted up like a slow boil, if they want to be nice.

Oh did I forget something? Oh right. Drones! Drones everywhere.

Oh wait, did I forget ANOTHER thing? Right right, everyone has mobile devices tracking their locations, with remote activatable cameras and microphones.

So ... Yeah.

cookiengineer · 1h ago
So ... what do we do to fix it?
ImPostingOnHN · 54m ago
person falling down a huge cliff and hitting all the rocks on the way down: "how do we fix this?"

I dunno, turn back time?

timewizard · 4h ago
We can generate noise. Garbage data. Huge amounts of it. The asymmetry of this tactic is massively in our favor.
bix6 · 3h ago
Can you? How do you spoof your bank accounts?
komali2 · 3h ago
Neal Stephenson seems prophetic in this sense. In "Fall," someone does exactly that: leverages what nowadays we'd call LLMs, but micro ones deployable as a massive botnet, that take a given topic and vomit out unfilterable torrents of garbage about it. In the novel, the proof of concept divides the world into those who believe a small town in Utah was nuked and those who don't.

The deployment completely destroys the internet as well as a large swath of American sovereignty in its own borders, as a portion of the population becomes AI-addled ungovernable jihadists that spend half their time drooling over AI generated images and the other half crucifying heretics.

reverius42 · 1h ago
> a portion of the population becomes AI-addled ungovernable jihadists that spend half their time drooling over AI generated images and the other half crucifying heretics.

That almost sounds like it describes current reality.

antithesizer · 5h ago
If democracy builds supercomputers (and bombs, propaganda, prisons) for autocrats, of what good is democracy? The evidence points strongly to democracy and autocracy being friends, even "good cop, bad cop"
zelphirkalt · 5h ago
Or is it rather, that there are few well working democracies and most are infiltrated by autocrats at least to some degree?
Kapura · 5h ago
the ultra-wealthy in western democracies understand they have much more in common with the ruling autocrats than the average citizen of a democracy (the motherfuckers keep voting for taxes!)
arcanus · 6h ago
I do not find her critique of argument #2 compelling [1]. Monetization of AI is key to economic growth. She's focused on the democratic aspects of AI, which frankly aren't pertinent. The real "race" in AI is between economic and financial forces, with huge infrastructure investments requiring a massive return on investment to justify the expense. From this perspective, increasing the customer base and revenue of the company is the objective. Without this success, investment in AI will drop, and with it, company valuations.

The essay attempted to mitigate this by noting OAI is nominally a non-profit. But it's clear the actions of the leadership are firmly aligned with traditional capitalism. That's perhaps the only interesting subtlety of the issue, but the essay missed it entirely. The omission could not have been intentional, because it provides a complete motivation for item #2.

[1] #2 is 'The US is a democracy and China isn’t, so anything that helps the US “win” the AI “race” is good for democracy.'

bgwalter · 6h ago
The U.S. may be a nominal democracy, but the governed have no influence over the oligarchy. For example, they will not be able to stop "AI" even though large corporations steal their output and try to make their jobs obsolete or more boring.

Real improvements are achieved in the real world, and building more houses or high speed trains does not require "AI". "AI" will just ruin the last remaining attractive jobs, and China can win that race if they want to, which isn't clear yet at all. They might be more prudent and let the West reduce its collective IQ by taking instructions from computers hosted by mega corporations.

gsf_emergency · 4h ago
>anything that helps the US “win”

That is, "the ends justifies the means"? Yep, seems like we are already at war. What happened to the project of adapting nonzero sum games to reality??

credit_guy · 6h ago
Maybe.

I would not do business with Kim Jong Un. He is murdering a lot of his own people. Or with Putin. He is murdering a lot of Ukrainians.

But guess what: both North Korea and Russia are under sanctions. You can't do business with them anyway.

But the UAE is not under sanctions. Which means that in the opinion of the US Government it is ok to do business with them. Then who is Open AI to say otherwise? Why should it be any of their concern to determine who is a good guy or a bad guy in the world? Shouldn't there be a division of responsibilities? Let the Department of State determine who is good and who is bad, and let companies do business with those who are not on the sanctions list.

anthonymartinez · 6h ago
for a while the Pinochet regime was our perfectly acceptable ally in Chile, even though we knew he was a mass murderer. its silly to throw up your hands just because the state department (itself not exactly a bastion of morality) says that its not illegal to do business with someone.
credit_guy · 6h ago
Let me guess: you were against Bush's war in Iraq to take down Saddam. Why? Wasn't it moral to try to eliminate a known mass murderer?

Either it is our duty to be the moral arbiters of the world or it isn't. Which one is it?

jamroom · 6h ago
We didn't go into Iraq because Saddam was a mass murderer; we went in because Bush lied to America that Saddam was trying to get yellowcake uranium to build a bomb. A lot of Americans were against the war because we knew Saddam was not involved in 9/11, but Bush Jr. wanted to finish what his father couldn't in the first Gulf War. Honestly, I would love it if we cared enough about mass murderers to actually go in and help, but I just don't see that being a reason.
Spooky23 · 6h ago
Obviously that’s not true. Pinochet and Saddam were both direct product of US policy and intervention.

In the end, Saddam pulled too hard on the leash and miscalculated his power. Murder, mass or otherwise, and morality have little bearing on matters of empire.

Thinking otherwise is naive.

jfengel · 6h ago
We tolerate quite a few mass murderers in charge of countries. We attacked that one because, supposedly, he had the tools and intent to attack the United States with chemical weapons.

Many were opposed to that war, not because they didn't feel it was right to eliminate a mass murderer, but because that was not the stated reason. The stated reason in fact turned out to be false, and was arguably an abject lie.

In other words ... it's not a great example of what you're trying to claim.

bdangubic · 5h ago
We tolerate quite a few mass murderers in charge of countries

including our own…

sitkack · 6h ago
One can be against a war and at the same time be against the government that war would remove. We killed hundreds of thousands of Iraqis, many of those troops were conscripts who didn't want to be there, yet we bulldozed them into the sand to suffocate, or burned them alive on the highway while they retreated.

There are more than two answers to everything.

> Wasn't it moral to try to eliminate a known mass murderer?

Given the context and the means. No.

xkcd1963 · 4h ago
The US was never a moral arbiter, isn't one now, and never will be. Plenty of places would have been better off without US involvement.
ImPostingOnHN · 6h ago
> Why should it be any of their concern to determine who is a good guy or a bad guy in the world?

Because helping someone do something bad is itself bad.

> Shouldn't there be a division of responsibilities?

It sounds like you mean an abdication of responsibility? We are already responsible for our own choices and actions, as well as their effects.

credit_guy · 5h ago
No, I'm not talking about abdication of responsibility. I'm talking about modesty. It is very appealing to think you know better than other people. That we know how a society should be governed, and we are able to label another country as totalitarian, or undemocratic, or illiberal, or such. But looking around the world, you can see that a lot of evil is perpetrated exactly by people who think they know better than everyone else. Osama bin Laden himself thought that what he was doing was for the advancement of good over evil, and a lot of his followers thought the same.

A lot of the people reading Hacker News right now think they have a better solution for the societal problems of the UAE. I personally have no idea about what's going on over there. But let's say that I'm in charge of the business decisions at Open AI. Should I start thinking that I know a way to solve their problems, and that part of that way is for my company to apply some form of AI embargo on them? Or should I simply know my limitations, and restrict my judgment to the matters I am familiar with?

"Abdication of responsibility". What grand words. Why exactly has Open AI a responsibility to guide the UAE towards a better future? And, more importantly, why should Open AI feel confident that they know what is better for the UAE?

ImPostingOnHN · 1h ago
> I'm talking about modesty.

And I am talking about not doing bad things, which includes helping others do bad things.

> It is very appealing to think you know better than other people.

Not really. Responsibility, principles, and morality are work. It's far more appealing to claim they are someone else's job, dispense with them, turn your brain off, and do whatever feels good.

> A lot of the people reading Hacker News right now think they have a better solution for the societal problems of the UAE.

With all due respect, the percentage of HackerNews readers who are thinking about the UAE right now can be counted on one hand. This discussion is about generalities: Don't help people do bad stuff.

You're arguing about what's bad, and what isn't, and who decides. Interesting topic (perhaps for someone else or for another time)! Would you help someone do something which you think is bad?

cyberax · 4h ago
> Why exactly has Open AI a responsibility to guide the UAE towards a better future?

OpenAI is not responsible for UAE government. However, it's responsible for its own actions, and for their easily predictable consequences.

avazhi · 1h ago
Democracy also isn’t good for democracy. Safe to say it’s a failed idea.
npn · 2h ago
I'm just here to remind all of you that US democracy is a terror for the whole world.

You have already killed ten million people and you still do not have enough?

How bloodthirsty are you?