AI surveillance should be banned while there is still time

547 points by mustaphah | 203 comments | 9/6/2025, 1:52:34 PM | gabrielweinberg.com

Comments (203)

giancarlostoro · 18h ago
I stopped using Facebook because I saw a video of a little Australian girl, maybe 7 years old, holding a spider bigger than her face in her hand. I wrote the most internet-meme comment I could think of, “girl let go of that spider, we gotta set the house on fire”, and hit the button to post. Only it did not post; it gave me an account strike. At the time I was the only developer at my employer who managed our Facebook app integration, so I appealed it, but another AI immediately denied my appeal (or maybe a really fast human, idk, but they sure didn't know meme culture).

I outright stopped using Facebook.

We are doomed if AI is allowed to punish us.

Frost1x · 16h ago
The underlying issue here isn’t AI-based policing, it’s the fact that private entities have enough unregulated influence on people’s daily lives that their use of these or any such policy mechanisms undemocratically affects people in notably significant ways. The Facebook example is, whatever, but what if it’s some landlord making a rental decision, some health insurance company deciding your coverage, etc.?

Now obviously this won’t stop with private entities; state and federal law enforcement are gung-ho to leverage any of these sorts of systems and have been for ages. It doesn’t help that the US specifically is moving in a direction that promotes such authoritarian policies.

lumost · 16h ago
We already live in this world for health insurance. The AI can make plausible-sounding denials which a doctor can rubber-stamp. You have no ability to sue the doctor for malpractice; you cannot appeal the decision.

Medical insurance is quickly becoming a simple scam where you are forced to pay a private entity that refuses to ever perform its function.

immibis · 5h ago
Worth noting this isn't hypothetical. There was a story a while back about a health insurance company that hired real doctors to sit at computers all day clicking "accept AI resolution" over and over (they were fired if they rejected AI resolutions), because the law required a doctor's sign-off.
olddustytrail · 13h ago
Most first world countries don't have this. It's not a given.
azemetre · 12h ago
The US is usually a hotbed of experimentation in corporate malfeasance.
abustamam · 8h ago
As an American, it's funny how ahead and "first world" the US can be in some things, but how backwards and "developing country" the US can be in other things.

Medicine itself is very first-world. But medical insurance is one of those "worse than developing country" things. The fact that Americans need medical insurance at all is appalling to many countries, first world and otherwise.

And of course, by funny I mean "I can only laugh otherwise I'd cry"

gregoryl · 7h ago
Which things are the US ahead in?
abustamam · 6h ago
Good question. Technology, for one. Is it the first in technology? Probably not. But when comparing first world countries with developing countries, technology is where the US's economic output is.

And also military, though I'm not sure if that's something to be proud of.

philipallstar · 4h ago
Immigration
giancarlostoro · 10h ago
I mean, I no longer work at this place... and I have no idea what % of customers used Facebook to log in to their accounts, but I'm sure someone would have been mad they couldn't get the famous butter biscuits reward if I had gotten banned and Facebook had proceeded to ban our FB app. ;)
Ray20 · 13h ago
> what if it’s some landlord renting making a decision, some health insurance company deciding your coverage, etc.

Then you simply use the services of another private company. Here, in fact, there are no particular dangers; after all, private companies provide services to people because it is profitable for them to do so.

BiteCode_dev · 12h ago
This works only if all of the following are true:

- There is real competition. This is less and less the case for many important things, such as food, accommodation, health, etc.

- Companies pay a price for misbehaving that is much higher than what they gained from misbehaving. Also less and less the case, thanks to lobbying, huge law firms, corruption, etc.

- The cost of switching is fair. Moving to another place is very expensive. Doing it several times in a row is rarely possible for most people.

- Some practices are not simply generalized across the whole industry. In IT, tracking is, spying is, and preventing you from managing your own device is more and more trendy.

Basically, this view you are presenting is increasingly naive and even dangerous for any citizen practicing it.

terribleperson · 16h ago
I'm suspended on Reddit for, as best as I can tell, posting a DFHack bug-fix script. For the uninitiated, this is a bug-fix script for a tool commonly used in Dwarf Fortress modding, not anything illicit.

If this kind of low-quality AI moderation is the future, I'm not sure if these major platforms will even remain usable.

kbaker · 12h ago
Lol. I got perma-banned for violating rules under my alt accounts.

But I don't have any alt accounts...??? Appeal process is a joke. I just opted to delete my 12 year old account instead and have stopped going there.

Oh well, probably time for them to go under and be reborn anyway. The default subs and front page have been garbage for some time.

randycupertino · 11h ago
I'm permanently banned from Reddit for calling a "power mod" of SF Bay Area a nimby when he was complaining about increased traffic in his neighborhood and that they had to put a stop sign up at a major intersection.

They IP- and hardware-device banned me, it's crazy! Any appeal is auto-rejected and I can't make new accounts.

cmxch · 10h ago
This is where someone does need to buy up Reddit to clean that out.
terribleperson · 8h ago
...is nimby a dirty word now or something?
immibis · 5h ago
Reddit has been garbage since the effective private equity takeover.

I'm not sure if they actually got taken over by private equity, but they acted like it since about a year before the third-party app tantrum.

busymom0 · 14h ago
Meanwhile the "new" Digg reboot plans on using AI moderators too...
terribleperson · 13h ago
The thing that annoys me is that I could see value in AI moderation. Instead of scanning every post with AI using overly broad criteria (and probably lower-power models), use AI to prescreen reports and determine whether they're worth surfacing to a human. It could also be used to put temporary holds on material that's potentially criminal or just way over the line, but those holds should go to the very front of the human review queue, to either be lifted or have the content deleted.

Real moderation actions should not be taken without human input and should always be appealable, even if the appeal is just another mod looking at it to see if they agree.
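
To be concrete, here's a toy sketch of the flow I mean. Every name, classifier, and threshold is invented, it's just the shape of the idea:

    import heapq

    HOLD = 0.95     # "way over the line": hide immediately, review first
    SURFACE = 0.5   # worth surfacing to a human at all

    review_queue = []  # min-heap of (priority, report_id); priority 0 jumps the line

    def fake_severity(text):
        # stand-in for a real classifier; purely illustrative
        return 0.99 if "illegal" in text else 0.3

    def hold_content(report_id):
        print(f"{report_id}: temporarily hidden pending human review")

    def prescreen(report_id, text):
        severity = fake_severity(text)
        if severity >= HOLD:
            hold_content(report_id)                       # a hold, not a deletion
            heapq.heappush(review_queue, (0, report_id))  # front of the human queue
        elif severity >= SURFACE:
            heapq.heappush(review_queue, (1, report_id))
        # below SURFACE: filed, but nothing automated ever punishes the user

    prescreen("r1", "borderline spam maybe")
    prescreen("r2", "clearly illegal material")
    print(heapq.heappop(review_queue))  # (0, 'r2'): the hold gets reviewed first

The key property: the AI only routes and holds; a human makes every actual call.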

floundy · 16h ago
That's hilarious given that Reddit is utterly overrun with blatant, low-quality LLM accounts using ChatGPT to post comments and gain karma, and several of the "text stories" on the front page from subs like AITA are blatant AI slop that the users (or other bots?) are eating up.

I suspect sites like Reddit don't care about a few percent false-positive rate, without considering that bot farmers literally do not care (they'll just make another free account), whereas genuine users will turn significantly negative on the site when they're falsely actioned.

Don't worry, Reddit's day of reckoning comes when the advertisers figure out what percent of the Reddit traffic they're paying to serve ads to is just bots.

terribleperson · 15h ago
I don't know if the day of reckoning will be anytime soon. I think a lot of major advertising firms are aware that they're mostly serving to bots, but if they tell their customers that, they don't get paid.

edit: This has definitely soured my already poor opinion of reddit. I mostly post there about video games, or to help people in /r/buildapc or /r/askculinary. I think I'd rather help people somewhere I'm not going to get blackholed because an AI misinterpreted my comments.

viccis · 15h ago
>That's hilarious given that Reddit is utterly overrun with blatant, low-quality LLM accounts using ChatGPT to post comments and gain karma, and several of the "text stories" on the front page from subs like AITA are blatant AI slop that the users (or other bots?) are eating up.

Check out this post [1] in which the post includes part of the LLM response ("This kind of story involves classic AITA themes: family drama, boundary-setting, and a “big event” setting, which typically generates a lot of engagement and differing opinions.") and almost no commenter points this out. Hilarious if it weren't so bleak.

1: https://www.rareddit.com/r/AITAH/comments/1ft3bt6/aita_for_n... (using rareddit because it was eventually deleted)

threeducks · 14h ago
Over the past two years, I have also seen many similar stories where the majority of users were unable to recognize that these stories were AI-generated. I fear for the future of democracy if the masses are so easily deceived. Does anyone have any good ideas on how to counteract this?
utyop22 · 12h ago
Literacy rates have been falling off a cliff for decades.

If there's no literacy, there is no critical thinking.

The only solution is to deliver high quality education to all folks and create engaging environments for it to be delivered.

Ultimately it comes down to influencing folks to think deeper about what's going on around them.

Most of the people between the ages of 13 and 30ish right now are kinda screwed and pretty much a write-off imo.

heavyset_go · 7h ago
> Don't worry, Reddit's day of reckoning comes when the advertisers figure out what percent of Reddit's traffic that they're paying to serve ads to are just bots.

No it won't, we'll all have to upload our IDs and videos of our faces just to register or use Reddit or any social media. They will know who is a real monetizable user or not.

trod1234 · 16h ago
It's not just Reddit; this is happening on all networked platforms. FB, X, HN, Reddit, they are all in the exact same boat, and some are quite a bit worse than Reddit.

This is surreptitious jamming of communications at levels that constitute and exceed thresholds for consideration as irregular warfare.

Genuine users no longer matter, only the user counts which are programmatically driven to distort reflected appraisal. The users are repressed and demoralized because of such false actions, and the platform has no solution because regulation failed to act at a time they could have changed these outcomes.

What comes later will simply be comparable to why "triage" is done on the battlefield.

Adtech is just a gloriously indirect means of money laundering in fiat money-printing environments. Credit/debt being offered unbacked, without proper reserve, is money-printing.

trod1234 · 16h ago
I'm banned from a few subreddits for correctly pointing out that ricing is not a pejorative, and for explaining the history of the culture that led to extreme customization.

You have malevolent third-party bots taking advantage of poor moderation, conflating same-word, different-context pairs to silence communication.

For example, the Reddit AI bots consider "ricing" to be the same as "rice boy". The latter definitely is pejorative, but the former is not.

Just wild and absolutely crazy-making that this is even allowed, since communication is the primary means to inflict compulsion and torture these days.

Intolerable acts without due process or a rule of law lead to only one possible outcome. Coercion isn't new, but the stupid people are trying their hand for another bite at the apple.

The major platforms will not remain usable because eventually you get this hollowing out of meaning, and this behavior will either drive away all your rational intelligent contributors, or lead to accelerated failures such as evaporative cooling in the social networks. People use things because they provide some amount of value. When that stops being the case, the value disappears not overnight, but within a few months.

Just take a look at the linuxquestions subreddit since the mod exodus. They have an automated trickle of the same questions that don't really get sufficiently answered. It's all slop.

All the experienced people who previously shared their knowledge as charity have moved on, because they were driven out by caustic harassment and a lack of proper moderation to prevent it. The mod list even hides who the mods are now, so people who have been moderated can't point the Reddit administrators at the specific moderator who acted like a fascist dictator incapable of the basic reading comprehension common to grade schoolers (AI).

noah_buddy · 15h ago
I thought you were going to point out distinct etymology, but these terms do seem linked, no? Not surprising that the shared lineage confers shared problems.
navane · 3h ago
I had to Google it, but apparently it comes from cars, where a riced car has stripes and spoilers and such to make it look like a race car.

One source claimed "rice" stood for race-inspired custom something (I forget the rest), the other did claim a link with Asian street racing.

trod1234 · 14h ago
The two are unconnected, one is used as a pejorative which is racist, the other isn't. This is not a hard distinction to make if you aren't a bot.

<Victim> "I'm ricing my Linux Shell, check it out." <Bot> That's Racist!

<Bot Brigade> Moderator this person is violating your rules and being racist!

<Moderator> I'm just using AI to determine this. <Bot Brigade> Great! Now they can't contribute. Lets find another.

TL;DR Words have specific meanings, and a growing number of words have been corrupted purposefully to prevent communication, and by extension limit communication to the detriment of all. You get the same ultimate outcomes when people do this as any other false claim. Abuses pile up until eventually in the absence of functioning non-violent conflict resolution; violence forces the system to reform.

Have you noticed that your implication is circular based on the indefinite assumption (foregone conclusion) that the two are linked (tightly coupled)?

You use a lot of ambiguous manipulative language and structure. Doing that makes any reasonable person think you are either a bot, or a malicious actor.

bitwize · 15h ago
Per the Linux Foundation it is inappropriate to speak of programs as "hung" due to the political and social history associated with hanging. What makes you think "ricing" would be considered acceptable language in today's context?
ants_everywhere · 15h ago
huh, the past tense of the violent "hang" is "hanged" not "hung". https://www.thefreedictionary.com/hung

"hung" means to "suspend", so the process is suspended

ChrisMarshallNY · 9h ago
It also means … ahem … a well-endowed gentleman.

I’m not sure that AI would necessarily make that mistake, but a semiliterate mod very much could.

I think the real issue is the absolute impossibility of appeal. This is a big problem for outfits like Google or Microsoft, where stories of businesses being shut down by false-positive bans are fairly common.

In my experience, on the other hand, Apple has always been responsive to appeal. I have released a lot of apps, and have had fairly frequent rejections. The process is annoying, because they seldom tell you exactly what caused the rejection, but I usually figure it out, after one or two exchanges. They are almost always word-choice issues, but not bad words. Rather, they don’t like stuff that can step on their brands.

I once had an app rejected, because it had the word “Finder” in its name (it was an app for finding things).

The annoying thing was that the first rejection said it was because the app was a simple re-skinning of a Web site. I’m positive that what happened was that a human assessor accidentally tapped the wrong button on their dashboard.

blibble · 15h ago
I don't remember voting the Linux Foundation into power as global word police
trod1234 · 14h ago
I also don't remember voting to allow doublespeak to be the dominant form of definition.
wkat4242 · 8h ago
The Linux Foundation is a bunch of business suits trying to tell the grassroots FOSS community to follow corporate interests. I don't take them seriously.

Just look at their list of directors. It's the fortune 500 right there.

skipants · 14h ago
I've already had pre-screen coding tests judged by AI. It's unsettling.

I also had a CV rejection letter with AI rejection reasons in it, which was frustrating because none of the reasons matched my CV at all, in my opinion. I am still not sure if the resume was actually reviewed by a human or an AI, but I am assuming the latter.

I absolutely hated looking for a new job pre-AI and when times were good. Now I'm feeling completely disillusioned with the whole process.

j45 · 2h ago
Comment that code to guide it :)
dude250711 · 13h ago
They probably generated a generic AI response once and just copy-pasted it, thus saving their time twice. They did not do one scummy thing, these are just overall scummy people.
iammrpayments · 16h ago
The way Facebook uses automation to manage their products is a disgrace; they can’t even keep themselves from automatically banning a lawyer who happens to be called Mark Zuckerberg: https://www.huffpost.com/entry/mark-zuckerberg-lawsuit-imper...

If you advertise on Facebook you’re almost guaranteed to have your ad account restricted for no apparent reason, with no human being to appeal to, even if you spend big money.

It’s so bad that it's common knowledge that you should start a fan page, post random stuff, and buy page likes for 5-7 days before you start advertising; otherwise their system will just flag your account.

hdgvhicv · 16h ago
Yet oddly they never seem to be able to stop scam adverts.
azemetre · 12h ago
Those are their best customers, they spend money and ask no questions.
tomrod · 15h ago
I've tried to open an account for a while now, consistently blocked. At this point I have to assume its stock is a scam.
philipallstar · 18h ago
Maybe it banned you because you used a comma splice[0].

[0] https://en.m.wikipedia.org/wiki/Comma_splice

mmaunder · 17h ago
Burn the witch.
gcanyon · 16h ago
damn, you smoked them, please stop, they've had enough, don't be so, cruel
andrewflnr · 12h ago
> don't be so, cruel

We've got the real criminal right here.

wkat4242 · 8h ago
Facebook is mostly using AI now. Early this year they closed their entire moderation office in our city and fired the thousands of people who worked there. There was no training handover to other teams, like there was before when part of the work was moved to a lower-wage country.
zouhair · 10h ago
The problem is how easy it is for AI with full surveillance to shape one's opinions. We are mostly doomed as a species.
morkalork · 16h ago
Not using Facebook doesn't help either, though. My new co-worker didn't have an account but needed one for his job, so he made one and was immediately flagged for suspicious activity.
immibis · 1h ago
One person got their company banned from Google by getting themselves banned from Google and having Google think the company account was an alt. The company made phone apps.
smcin · 15h ago
What did he do that was deemed suspicious? Send a large number of friend requests very quickly after joining (<24hrs? 1wk?)? Follow requests? Upvotes? Log on from multiple devices in multiple locations? Put third-party links in his bio or profile?

(LinkedIn ramped up anti-bot/inauthentic-user heuristics like that a few years ago. Sadly they are necessary, but it's near-impossible for heuristics to distinguish real humans whose behavior merely looks inauthentic or suspiciously commercial.)
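
As a toy illustration of the kind of scoring involved (all signals and weights invented; the real ones are secret and change constantly):

    def risk_score(account):
        # hypothetical signals a platform might weigh on a brand-new account
        score = 0
        if account["age_hours"] < 24:
            score += 3
        if account["friend_requests_day1"] > 50:
            score += 3
        if account["distinct_login_regions_day1"] > 2:
            score += 2
        if account["bio_has_external_links"]:
            score += 1
        return score  # above some cutoff -> flag or suspend

    # a perfectly genuine, eager new user can trip it:
    newbie = {"age_hours": 5, "friend_requests_day1": 80,
              "distinct_login_regions_day1": 1, "bio_has_external_links": True}
    print(risk_score(newbie))  # 7

Which is exactly why legit humans get caught in the net.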

junon · 15h ago
I had the same thing happen on both Facebook and Twitter. The answer is: nothing.

In both cases for me, I had signed up and logged in for the first time, and was met with an immediate ban. No rhyme or reason why.

I, too, needed it for work, so I had no prior history from my IPs, in the case of Facebook at least. So maybe that's why, but still. Very aggressive and annoying blocking behavior like that cost them some advertising money, as we just decided it wasn't worth it to even advertise there.

renewiltord · 5h ago
If they suspect you're trying to evade a ban they'll ban your account, but their ban-evasion detection is trigger happy.
malfist · 14h ago
That's a lot of victim blaming in your post, without any evidence to support it.
smcin · 13h ago
Don't talk nonsense like "victim-blaming" or outrage-trolling; we're not on Twitter. I gave the OP useful, actionable, constructive advice (based on the experience of multiple friends of mine, which in some cases took me hours to debug) about which (often innocent) behaviors tend to trigger anti-inauthenticity heuristics on a legit user's account, especially a very new one (<24h or <7d old). (For obvious reasons social sites won't tell you the heuristics, and they vary them often, and almost surely combine other information on that device: type, IMEI, IP address, subnet, other sites' activities by that user, etc.) Ok?

Nowhere did I justify social sites getting things wrong or not having better customer support to fix cases like this.

Also the good-faith translation of "Without any evidence to support it" is "How/where did you find that out?", but even at that I had already given you some evidence: "LinkedIn ramped up anti-bot/inauthentic-user heuristics like that a few years ago." Ask good-faith questions instead of misrepresenting people. If you actually read my other HN posts you won't see me condoning large social-media sites behaving badly.

toasted-subs · 16h ago
Yeah, I've basically refused to ever have kids because I can't imagine them growing up knowingly becoming slaves. As a parent I'd basically expect CPS to be called on me for having kids in this environment.
junon · 15h ago
What in the world are you talking about?
tomrod · 15h ago
If it wasn't AI slop, it seems close.
alphazard · 20h ago
I expect we will continue to see the big AI companies pushing for privacy protections. Sam Altman made a comparison to attorney-client privilege in an interview. There is a significant holdout against using these things as fully trusted personal assistants or personal knowledge bases because of the lack of privacy.

The only real solution is locally running models, but that goes against the business model. So instead they will seek regulation to create privacy by fiat. Fiat privacy still has all the same problems as telling your therapist that you killed someone, or keeping your wallet keys printed out on paper in a safe. It's dependent on regulations and definitions of greater good that you can't control.

dataviz1000 · 19h ago
> but that goes against the business model.

Not if you are selling hardware. If I were Apple, Dell, or Lenovo, I would be pushing for locally running models, supporting Hugging Face, while developing at full speed systems that can do inference locally.

alphazard · 19h ago
Local models do make a lot of sense (especially for Apple), but it's tough to figure out a business model that would cause a company like OpenAI to distribute weights they worked so hard to train.

Getting customers to pay for the weights would be entirely dependent on copyright law, which OpenAI already has a complicated relationship with. Quite the needle to thread: it's okay for us to ingest and regurgitate data with total disregard for how it's licensed, but under no circumstances can anyone share these weights.

ronsor · 17h ago
> Getting customers to pay for the weights would be entirely dependent on copyright law

That's assuming weights are even covered by copyright law, and I have a feeling they are not in the US, since they aren't really a "work of authorship"

dataviz1000 · 19h ago
> Getting customers to pay for the weights

Provide the weights as an add-on for customers who pay for hardware to run them. The customers will be paying for weights + hardware. I think it is the same model as buying the hardware and getting macOS for free. Apple spends $35B a year on R&D. Training GPT-5 cost ~$500M. It is a nothing-burger for Apple to create a model that runs locally on their hardware.

novok · 19h ago
That is functionally much harder to pull off than with software, because model weights are essentially more like raw media files than code, and those are much easier to move to another runtime.
firesteelrain · 19h ago
Codeium had an airgap solution until they were in talks with OpenAI and pulled it back. It worked on-prem and they even told you what hardware to buy.
novok · 16h ago
You can still extract the model weights from an on-prem machine. It has all the same problems as media DRM, and large enterprises do not accept unknown recording and surveillance that they cannot control.
firesteelrain · 15h ago
I am not sure what you mean. I work at a large Enterprise and we did not unleash it on our baseline and it couldn’t phone home but it was really good for writing unit tests. That sped things up for us.
esseph · 18h ago
There is no moat.
Juliate · 19h ago
> it's tough to figure out a business model that would cause a company like OpenAI to distribute weights they worked so hard to train.

It sounds a lot like the browser wars, where the winning strategy was to aggressively push one's platform (for free, which was rather uncommon then), in the aim of market dominance for later benefits.

Wowfunhappy · 19h ago
Notably, Apple is pushing for local models, albeit not open ones and with very limited success.
utyop22 · 19h ago
Apple will eventually figure it out. Remember the iPhone took 5 years to develop - they don’t rush this stuff.
kjkjadksj · 17h ago
Why do that when OpenAI might pay you billions to be the primary AI model on your system?
username332211 · 19h ago
> Fiat privacy still has all the same problems as telling your therapist that you killed someone, or keeping your wallet keys printed out on paper in a safe.

They could take a lesson from churches. If LLM providers and their employees were willing to commit to privacy and were willing to sacrifice their wealth and liberty for the sake of their clients, society would yield.

I remember seeing a video of a certain Richard Masten, a CrimeStoppers coordinator, destroying the information he had on a confidential source right in the courtroom under the threat of a contempt charge and getting away with a slap on the wrist.

In decent societies standing up for principles does work.

socalgal2 · 14h ago
> Sam Altman made a comparison to attorney-client privilege in an interview

Isn't his company, OpenAI, the one that said they monitor all communications and will report anyone they think is a threat to the government?

https://openai.com/index/helping-people-when-they-need-it-mo...

> If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement.

I get that they are trying to do something positive overall. At the same time, I don't want corp-owned AI that's monitoring everything I ask it.

IIRC it is illegal for the phone company to monitor and censor communications. The government can ask a judge for permission for police to monitor a line, but otherwise it's illegal. But now with AI transcription it won't be long until a company can monitor every call, transcribe it, feed it to an LLM to judge, and decide which lists you should be on.

felipeerias · 11h ago
But there isn’t a person on the other side whom you are reaching through their service. The only communication is between you and the OpenAI server that takes in your input message and produces an output.

I understand that people assume LLMs are private, but there isn't any guarantee that is the case, especially when law enforcement comes knocking.

sensanaty · 3h ago
Sam "Harvest Your Biometric Data For A Scamcoin" Altman? Real trustworthy bloke, I'm sure. We should all buy some worldcoin by giving him our eye scans, in the name of privacy of course.
Melting_Harps · 1h ago
> Real trustworthy bloke

You do realize he became a kingmaker by positioning himself in YC, which is the owner/operator of Hacker News. What makes you think you are not being traced here, and your messages are not being used to train his LLM?

As for him being a conman: if you haven't realized that most of the SV elite that this place worships are conmen (see the Trump dinner this week) with clear ties to the intelligence agencies (see the newly appointed generals who are C-suite in several Mag 7 corps), who will placate a fascist in order to push their agendas, then you simply aren't paying attention.

His scam coin is the most insipid item on his rap sheet at this point, and I say this as a person who has seen all kinds of grifting in that space.

j45 · 2h ago
I wonder if pushing for privacy protections is due to seeing such concerns of intrusion on privacy already on the horizon.
floundy · 16h ago
The tech companies have wised up. They'll continue to speak idyllically about what "should be" and maybe even deploy watered-down versions of it, but really they're just buying time until they can get even bigger and capture more power before the government even thinks of stepping in. The nice thing about being first to market is you can abuse the market, abuse customers, and pay a few trivial class-action lawsuits along the way; then, when regulations finally lag along, you've got hundreds of billions worth of market power behind you to bribe the politicians. The US govt won't do anything about AI companies for at least 5 years, and when they do, OpenAI, Google, and Meta will all be sitting at the table holding the pen.
mathgradthrow · 17h ago
Anyone can just kill you whenever they want. Security cannot be granted by cryptography, only secrecy.
alfalfasprout · 15h ago
You really think that Altman won’t turn around and start selling ads once enough people are on OpenAI’s “trusted” platform?
cousin_it · 20h ago
This is a great point. Everyone who has talked with chatbots at all: please note that all contents of your past conversations with chatbots (that already exist now, and that you can't meaningfully delete!) could be used in the future to target ads to you, manipulate you financially and politically, and sell "personalized influence on you specifically" as a service to the highest bidder. Just wanted to make sure y'all understand that.

EDIT: I want to add that "training on chat logs" isn't even the issue. In fact it understates the danger. It's better to imagine things like this: when a future ad-bot or influence-bot talks to you, it will receive your past chatlogs with other bots as context, useful to know what'll work on you or not.

EDIT 2: And your chatlogs with other people I guess, if they happened on a platform that stored them and later got desperate enough to sell them. This is just getting worse and worse as I think about it.

hkon · 19h ago
For me, what is most scary about AI chatbots is the interface they give an exploiter.

They can just prompt "given all your chats with this person, how can we manipulate him to do x"

Not really any expertise needed at all; let the AI do all the lifting.

mycall · 17h ago
Turn that around and think of the AI itself as the exploiter. In a world of agent-driven daily tasks, AI will indeed want to look at your historical chats to find a way to "strongly suggest" you do tasks 1..[n] for whatever master plan it has for its user base.
matheusmoreira · 13h ago
Ah yes, the plot of Neuromancer. Truly interesting times we are living in. Man made horrors entirely within the realm of our comprehension. We could stop it but that would decrease profits so we won't.
bethekidyouwant · 19h ago
I can see how this would work if you just turned off your brain and thought, "of course this will work."
lordhumphrey · 19h ago
hhh · 18h ago
different flavour gpt wrapper
hkon · 18h ago
Which you of course already have done.
HPsquared · 19h ago
Or literally read out in court if you discuss anything relevant to a legal case.
pwython · 18h ago
Honestly, retargeting/personalized ads have never bothered me. If I'm gonna see ads anyway, I'd much rather get ads that might actually interest me, versus wildly irrelevant pharmaceutical drugs and other nonsense.
cousin_it · 18h ago
The ads won't be for the product which will bring you maximum value. They will be for the product that will bring the advertiser maximum profit (for example, by manipulating you into buying something overpriced). The products which are really good and cheap, giving all their surplus value to you and just a little bit to the maker, will lose the bidding for the ad slot.
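
A toy example with invented numbers makes the point:

    # each advertiser's max bid per impression is margin x click probability
    products = [
        # (name, price, cost, value_to_you, click_prob)
        ("good cheap gadget", 20, 18, 50, 0.05),  # great deal for you, thin margin
        ("overpriced gadget", 90, 20, 30, 0.04),  # bad deal for you, fat margin
    ]

    for name, price, cost, value, p in products:
        print(name, "| your surplus:", value - price,
              "| max bid:", round((price - cost) * p, 2))
    # good cheap gadget | your surplus: 30 | max bid: 0.1
    # overpriced gadget | your surplus: -60 | max bid: 2.8

The overpriced product can outbid the good one 28 to 1 for the slot, so it's the one you see.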
jackphilson · 5h ago
Right. I think AI on the user's side is going to be necessary soon. Then it can negotiate with the advertiser's AI to determine what to show. This will need to be at the platform level or the hardware level.

This solves the problem of seeing ads that are not best for the user.

username332211 · 17h ago
Not necessarily. If economies of scale exist, a popular product is going to be inherently superior in terms of price or quality to an unpopular one. Companies that advertise effectively can offer a better product precisely because they advertise and have large market share. (Whether they do it or not is a question of market conditions, business strategy, public policy, and ultimately their own decisions.)

Surplus value isn't really that useful of a concept when it comes to understanding the world.

reaperducer · 16h ago
> a popular product is going to be inherently superior in terms of price or quality to an unpopular one

This is so far from the reality of so many things in life, it's hard to believe you've thought this through.

Maybe it works in the academic, theoretical sense, but it falls down in the real world.

username332211 · 16h ago
Really? Because the most common place I've seen this logic break down is the bizarre habit of people deriving some sort of status and self-worth from using an unpopular product, and then vehemently defending that choice in the face of all evidence to the contrary.

No "artisanal" product, from food to cosmetics to clothing and furniture is ever worth it unless value-for-money (and money in general) is of no significance to you. But people buy them.

I really can't go through every product class, but take furniture as a painfully obvious example. The amount of money you'd have to spend to get furniture of a similar quality to IKEA's is mind-boggling. Trust me, I've done it. Yet I know of people in Sweden who put considerable effort into acquiring second-hand furniture because IKEA is somehow beneath them.

Again, there are situations where economies of scale don't exist, and situations where a business may not be interested in selling a cheaper or superior product. But they are rarer than we'd like to admit.

esafak · 17h ago
Then why in twenty years of personalization am I still seeing junk ads? I don't want to hear about your drop-shipping or LLM wrapping business. The overwhelming majority of ads are junk. Yes, they bother me.
Nasrudith · 8h ago
Because they are selling to the advertisers and to their own imagination of their "brand". If the advertising customer base weren't stupid, then "advertiser friendly" wouldn't exist as a concept; they would be smart enough to realize that you won't offend people by advertising against content they're watching for entertainment, or next to people saying "die".
BLKNSLVR · 12h ago
I'm the complete opposite and don't really understand your position.

I'd rather see totally irrelevant ads because they're easy to ignore or dismiss. Targeted ads distract your thought processes explicitly because they know what will distract you; make you want something where there was previously no wanting. Targeted advertising is productised ADHD; it is anti-productive.

Like the start of Madness' One Step Beyond: "Hey you! Don't watch that, watch this!"

apparent · 18h ago
Part of the issue is that this enables companies to give smaller discounts to people they identify as more likely to want a product. The net effect of understanding much more about every person on earth is that people will increasingly find the price of goods to be just about the max they would be willing to pay. This shifts more profit to companies, and ultimately to the AI companies that enable this type of personalization.
jazzyjackson · 18h ago
I wish I could fund an ad campaign to free people from the perception that ads are there to sell you products.

Ads are there to change your behavior to make you more likely to buy products, e.g., put downward pressure on your self esteem to make you feel "less than" unless you live a lifestyle that happens to involve buying X product

They are not made in your best interest; they are adversarial psycho-tech that has a side effect of building an economic and political profile on you, for whoever needs to know what messaging might resonate with you.

FollowingTheDao · 18h ago
This, yes, thank you. Advertising is behavioral modification. They even talk about it out in the open, and if you are unconvinced, hear it from the horse's mouth:

https://brandingstrategyinsider.com/achieving-marketing-obje...

"Your ultimate marketing goal is behavior change — for the simple reason that nothing matters unless it results in a shift in consumer actions"

hdgvhicv · 16h ago
It is literally brainwashing.

Brainwashing is the systematic effort to get someone to adopt a particular loyalty, instruction, or doctrine.

reaperducer · 16h ago
> Ads are there to change your behavior to make you more likely to buy products

You have described one type of ad. There are many many types of ads.

If you were actually knowledgeable about this, you'd know that basic fact.

fsflover · 1h ago
Could you enlighten us?
jazzyjackson · 9h ago
rude
koolala · 18h ago
If I'm going to be psychologically manipulated, I want my psychological profile to be tracked and targeted specifically to my everyday behaviors.


fsflover · 14h ago
This is not how personal targeting works. Here's how:

> Each Shiftkey nurse is offered a different pay-scale for each shift. Apps use commercially available financial data – purchased on the cheap from the chaotic, unregulated data broker sector – to predict how desperate each nurse is. The less money you have in your bank accounts and the more you owe on your credit cards, the lower the wage the app will offer you.

https://pluralistic.net/2024/12/18/loose-flapping-ends/#luig...

kjkjadksj · 17h ago
You get ads that actually interest you with targeted ads? You might be one of the only people with that experience. The whole meme with targeted ads is "I looked up toilet paper on Amazon once and now I get ads for Charmin all over the web."
fragmede · 17h ago
I stopped using TikTok and Instagram because I was impulse-purchasing too much stupid crap from their advertisements. So there are at least two of us out there.
kjkjadksj · 3h ago
What compelled you to buy any of that junk?
0x696C6961 · 20h ago
AI makes large scale retroactive thought policing practical. This is terrifying.
apples_oranges · 19h ago
I think people will come to the conclusion that they don’t want to post anything online. Private chats, if they exist, will stay, but already, if you generate a work of art, you more or less donate it to the shareholders of Google or Meta. And this is even before the thought policing and similar implications.
dude250711 · 13h ago
"...post anything online..." you probably mean "express true thoughts or emotions externally in a proximity of an electronic device".
sherburt3 · 16h ago
I would argue that's been happening for a long time with much simpler methods. Take this website, for example: I get points for posting comments that people upvote. If I post a comment that people don't like, I get downvoted. I obviously want more points (because...?), so I naturally work to craft the comment that I believe will net the most upvotes. This isn't 100% terrible, because it helps weed out the assholes, but I think it has much more insidious effects.

Feel free to call me an accelerationist but I hope AI makes social media so awful that no one wants to use it anymore. My hope is that AI is the cleansing fire that burns down social media so that we can rebuild on fertile soil.

j45 · 20h ago
Like search histories but far more.
tim333 · 16h ago
Roko's basilisk is on its way.
sterlind · 11h ago
I'm not worried about the Basilisk, I'm worried about the AI researchers who believe they're being blackmailed by the Basilisk into creating it.

...hm. maybe I am worried about the Basilisk, then.

bArray · 16h ago
Protected chats? That ship has already sailed; text messages via the phone network have been MITM'd for a very long time.

Even in real life, the police in the UK now deploy active face recognition and make tonnes of arrests based on it (sometimes wrongly). Shops are now looking to deploy active face recognition to detect shoplifters (although it's unclear legally what they will actually do about it).

The UK can compel any person travelling through the UK to hand over their passwords and devices; you have no right to appeal. Refusing to hand over the password can get you arrested under the Terrorism Act, where they can hold you indefinitely. When arrested under any terrorism offence you also have no right to legal representation.

The days of privacy sailed unnoticed.

8bitsrule · 4h ago
No devices, no passwords were ever required of any of us (yet). We deliberately walked into this, we can walk right back out again. Will we have to make do with less? (Did we really need it anyway?)

While I owned a cellphone, I never left the house with it. While using it, my location was always known.

None of this is terribly new. For decades our names and addresses were collected in telephone books (along with our numbers) that were given away to everyone. Telecorp doxxing. That too was involuntary.

The company whose buses I carded into (instead of paying cash) knew where I got on and got off. Once you get tuned into these 'services', you are in a position to limit their accuracy. That skill can be refined. We don't have to become predictable.

pjjpo · 2h ago
AFAIK there isn't currently any surveillance on search: searching for how to make a bomb does not lead to police at your doorstep the next day. But the police raid after a mass shooting generally turns up plenty of similar searches.

Maybe it could be good to have some integration between this data and law enforcement, to reduce the chance of it leading to tragedy? Maybe start not with crime but suicide: I think a search result telling you to call this number if you are feeling bad saves far fewer lives than a feed into social workers potentially could.

Just a thought, and this isn't having a computer sentence someone to prison, but providing data to people to, in the end, make informed decisions to try to prevent tragedy. Privacy is important to a degree, but treating it as absolute seems to waste potential to save lives.

citizenpaul · 18h ago
Let's not forget that Gabriel Weinberg is a two-faced ghoul, or a wolf in sheep's clothing. He has literally said he does not believe people need privacy, yet that is supposedly DuckDuckGo's main selling point. He has made all kinds of tracking deals with other companies, so DuckDuckGo "is not tracking you", just their partners are.

Most of the controversial stuff he has done is being whitewashed from the internet and is now hard to find.

hsartoris · 17h ago
This is a pretty serious allegation, but cursory searching didn’t yield anything. Do you have any sources you can point to? Being as it’s very difficult to actually ‘whitewash’ things from the internet, I would expect there is something to point to. Thanks!
mixmastamyk · 13h ago
They use(d?) Bing and collect extensive metrics, like exactly what you click. I have confirmed this with browser tools, and mitigate it with AdGuard and plugins.

They must sell it somehow. Likely, but I have not seen evidence.

kogasa240p · 17h ago
Interesting find, wasn't he selling info to Microsoft?
doitLP · 17h ago
This is baseless, with no references or citations. What possible incentive would he have for sounding this alarm and urging Congress to act, when he could be using the data he is secretly collecting for his own gain, while all the AI companies are doing it openly for obvious gain?
reaperducer · 16h ago
> Interesting find, wasn't he selling info to Microsoft?

It's not a find. It's an allegation.

HN is supposed to be better than that.

iambateman · 20h ago
If the author sees this… could you go one step further? What policy, specifically, do you recommend?

It seems like having LLM providers not train on user data is a big part of it. But is using traditional ML models to do keyword analysis considered “AI” or “surveillance”?

The author…and this community in general…are much better prepared than most to make full recommendations about what AI surveillance policy should be. We should be super clear, to try to enact good regulation without killing innovation in the process.

yegg · 20h ago
Thanks (author here). I am working on a follow-up post (and likely posts) with specific recommendations.
FollowingTheDao · 18h ago
While I agree with your take on the harms of AI surveillance, I will never agree that AI is beneficial; there is a net negative outcome to using AI. For example: electricity prices, carbon release, hallucinations, cognitive decay... they all outweigh whatever benefit AI brings, which is still not clear.

Like nuclear fission, AI should never have been developed.

fragmede · 17h ago
As well as crypto.
martin-t · 20h ago
LLM providers should only be allowed to train on data in the public domain, or their models and outputs should inherit the license of the training data.

And people should own all data about themselves, all rights reserved.

It's ironic that copyright is the law that protects against this kind of abuse. And this is of course why big "AI" companies are trying to weaken it by arguing that model training is not derivative work.

Or by claiming that writing a prompt in 2 minutes is enough creative work to own copyright of the output despite the model being based on 10^12 hours of human work, give or take a few orders of magnitude.

j45 · 20h ago
Makes sense, have to deal with the cat being out of the bag though.

The groups that didn't restrict their training to public domain content would have an advantage if this is implemented as a rule only going forward, at least for some time.

New models following this could create a gap.

I'm sure competition as has been seen from open-source models will be able to

martin-t · 19h ago
It's simple, the current models and their usage is copyright infringement.

Just because everyone is doing it doesn't meant it's right or legal. Only that a lot of very rich companies deserve to get punished and pay the creators.

j45 · 2h ago
I was referring to the issue of the new models having to train differently from the original ones.

I'm not arguing or debating the legality of what the models have done.

Anthropic just paid a settlement. But they also bought a ton of books and scanned them, which might be more than other models did. Maybe it's a sign of things to come.

beepbooptheory · 20h ago
From TFA:

> That’s why we (at DuckDuckGo) started offering Duck.ai for protected chatbot conversations and optional, anonymous AI-assisted answers in our private search engine. In doing so, we’re demonstrating that privacy-respecting AI services are feasible.

I don't know if it's a great idea, and I do wonder what makes it feasible, but there is a kind of implied recommendation here.

By "killing innovation" do you just mean: "we need to allow these companies to make money in possibly a predatory way, so they have the money to do... something else"? Or what is the precise concern here? What facet needs to be innovated upon?

slt2021 · 20h ago
the law could be as simple as requiring cameras to blur the faces and body silhouettes of all people in frame, prior to any further processing in the cloud, ensuring the privacy of CCTV footage.
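
For faces at least, the building blocks are off-the-shelf. A minimal sketch using OpenCV's bundled face detector (illustrative only; real compliance would need far more robust person/silhouette detection than a Haar cascade):

    import cv2

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def blur_people(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
            roi = frame[y:y+h, x:x+w]
            frame[y:y+h, x:x+w] = cv2.GaussianBlur(roi, (51, 51), 0)
        return frame

    cap = cv2.VideoCapture(0)  # the on-device camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        safe = blur_people(frame)   # only this frame would ever leave the device
        cv2.imshow("outbound feed", safe)
        if cv2.waitKey(1) == 27:    # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()

The point being that the blurring happens on-device, before anything touches the network.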
sacul · 19h ago
I think the biggest problem with chatbots is the constant effort to anthropomorphize them. Even seasoned software developers who know better fall into acting like they are interacting with a human.

But an LLM is not a human, and I think OpenAI and all the others should make it clear that you are NOT talking to a human. Repeatedly.

I think if society were trained to treat AI as NOT human, things would be better.

pwython · 18h ago
I'm not a full-time coder; it's maybe 25% of my job. And I am not one of those people who have conversations with LLMs. But I gotta say, I actually like the occasional banter; it makes coding fun again for me. Like sometimes after Claude or whatever fixes a frustrating bug that took ages to figure out, I'll be like "You son of a bitch, you fixed it! Ok, now do..."

I've been learning a hell of a lot from LLMs, and am doing way more coding these days for fun, even if they are doing most of the heavy lifting.

mejutoco · 19h ago
> I think if society were trained to treat AI as NOT human, things would be better.

Could you elaborate on why? I am curious but there is no argument.

sacul · 18h ago
Yeah, thanks for asking. My reasoning is this:

That chatbot you're interacting with is not your friend. I take it as a fact (assumption? axiom?) that it can never be your friend. A friend is a human - animals, in some sense, can be friends - who has your best interests at heart. But in fact, that chatbot "is" a megacorp whose interests certainly aren't your interests - often, their interests are at odds with your interests.

Google works hard with branding and marketing to make people feel good about using their products. But, at the end of the day, it's reasonably easy to recognize that when you use their products, you are interacting with a megacorp.

Chatbots blur that line, and there is a huge incentive for the megacorps to make me feel like I'm interacting with a safe, trusted "friend" or even mentor. But... I'm not. In the end, it will always be me interacting with Microsoft or OpenAI or Google or whoever.

There are laws, and then there is culture. The laws for AI and surveillance capitalism need to be in place, and we need lawmakers who are informed and who are advocates for the regular people who need to be protected. But we also need to shift culture around technology use. Just like social customs have come in that put guard rails around smartphone usage, we need to establish social customs around AI.

AI is a super helpful tool, but it should never be treated as a human friend. It might trick us into thinking that it's a friend, but it can never be or become one.

fragmede · 17h ago
But why not? If we look past the trappings of "we hate corporations", why not treat it as a friend? Let's say you acquire a free-trade organic GPU and run an ethically trained LLM on it. Why is an expensive funny-shaped rock not allowed to become a friend when a stuffed animal can?
aduwah · 16h ago
The stuffed animal has only sentimental value; it will not have a statistics-based, geopolitically biased opinion that it shares with you to influence your decisions. If you want to see how bad a chatbot can be as a friend, see the recent case where one drove a poor, mentally vulnerable minor to suicide.
mu53 · 15h ago
There is a new term, AI Psychosis.

AI chatbots are not humans, they don't have ethics, they can't be held responsible, they are the product of complex mathematics.

It really takes the bad parts from social media to the next level.

BriggyDwiggs42 · 17h ago
Yeah I’ve loved seeing people call them clankers and stuff for that reason.
olyellybelly · 20h ago
The hype industry around AI is making too much money for governments to do what actually needs to be done about it.
testfrequency · 20h ago
Especially in the US right now where they are doing whatever it takes to be #1 in ~anything, ethical or not. It’s pure bragging rights and power, anything goes - profit is just the byproduct.
mark_l_watson · 16h ago
#1? Likely not, now:

Most of my close friends are non-technical and expect me to be a cheerleader for US AI efforts. They were surprised when I started mentioning the recent Stanford study finding that 80% of US startups are using Chinese models. I would like us to win, but we seem too hype-focused and not focused enough on engineering and practical applications.

BLKNSLVR · 12h ago
First they came for climate science, but I said nothing because I was not a climate scientist.

Then they came for medical science, but I said nothing because I was not a doctor.

Then they came for specialists and subject matter experts, and I said nothing because I was an influencer and wanted the management position.

testfrequency · 15h ago
I agree
hungmung · 19h ago
America is in love with privatized surveillance, it helps get around that pesky Constitution that prohibits unwarranted search and seizure.

"Wipeth thine ass with what is written" should be engraved above the doorway of the National Constitution Center.

antegamisou · 17h ago
Don't be hesitant to suggest that YC, too, loves funding questionable ideas all the time.
whyenot · 16h ago
It seems highly unlikely to me that it will be banned by Congress in the next few years, if ever. So what we really should be asking is how we live in a world of pervasive surveillance, where everything we do and say is being recorded, saved, analyzed, and potentially used to manipulate us.

As I write this, sitting in Peet's Coffee in downtown Los Altos, I count three different cameras recording me, and I'm using their public wifi, which I assume is also being used to track me. That's the world we have now.

jazzyjackson · 7h ago
That’s the world Los Altos café-goers have, there’s lots of worlds out there to choose from.
jjulius · 14h ago
Opt out as much as possible.

If spaces like that irk you, stop going there. Limit your use of the Internet to when you're at home on your own network. Do we truly need to be browsing HN and other sites when we're out of the house?

Ditch the smartphone. Most things that people claim you need a smartphone for "in order to exist in modern society" can also be done via a laptop or a desktop, including banking. You don't need access to everything in the world tucked neatly away in your pocket when you're going grocery shopping, for instance.

Buy physical media so that your viewing habits aren't tracked relentlessly. Find hobbies that get you away from it all (I backpack!).

Fuck off from social media. Support hobby-based forums and small websites that put good faith into not participating in tracking and advertising, if possible.

Limit when you use the internet, and how.

It's hard to escape from it, but we can significantly limit our interactions with it.

trod1234 · 15h ago
When you say pervasive: WiFi and cameras aren't even that pervasive. What actually is pervasive is quite a bit worse than this.

For example,

WiFi signals can uniquely identify every single heartbeat in realtime within a certain range of the AP; multiple linked access points increase this range up to a mile. The radio devices you carry around with you unknowingly beacon at set intervals, tracking your location just like an animal on a tracking collar. This includes the minute RFID chips sewn into your clothing and fabrics.

Phones don't turn off their radios when in airplane mode. Your vehicle has at least 3 different layers that uniquely beacon a set of identifiable characteristics to anyone with a passive radio: the OBD-II uplink, TPMS sensors (one per wheel), and telematics.

Home Depot, in cooperation with Flock, has without disclosure captured your biometrics and tracked your minute movements, and put that up for sale to the highest bidder through subscription-based profiling.

Ultrasonic beacons are emitted from your phone to associate geographically local devices with individual people. All of this is visible to anyone with an SDR, manipulable by anyone with a Flipper Zero, and treated as distinct sources of truth in a layered approach.

All aspects of social interaction with the wider world have now been replaced with a surrogate that runs through a few set centralized points that can be turned off/degraded to drive anyone they wish into poverty with no visible indicator, or alternative.

Imagine you are a job seeker, and the AI social-credit algorithm they've developed (to favor wealthy people on one side, and to torture/make people "better" on the other) incorrectly identifies you as a subversive. They not only degrade your existing services but intermittently isolate all your communications from everyone else through induced failures, following a statistical approach similar to Turing's during WW2.

Imagine the difficulty of finding work in a specialized field you have experience in, when you can never receive callbacks because they are inherently interrupt-driven, and interrupt-driven calls can be jammed without your being able to recognize the jamming. Such communications are vulnerable to erasure.

No system should ever exist whose sole purpose or impact is to prevent an arbitrary target, through interference, from finding legitimate work or otherwise feeding themselves and exercising their existential rights.

In effect, such a system of control silently makes these people into slaves, without recourse or informed disclosure. It fundamentally violates their human rights, and these systems exist.

A government's failure to uphold, in a timely manner, the promises of the social contract and the specifics of the constitution becomes after-the-fact purposeful intent through gross negligence and the failure to uphold constitutional oaths. History has shown repeatedly that, if the civilization survives at all, it reforms itself through violence. That is something no good person wants, but given the narrowing of agency and choice to affect the future, it is the only alternative when the cliff of existential extinction is present (whether people realize that or not).

gcanyon · 15h ago
Is there anyone who has a credible take on how we avoid an effectively zero-privacy future? AI can identify people by their walk, no facial recognition required, and now technology can detect heartbeats through variations in wifi signals. It seems guaranteed we are heading for a future where someone knows everything. The only choice is whether it's a select few, or everyone. The latter seems clearly preferable.
pilingual · 14h ago
Legislation or, my favorite, market forces.
BLKNSLVR · 12h ago
"Market forces" is in serious decline as a leveller at this point in history, especially around products of interest to power or those producing outsized profits.

Or maybe it never was one, and this fact is just becoming more apparent.

EchoReflection · 9h ago
we're fooling ourselves if we think there's "still time". AI surveillance is just too powerful and valuable for companies/governments to NOT use it. It's just like saying "ok let's all agree to not increase our power and capabilities". Nobody thinks humanity would collectively agree to that, and for good reason (unfortunately).
BLKNSLVR · 12h ago
From reading the comments I'm getting vibes similar to Altered Carbon's[0] AI hotels that no one uses.

The opposite of "if you build it they will come".

(The difference being that the AIs in the book were incredibly needy, wanting to please the customer to the point of annoyance, a stark contrast to the current reality of AIs working to appease their parent organisations.)

[0]: https://en.m.wikipedia.org/wiki/Altered_Carbon

zvmaz · 20h ago
The problem is, we have to take companies at their word when it comes to our privacy.
yupyupyups · 19h ago
Wrong. Excessive data collection should be banned.
quectophoton · 18h ago
But this is not excessive, it's Legitimate Interest and absolutely needed to provide a good service /s
gblargg · 12h ago
As long as the first ones to be surveilled are the companies that make it (including their employees) and all politicians who vote for it. We need to be able to access all the data the AI gathers from these groups.
Lerc · 20h ago
I think much of the philosophical discussion pertinent to the issues here has already happened at length in the context of legal, medical, or financial advice.

In essence, there is a general consensus on the conduct expected of trusted advisors. They should act in the interest of their client. Privacy protections exist so that individuals can give their advisors the context required for good advice, without fear of disclosure to others.

I think AI needs recognition as a similarly protected class.

An AI should be considered to be acting for a Client (or some other specifically defined term denoting who it is advising). Any information shared with the AI by the Client should be considered privileged. If the Client shares the information with others, the privilege is lost.

It should be illegal to configure an AI to deliberately act against the interests of its Client. It should be illegal to configure an AI to claim that its Client is someone other than who it is (it may refuse to disclose; it may not misrepresent). Any information shared with an AI misrepresenting itself as the representative of the Client must have protections against disclosure or evidential use. There should be no penalty for refusing to provide information to an AI that does not disclose who its Client is.

I have a bunch of other principles floating around in my head around AI but those are the ones regarding privacy and being able to communicate candidly with an AI.

Some of the others are along the lines of

It should be disclosed (in the nutritional-information style of disclosure) when an AI makes a determination regarding a person. There should be a set of circumstances where, if an AI makes a determination regarding a person, that person is provided with the means to contest the determination.

A lot of these ideas would be good practice even beyond AI, but they are more necessary in the case of AI because of the potential for mass deployment without oversight.

woadwarrior01 · 18h ago
Contrary to their privacy-washing marketing, DuckDuckGo serves cloaked Bing ads URLs with plenty of tracking parameters. Is that sort of surveillance fine?

https://imgur.com/a/Z4cAJU0

rsyring · 20h ago
IMO: make all the laws you want. They generally won't be enforced and, if they are, it will take 5-10 years for a case to make its way through the courts. At best, the fines will be huge and yet account for maybe 10% of the revenue generated by violating the law.

The incentives are all wrong.

I'm fundamentally a capitalist because I don't know another system that will work better. But, there really is just too much concentrated wealth in these orgs.

Our legal and cultural constructs are not designed in a way that such disparity can be put in check. The populace responds by wanting ever more powerful leaders to "make things right" and you get someone like Trump at best and it goes downhill from there.

Make the laws, it will help, a little, maybe.

But I think something more profound needs to happen for these things to be truly fixed. I, admittedly, have no idea what that is.

martin-t · 18h ago
The law really should be "if you cause harm to others you will receive 2x greater harm done to you". And "if you profit from harming others, you will compensate them by 2x of what you gained".

Instead of the current maze of case specific laws.

---

> But I think something more profound needs to happen for these things to be truly fixed. I, admittedly, have no idea what that is.

You know, you're just unwilling to think it because you've been conditioned not to. It's what always happens when inequality (of income, power, etc.) gets too high.

ddq · 15h ago
martin-t · 12h ago
Y'know, I used to say 1.5-2x, now I lean towards 2x but 3x is actually fine by me too. And for any kind of financial crimes, this should also be divided by the probability of getting caught and convicted.
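To sketch that arithmetic (hypothetical numbers; a minimal Python sketch, not anything from the thread):

    # Deterrence arithmetic: for a violation to be unprofitable in
    # expectation, the expected penalty (p_caught * penalty) must exceed
    # the target multiple of the gain, so divide by the catch rate.
    def deterrent_penalty(gain, multiplier=2.0, p_caught=0.1):
        return multiplier * gain / p_caught

    # e.g. $1M gained from fraud with a 10% chance of conviction
    print(deterrent_penalty(1_000_000))  # -> 20000000.0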
rightbyte · 18h ago
> I'm fundamentally a capitalist

Being a capitalist is decided by access to capital, not really by a belief system.

> But, there really is just too much concentrated wealth in these orgs.

Please make up your mind? Should capital self-accumulate and grant power or not?

Portraying capitalism as some sort of force of nature, a system one doesn't "know another system that will work better" than, might be the neoliberals' biggest accomplishment.

lordhumphrey · 18h ago
When it comes to privacy, the worker-drones powering the software world have so far resolutely failed to reflect on their primary role in implementing the dystopian technological landscape we live in today.

Or they have and they simply don't care, or they feel they can't change anything anyway, or the pay-check is enough to soothe any unease. The net result is the same.

Snowden's revelations happened 12 years ago, and there were plenty of what appeared to be well-intentioned articles and discussions in the years that followed. And yet, arguably, things are even worse today.

bubblebeard · 17h ago
While I cannot see a way to effectively stop companies from collecting data from you (aside from avoiding practically everything), that doesn’t mean we should do nothing.

DuckDuckGo isn’t perfect, but I think they do a lot for all our benefit. They have been my search engine of choice for many years and will continue to be.

Shout outs to their amazing team!

jmort · 17h ago
I think a technical solution is necessary, rather than a legal/regulatory one
tantalor · 20h ago
This is an argument against chatbots in general, not just surveillance.
beepbooptheory · 20h ago
That doesn't seem to be the case, because they end up advertising the DuckDuckGo chatbot as a safe alternative.
quectophoton · 17h ago
They mention that they're "demonstrating that privacy-respecting AI services are feasible", knowing their duck.ai is sending the prompts to other AI services, and then in the same paragraph they mention leaks and hacks.

To their credit, their privacy policy says they have agreements on how the upstream services can use that info[1]:

> As noted above, we call model providers on your behalf so your personal information (for example, IP address) is not exposed to them. In addition, we have agreements in place with all model providers that further limit how they can use data from these anonymous requests, including not using Prompts and Outputs to develop or improve their models, as well as deleting all information received once it is no longer necessary to provide Outputs (at most within 30 days, with limited exceptions for safety and legal compliance).

But even assuming the upstream services actually respect the agreement, their own privacy policy implies that your prompts and the responses could still be leaked because they could technically be stored for up to 30 days, or for an unspecified amount of time in the case of the exceptions mentioned.

I mean, it's reasonable and a good start to move in the direction of better privacy, way better than nothing. Just have to keep those details in mind.

[1]: https://duckduckgo.com/duckai/privacy-terms

gdulli · 18h ago
Puffery isn't evidence for or against anything.
ankit219 · 20h ago
> your particular persuasive triggers through chatbot memory features, where they train and fine-tune based on your past conversations

This represents a fundamental misunderstanding of how training works or can work. Memory is more about retrieval. Fine-tuning on those memories would not be useful, given the data is far too minuscule to shift the probability distribution in the right way.

While everyone is for privacy (and that's what makes these arguments hard to refute), this is clearly about using privacy as a way to argue against conversational interfaces in general. Not just that, it's the same playbook of using privacy as a marketing tactic. The argument moves from the highly persuasive nature of chatbots, to the claim that only privacy-preserving chatbots from DDG address this, to being safe from hackers stealing your info on DDG but not elsewhere. And then it asks for regulation.

drnick1 · 15h ago
The dude is against "surveillance," but his blog has dependencies on (makes third-party requests to) Cloudflare, Google, and others. Laughable.
add-sub-mul-div · 19h ago
Cool, but they're shoving AI into their products and trying to profit from the surveillance etc. that went into building that technology so this just comes across as virtue signaling.
aunty_helen · 19h ago
This guy has been known to fold like a bed sheet on principles when it's convenient for him.

> Use our service

Nah.

furyofantares · 18h ago
I'm really impressed with how menacing Facebook feels in the cartoon on the left. And then a massive Google lurking in the background is excellent, although it being a silhouette of The Iron Giant takes a lot away from it for me.

The ChatGPT translation on the right is a total nothingburger, it loses all feeling.

catigula · 17h ago
I think this type of AI doomsday hypothesis rubs me the wrong way because it's almost quaint.

Merely being surveilled and marketed at is a fairly pedestrian application from the rolodex of AI related epistemic horrors.

dyauspitr · 18h ago
You throw enough identifiers into the mix and even low-level employees will be able to get a summary of your entire past in seconds. It's a terrifying world and I feel bad for Gen Z and beyond.
FollowingTheDao · 18h ago
I cannot overstate how afraid I am of the use of AI surveillance. The worst thing is there is nothing you can do about it. It does not matter how private I am online; if the person I am sending things to is not privacy-conscious and, say, uses AI to summarize emails, then I am in the AI database. And then there is the day-to-day data being scraped, like bank records, etc.

I mean, a PARKING LOT in my town is using AI cameras to track and bill people! The people of my town are putting pressure on the owner to get rid of it, but apparently the company is paying him too much money for having it in his lot.

Like the old video says "Don't talk to the Police" [1], but now we have to expand it to say "Don't Do Anything", because everything you do is being fed into a database that can possibly be searched.

[1] https://www.youtube.com/watch?v=d-7o9xYp7eE

swayvil · 19h ago
Surveillance is our overlords' favorite thing. And AI makes it 1000x better. So good luck banning it.

Ultimately it's one of those arms races. The culture that surveills its population most intensely wins.

pessimizer · 20h ago
This is silly, and there's no time. We can't even ban illegal surveillance; i.e., we can write whatever we want into the law, and the law will simply be ignored.

The next politician to come in will retroactively pardon everyone involved, and will create legislation or hand down an executive order that creates a "due process" in order to do the illegal act in the future, making it now a legal act. The people who voted the politician in celebrate their victory over the old evil, lawbreaking politician, who is on a yacht somewhere with one of the billionaires who he really works for. Rinse and repeat.

Eric Holder assured us that "due process" simply refers to any process that they do, and can take place entirely within one's mind.

And we think we can ban somebody from doing something that they can do with a computer connected to a bunch of thick internet pipes, without telling anyone.

That's libs for you. Still believe in the magic of these garbage institutions, even when they're headed by a game show host and wrestling valet who's famous because he was good at getting his name in the NY Daily News and the NY Post 40 years ago. He is no less legitimate than all of you clowns. The only reason Weinberg has a voice is because he's rich, too.

soulofmischief · 19h ago
You're right on all points, but it's easy to come to such a conclusion. The harder, and more rewarding, path is to organize with others and figure out what can be done even if it seems hard or impossible, because standing around observing our rapid decline isn't good for anyone.

If the government is failing, explore writing civic software, providing people protected forms of communication or modern spaces where they can safely organize and learn. Eventually the current generations die, and a new, strongly connected culture has another chance to try to fix things.

This is why so many are balkanizing the internet with age gating; they see the threat of the next few digitally-augmented generations.

throwaway106382 · 19h ago
Unless it’s banned worldwide, by every country, through a binding treaty, this will never work.

Banning it just in the USA leaves you wide open to defeat by China, Russia, etc.

Like it or not, it's a mutually assured destruction arms race.

AI is the new nuclear bomb.

rightbyte · 18h ago
So your hypothesis is that the surveillance state is some sort of necessity that brings a great edge?
martin-t · 18h ago
While real nukes still work.

What bad thing exactly happens if China wins? What does winning even mean? They can't invade because nukes.

Can they manipulate elections? Yes, so we'll do the opposite of the great firewall and block them from the internet. Block their citizens from entering physically, too.

We should be doing this anyway, given China is known to force its citizens to spy for it.

fragmede · 17h ago
If China gets to ASI first, the nukes won't matter.
UncleMeat · 17h ago
"Don't worry, things will be good if the amoral megacorporations and ascendant fascist government in the US gets there first" is not my idea of a good time.
mrob · 15h ago
If any country gets ASI first, nukes won't matter. For an ASI, most tasks are more reliably accomplished with all biological life dead. Making a safe ASI is vastly more difficult than the default option of letting it kill everything, so the default will be attempted first.
martin-t · 14h ago
There are two interpretations of the parent comment:

1) China will get ASI and use it to beat everyone else (militarily or economically). In my reply, I argue we shouldn't race China because even if ASI is achieved and China gets it first, there's nothing they could do quickly enough that we wouldn't be able to build ASI second, or nuke them if we couldn't catch up and it became clear they meant to become a global dictatorship.

2) China will get ASI, it'll go out of control, and it'll kill everyone. In that case, I argue even more strongly that we shouldn't race China but instead de-escalate and stop the race.

BTW, even in the second case, it would be very hard for the ASI to kill everyone quickly enough, especially those on nuclear submarines. Computers are much more vulnerable to EMPs than humans, so a series of nuclear explosions like Starfish Prime could be used to destroy all or most of its computers and give humans a fighting chance.

martin-t · 17h ago
ASI doesn't change physics.

Perun has a very good explanation why defending against nukes is impossible to do economically compared to just having more nukes and mutually assured destruction: https://www.youtube.com/watch?v=CpFhNXecrb4

fragmede · 17h ago
Who said anything about economically?
martin-t · 14h ago
Any potential ASI would still be limited by raw resources, and even if you assume all work would be done by robots, somebody would still have to build those robots first. Such a ramp-up in production would give everyone else time to build their own ASIs or strike first.