An appeal to companies doing AI

58 points by todsacerdoti | 42 comments | 5/4/2025, 11:28:36 PM | soatok.blog ↗


mattgreenrocks · 4h ago
> Distressingly, the people most passionate about AI often express a not-so-subtle disdain for humanity.

I have noticed something similar: those who are ultra-passionate about AI are often Extremely Online, and it seems like their values tilt too far away from humanity for my taste. The use of AI is treated almost as an end in and of itself, which perpetuates a maximalist AI vision. This is also probably why they give off this weird vibe of having their personality outsourced.

akomtu · 3h ago
AI is the product of the most advanced scientific minds stripped of ethics. What else can they create?
Mbwagava · 25m ago
I can't read the tone of this post, but "AI", as a marketing term for the last ~45 years, has had little to do with rigor. These are workers producing profit, not scientists. Ethics has nothing to do with it. Scientists deal with empiricism, not sales.
Handprint4469 · 4h ago
> But there is a feedback loop: If you change the incentive structures, people’s behaviors will certainly change, but subsequently so, too, will those incentive structures.

This is a good point, and somewhat subtle too. Something that worries me is the acceleration of the feedback loop. The Internet, social media, smartphones, and now generative AI are all things that changed how information is generated, consumed and distributed, and changing that affects the incentive structures and behaviors of the people interacting with that information.

But information spreads ever faster, in greater volume, and with more noise, and so the incentive landscape keeps shifting continuously to keep up, without giving people time to adapt and develop immunity against the viral, parasitic memes that each landscape births.

And so the (meta)game keeps changing under our feet, increasingly accelerating towards chaos or, more worryingly, meta-stable ideologies that can survive the continuous bombardment of an adversarial memetic environment. I say worryingly, because most of those ideologies have to be, by definition, totalizing and highly hostile to anything outside of them.

So yeah, interesting times.

karategeek6 · 1h ago
This post sounds too much like an SCP for my comfort. Just administer the amnestics now.
danpalmer · 4h ago
Tech companies, at least those that weren't founded on AI, have a significant number of people internally who hold exactly the opinions in this blog post.

Outside of the tech ecosystem, most people I encounter either don't care about AI, or are vaguely positive (and using it to write emails/etc). There are exceptions, writers and artists for example, but they're in the minority. To be clear, I'm also not seeing "normal" people raving about it much either, but most people really do not share the opinion of this post.

I realise it may not seem like it, but most big tech companies are not designing for high-earning Valley software engineers, because they are not a big market. They're designing for the world, and the world hasn't made its mind up about AI yet.

baobun · 3h ago
Counter-point: Dark patterns are also pervasive in more niche applications, including B2Bs targeting tech startups. Some of it is culturally endemic.

Obsession with one-size-fits-all, metrics-driven development, and UX exclusively aiming for the lowest common denominator are also part of this problematic incentive structure you allude to.

> I'm also not seeing "normal" people raving about it much either, but most people really do not share the opinion of this post.

I don't think the "we" here was intended to include the general population.

exiguus · 3h ago
> What I found most interesting is that the multiple choice options included a lot of “I found the Terminator movies scary”, “I read too much Ray Kurzweil”, and/or “I am or was a SF Bay Area rationalist” undertones, but actual ethical objections were strangely absent.

After reading Sarah Wynn-Williams' book and seeing the current state of democracy in the USA and some European countries (apart from the fact that democracies are too slow to regulate anyway), I see little hope for the future.

Mbwagava · 4h ago
It's not even clear "superintelligence" is a meaningful concept. It could be that the broad conflicts in our society are largely intractable and arbitrary, in which case "superintelligence" can do little but complain about the contradictions it's passed (something I suspect many on this forum can understand).

Perhaps the calculator is as close as we'll ever get to "superintelligence".

kevinsync · 4h ago
I'm very much into AI, in terms of judicious, sprinkled usage across disciplines -- but I'm a computer guy. I produce and support a lot of code, art, audio, video, social, and multimedia content, and I've incorporated bits and pieces of AI here and there (LLMs, diffusion, etc) to achieve a vision or get work done. My main issue with it, like anything technical, is that the mainstream interpretation / instinct of the average bozo is that the machine will do the work for them, and nothing could be further from the truth.

There's a lot of anti-AI sentiment from the mainstream, but I'm noticing the pro-AI mainstream sentiment comes from people who are either technically-minded grifters looking to deploy automated solutions to snake $$$ from people's pockets, or lazy / disengaged worker drones who just want the computer to do their middling work for them. And it will, up to a point, where your "work" plateaus into a mess of predictable, non-novel banality lol -- unless you invest the time to master the tool (which, for what it's worth, isn't like introducing the toothed saw; it's more like a Dremel or a Sawzall, tools with specific purposes that casual users won't ever master).

Buying a digital ELPH in 2001 didn't make you a photographer unless you were a photographer with an open mind. Squarespace doesn't make you a web designer unless you've studied the system and understand the tradeoffs. AWS doesn't solve infrastructure unless you learn how to architect a solution that works for your use-case. AI, by the same property, is shit until you research how it works, experiment with solutions, and find novel workflows to get something out the other end that's new, fresh and exciting.

Companies just rolling "AI" into their products aren't gonna win over customers and users unless they use the tool to deliver something of exceedingly-needed value. If it's a short-term grift or "hail mary", good luck! You'll need it!

dangus · 3h ago
This is a great way to put it.

Some more annoying personas in the AI space:

- AI CEOs lying to investors and claiming their AIs will one day be impossibly smart.

- CEOs at companies that consume AI products pushing AI onto their employees as a quick fix, thinking they'll get magic productivity gains and be able to cut staff if they just force employees to use it (I have even seen some real examples of companies adding AI usage to performance reviews)

- Companies claiming to be AI-first without launching any significant AI-powered product

But I do think it’s a very measured way to read the situation to refrain from joining the knee-jerk AI-hater camp. That sentiment is basically just a counter-culture reaction against AI, especially since it seems to most negatively impact creatives such as illustrators.

I think that some professionals who refrain from leveraging it from an ethical standpoint will legitimately fall behind their labor competition.

diggernet · 2h ago
> (I have even seen some real examples of companies adding AI usage to performance reviews)

Sad to say, I can vouch for this.

some_furry · 3h ago
> That sentiment is basically just a counter-culture reaction against AI, especially since it seems to most negatively impact creatives such as illustrators.

Worth considering: the blog in question that hosts this article is a furry blog. The furry community is largely creatives.

dangus · 2h ago
And I totally understand why counter cultural types have latched on to disliking AI especially since some of its biggest proponents are incredibly corporate and, well, lame as fuck.

I also think that it’s a technology that didn't develop with some counterculture chops like many earlier technology innovations.

E.g., we could think about something like crypto that had an uphill battle against the establishment and was created with some level of ideological independence.

There are even some more corporate disruptions that plain and simple had better marketing behind them, like how Airbnb and Uber had widely disliked incumbents to “beat” in the market. Early Uber or Airbnb users were basically “beating the system.” At least, that’s how a lot of people perceived them, even if that didn’t turn out to be the reality.

In contrast, AI has felt much more like a corporate circlejerk among the wealthiest super-billionaires. There hasn’t even been the slightest facade of genuine do-goodery in this technology. Some wildly well-funded companies led by sociopathic robot-human CEOs made a plagiarism machine that my boss now insists I use for all my work.

I think that usually the people in the middle of the two extremes have the right thought process going on. It’s clear to me that AI is a great tool that isn’t going away, but perhaps its most passionate champions and detractors both need to settle down.

some_furry · 3h ago
This is an incredibly level-headed and good take.
streptomycin · 4h ago
> I do not actually believe “The Singularity” is a realistic threat due to every system that exhibits exponential growth encountering carrying capacities, which converts it into an S-curve.

Depending on the parameters of the curve, an S-curve may be effectively the same as an exponential curve. For instance if the IQ of AIs reach a plateau of 500 rather than exponentially increasing to infinity, we may not be around to see the plateau.
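A quick numerical sketch of that point (with made-up growth rate and carrying capacity, purely for illustration): early on, a logistic S-curve tracks pure exponential growth almost exactly, and only diverges once it nears its carrying capacity.

```python
import math

def exponential(t, r=1.0):
    # Unbounded exponential growth: x(t) = e^(r*t)
    return math.exp(r * t)

def logistic(t, r=1.0, K=1e6):
    # Logistic (S-curve) growth with carrying capacity K and x(0) = 1
    return K / (1 + (K - 1) * math.exp(-r * t))

# Early on the two curves are nearly identical; the gap only opens
# up as the logistic approaches its carrying capacity.
for t in [1, 5, 10, 13]:
    e, s = exponential(t), logistic(t)
    print(f"t={t:2d}  exp={e:12.1f}  s-curve={s:12.1f}  ratio={s / e:.3f}")
```

If the plateau K sits far above the level at which the curve becomes dangerous, the distinction between "S-curve" and "exponential" is academic.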

baobun · 3h ago
I don't see why an LLM (not) maxing out on IQ would be an existential concern.
some_furry · 4h ago
> For instance if the IQ of AIs reach a plateau of 500 rather than exponentially increasing to infinity, we may not be around to see the plateau.

If the premise for this is, "because we might not survive each other," rather than the AI being specifically an extinction event for humanity, then I think we agree.

gmuslera · 4h ago
He is misplacing his concerns. He, and the ones like him, are who make up the "we" there.

Oh, in the US, intelligence agencies can't do dragnet surveillance of US citizens. But it seems to be perfectly OK that they do it to the rest of the world.

It enables people to make non-consensual pornography. It may have democratized the realistic-video part, but the fan-fiction part was already available. Is the problem the democratization, or the capability itself in the hands of whoever has enough budget or a sponsored agenda? At some point you have to cut somewhere and define where the realistic line is.

Same goes for misinformation: is what's wrong the democratization, and not that people with enough resources could already do it?

About displacing industries: it depends, but some of those industries were already a big abuse to people. Some will adapt. Some will become obsolete, as happened with the industries they themselves replaced in their own turn.

AI is a tool. And like any tool, it empowers the people using it, for good and bad. It is the people you keep giving power to who are the main ones misusing it. Those are the elephants in the room that you refuse to see.

charleslmunger · 4h ago
>Oh, in US, for US citizens, intelligence agencies can't do dragnet surveillance.

They can, and do? What else would you call https://www.theguardian.com/world/2013/jun/06/nsa-phone-reco...

That's just one example, Snowden published tons of this stuff.

some_furry · 4h ago
Conveniently the rest of the sentence from the article was "of encrypted messages".

The encrypted part is kind of important. I blog about the topic a lot!

baobun · 4h ago
> AI is a tool. And like any tool, it empowers the people using it, for good and bad. It is the people you keep giving power to who are the main ones misusing it. Those are the elephants in the room that you refuse to see.

Try reading the post again? It's in there.

bugahbo · 4h ago
Well, since you claim the article checks all the boxes, no point reading it.

More impotent rage internet spam, zero direct call to action politically. Just circling existential dread in different words, I’ll bet.

The social gossip changed and no one knows which way is up despite the sky being right there still?

tomlockwood · 4h ago
I've tolerated AI autocomplete in VSCode but I am a bee's dick away from turning it off, because it so often generates a huge chunk of code that is ALMOST correct, and determining where it is wrong is as much a chore as writing it would have been. It's like I've got a junior-coder sidekick who doesn't take any feedback. Not great.
pixelready · 4h ago
Ahead of the inevitable Luddite comments: here’s your daily reminder that the Luddites were not just technophobes, but were in fact artisans concerned about the leverage technology was providing capital to suppress worker rights while eroding the quality of the products. This tension should resonate with us.

https://www.history.com/articles/industrial-revolution-luddi...

The AI conversation tends to split folks along similar “passionate engineer craftsman” vs. “temporarily embarrassed billionaire” lines.

scotty79 · 4h ago
My only concern about AI is that it will get stuck again for another few decades and I'll become elderly before I see what's next.

20 years ago my dream was to see a nimble robot running up the mountain path live. I hope this event is not another 20 years away. Future comes so horribly slowly.

beefnugs · 2h ago
Yes, there was something truly special in the movie Mother, when that heavy-ass robot was thumping down the hallway. Make sure to watch the Boston Dynamics outtakes where they break an ankle, to alleviate some existential tension afterward.
dangus · 3h ago
I actually think that many of the concerns the author doesn’t have are the more concerning ones and some of the concerns they do have are more likely to be not much of a big deal.

The privacy concern I find to be particularly overstated. This is an identical concern to ones that have existed before AI ever entered the fray. Anytime you send data to a system that someone else controls you run those exact same risks. I also think there’s an overstated fear that an app focused on private data (something similar to Signal) would just add some kind of AI functionality one day out of the blue and suddenly ship your data off to a hive mind.

Any app that is willing to cross that line already has done so (e.g., Facebook).

It also seems to be technologically simple to perform a lot of AI tasks without compromising privacy. E.g., chips with local-first AI computational ability are reaching consumer level devices. Even the much-maligned Windows Recall feature specifically emphasizes how it never sends information to Microsoft servers nor processes data in the cloud.

baobun · 2h ago
On privacy, the shift in incentives has changed the game. "More is better" is the new mantra, and the perceived value of gathering and labeling arbitrary organic data has gone up significantly. This offsets or outright obliterates the liability aspect of data-hoarding. That will have privacy implications for the individuals referenced in some of those datasets.
forrestthewoods · 4h ago
> why we dislike AI

For some completely unspecified group of “we”. At least the post itself says “why I personally dislike AI”.

IAmGraydon · 4h ago
Here’s the thing…as long as we live in a capitalist society, money will be put ahead of people, because the very essence of capitalism is to turn a higher profit, and anything that impedes that is evolved away. So the question to ask is whether we will remain a capitalist society. If you believe the answer is yes, the only thing to do is adapt, whether you like it or not. Resisting the change will only put you behind the curve. I’m not proclaiming a stance on AI here, but I think it’s prudent to be a realist.
cyanydeez · 4h ago
We're in the massively parallel "Motivated Reasoning Era".
djoldman · 4h ago
> I’m concerned about the kind of antisocial behaviors that AI will enable.

> Coordinated inauthentic behavior

> Misinformation

> Nonconsensual pornography

> Displacing entire industries without a viable replacement for their income

The first three of these existed and occurred before the arrival of AI. Perhaps AI makes doing the first 3 easier. If there are not laws governing the first three post-AI, do we need laws governing them? If so, what do those look like?

As for "displacing entire industries without a viable replacement for their income" - yea, as a civilization we need to retrain and reeducate those whose livelihoods are displaced by automation. This too has been true forever...

binary132 · 1h ago
Artists being replaced by idiot machines that can do approximately 15% as good of a job but almost for free is very sad and bad.
cryptozeus · 4h ago
Disabling AI at this point is like using your app in offline mode. Not a lot of people want to build for that.
biophysboy · 4h ago
> If you do not understand people, you will fail to understand the harms that AI will unleash on the world.

Case in point: the official white house social media account regularly posts low-effort AI meme propaganda.

minimaxir · 3h ago
The White House very much understands the harm they are unleashing. That's the point.
biophysboy · 3h ago
Yes, obviously the White House understands - I am saying that AI companies failed to predict political consequences (or, perhaps more likely, chose to look the other way)