Why I don't ride the AI Hype Train

76 mertbio 86 7/9/2025, 11:29:46 AM mertbulan.com ↗


JKCalhoun · 6h ago
> Ever since ChatGPT came out, the tech world has jumped on a new hype train—just like it did before with crypto, NFTs, and the metaverse.

Personally, I eschewed all of those — except for LLMs. I'm convinced this one's for real. People, though, can use "hype" to mean a number of things.

AGI by the end of the year? Hype.

Decimation of white-collar jobs? Hype.

Fundamentally new paradigm and tech the world will have to adapt to? Not hype at all.

> Eventually, these companies tried something new: agents.

Yeah, that one's still on the hype shelf for me.

> This floods social media and websites with low-quality, copy-paste content.

No! Welp, there goes social media. ;-)

> ChatGPT has around 500 million weekly active users, but only around 20 million of them actually pay for a subscription. That means the vast majority of people think it’s not worth $20 a month.

You could say the same for YouTube (and likely wildly surpass that scenario).

When you offer a free version, don't be surprised if most users (meekly raises hand) save their pocket money for something else. These are early days and people are sussing out what the thing can do for them.

> To me, Apple stands out. They aren’t trying to build a chatbot that knows everything. Instead, they’re focused on using AI as a tool to help people interact with their apps and data in smarter ways.

That feels like Apple in gap-filling mode: trying to show the world they're doing something while smart people are trying to figure out what Apple really ought to be doing.

They have their own chip dies — could they perhaps do a dedicated LLM client architecture that allows it to run on-device? It makes you wonder what Jony Ive (and investors) could possibly be thinking when Apple could easily pivot and own the aiPhone market.

Waiting for my own aiPhone someday — with encrypted history saved to my cloud account. Wondering what it will be like for future generations who will have had a personal confidant for decades — since they were teenagers…

JohnMakin · 6h ago
This is the appropriate view, but the market's gone all in on hype, so we must deal with the consequences when the hype doesn't deliver. It was the same for me with the blockchain/web3 hype craze - I saw interesting, game-changing applications but nothing anywhere near what the market was going nuts for. The funniest thing is that hype train seamlessly transitioned to "AGI" without batting an eye or an ounce of shame.
mercer · 4h ago
I'm somewhat skeptical, but also honestly curious: what game-changing applications came out of the blockchain/web3 hype craze?
JohnMakin · 21m ago
The area I was briefly interested in was in the fintech/lending space. Not an expert though. I saw some cool ideas out of it at that time and interviewed several rounds with a company in that space.
bigyabai · 1h ago
Drug and firearm escrow services.
tim333 · 4h ago
One of the few interesting ones is Polymarket, the betting platform.
voxleone · 5h ago
>Fundamentally new paradigm and tech the world will have to adapt to? Not hype at all.

I generally agree with your statements. But I personally tend to think of the various ML flavors as only a natural evolution of the same Turing/von Neumann paradigms. Neural networks simulate aspects of cognition but don't redefine the computation model. They are vectorized functions, optimized using classical gradient descent on finite machines.

Training and inference pipelines are composed of matrix multiplications, activation functions, and classical control flow—all fully describable by conventional programming languages and Turing Machines. AI, no matter how sophisticated, does not violate or transcend this model. In fact, LLMs like ChatGPT are fully emulatable by Turing machines given sufficient memory and time.
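To make that concrete, here's a minimal sketch (toy numbers, nothing from a real model) of the kind of operation the comment describes — a single feed-forward layer is just a matrix multiply, a bias add, and a nonlinearity, all perfectly ordinary computation:

```python
import numpy as np

# A single feed-forward layer: a matrix multiply, a bias add, and a
# nonlinearity -- all conventional, Turing-computable operations.
def layer(x, W, b):
    return np.maximum(0, x @ W + b)  # ReLU activation

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))  # toy input vector
W = rng.normal(size=(4, 3))  # toy weight matrix
b = np.zeros(3)

y = layer(x, W, b)
print(y.shape)  # (1, 3)
```

Stacking many of these (plus attention, which is also just matrix products and softmaxes) gets you the whole inference pipeline — nothing outside the classical model of computation.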

(*) Not playing the curmudgeon here, mind you, only trying to keep the perspective, as hype around "AI" often blurs the distinction between paradigm and application.

JKCalhoun · 55m ago
You're right. I meant "new paradigm" though more in regard to its societal adoption — not that the number crunching going on in the GPUs was some new tech.
edanm · 3h ago
Umm, it's a pretty fundamental theorem (that's assumed to be true) that all computation is equivalent to Turing Machines. Unless we're very wrong about fundamentals of CS here, we'll never see anything that is more powerful, computationally speaking, than a Turing machine.
boesboes · 5h ago
This. AI replacing all software engineers? Hype. Me, an SE with 20 years of experience, using AI to do the heavy lifting of refactoring 2500 code points and corresponding tests, so that I can focus on what the correct solution is? That works very nicely. Or just last week, I managed to do some research into how (we think) the brain works, and draft a position paper based on the research and my thoughts in a day or two, instead of weeks. I don't care about AGI, or whether LLMs might approach that, or whether LLMs 'are just autocompletion'. It lets me focus on the things that matter more in my work. It still has a long way to go, but Claude Code is pretty decent at doing menial jobs for me already.
kevindamm · 5h ago
It's a power saw, or screw gun, but not a Tunnel Boring Machine. When the task is big enough, it goes a little off the rails. With guidance, and persistence, you can churn out a lot of code with barely more than supervisory effort.

I somewhat agree with both your and GP's perspectives. It's getting more hype than it has earned, along with the promise that this path leads to AGI despite 10× model sizes yielding diminishing returns on performance. But it's not vaporware; it can produce fluent text faster and cheaper than humans, so it doesn't go in the "why are they buying?" bin with NFTs.

The questions getting lost in the middle are "do we need to churn out even more code and trust that it's been reviewed?" and "is using this going to semi-permanently disable most of the knowledge workers?"

If there's even a chance that my executive functions and related mental faculties are degraded by using LLMs then I would rather not. I try it a little and keep a finger on the pulse of the community that are going all-in on it. If it does transform into something that's 99% accurate and with a knob letting me dictate volume of output, I'll put more effort into learning how to hold it. And hopefully by then we'll be able to confirm or refute any of the long-term side effects.

insane_dreamer · 3h ago
Agreed. But the OP’s point is that LLM are not being presented as a productivity tool for developers, in the vein of an advanced IDE. They’re being presented as the “solution to everything”.
franze · 5h ago
> > Eventually, these companies tried something new: agents. Yeah, that one's still on the hype shelf for me.

Just install Claude Code in Yolo and Sudo (sudo pwd in claude.md) on a server or laptop and just interact with the computer through it.

for me this changed everything

rsynnott · 4h ago
> You could say the same for YouTube (and likely wildly surpass that scenario).

Well, I mean, sure, but no one's claiming that YouTube's going to fundamentally change the world.

pydry · 5h ago
This hype cycle is definitely more like the original hype cycle of the internet in the late 90s - back when "the high street was going to disappear", "we'd all be hanging out in VR" or that it would "democratize" well, anything.

Clearly a lot did change but most of the bolder predictions still ended up not coming true.

falcor84 · 4h ago
Well, I think that the vast majority of those predictions about the internet did come true, with my main evidence being the extent to which the covid lockdowns didn't harm the productivity of the typical company - I think that the fact that all the operations just moved online would have been unimaginable in the early 90s, except by avid sci-fi readers.
pydry · 3h ago
I would have loved to have seen somebody predict mass wfh only being possible within the confines of a pandemic and being shut down shortly after.

Or that, minus a few exceptions like Blockbuster, the majority of high street/mall stores would still exist in spite of online shopping being given more favourable tax treatment.

Or that democratic institutions would end up being eroded by the toxic spam that popped up when the barrier for entry for publishing was lowered.

JKCalhoun · 2h ago
Yeah, not enough pragmatic cynics. Instead, we probably got the tin-foil hat variety.
JKCalhoun · 5h ago
That's probably always the case.

So often that I'm somewhat surprised I keep reading articles that say, essentially, "Don't believe market-speak from someone who is trying to sell you something."

Yeah, you should never do that.

baq · 6h ago
Foundation models are basically compressed databases of a big chunk (75%?) of human knowledge and the query language is English (...and many others) and you're saying it's a bubble to be ignored.

Meanwhile, model providers are serving millions if not billions of tokens daily.

Don't want to say this is a Dropbox-comment-class blog post, but certainly it... ignores something.

smcl · 5h ago
I think what they're saying is that the whole "this is going to solve everything" hype around LLMs is a bubble. I think it's undeniable that LLM technology will persist in some form, as it definitely has uses. But I don't think it will be the trillion-dollar industry that is being touted, and I don't think many of these companies will survive on their own without being swallowed up by some bigger FAANG entity (heavily dependent, as they are, on VC funding).
tmaly · 59m ago
There is always money to be made with hype. But the amount of investment being made in AI is beyond reasonable in my mind. They will not see the ROI. We have good-enough AI right now to increase productivity. I don't see us getting to AGI with the current architecture, barring some new type of breakthrough.
candiddevmike · 5h ago
IMO, LLMs are a neat technical phenomenon that were released to the public too soon without any regard to their shortcomings or the impact they would have on society.
pornel · 5h ago
It's funny that when OpenAI developed GPT-2, they warned it was going to be disruptive. But the warnings were largely dismissed, because GPT-2 was way too dumb to be taken as a threat.
akkad33 · 5h ago
It's a way to get free training data
_heimdall · 5h ago
While I agree the model is a huge compressed database of written human text, I think it's a stretch to call much of what was scraped off the internet knowledge, and I don't personally see it as using language as a query language.

I expect a query language to be deterministic, and I expect the other end of the query to only return data that actually exists. LLMs are neither of those, so to me they are impressive natural language engines, but they aren't really a tool for querying human knowledge.

singingfish · 6h ago
Yeah, they're useful, but not that useful. It helps to think of them as "language extrusion confabulation machines" to understand the actual limitations.
Aldipower · 5h ago
And the article starts with exactly this: the big chunk consists of ~50% stolen data!
stockerta · 6h ago
And those same providers are burning how much money each day with this shit?
monsieurbanana · 5h ago
Because they want to control as much of the market as possible; everyone and their dog is using LLMs for work and mail and groceries.

That doesn't change their usefulness: if tomorrow they all increased the price 10x, it would remain useful for many use cases. Not to mention that in a year or two the costs might go down an order of magnitude for the same accuracy.

Applejinx · 6h ago
And carbon. Not just money. There are more externalities to it than just burning money.
jsnell · 4h ago
This genre of writing is quite hard to engage with productively. It is really long due to throwing everything at the wall. Doesn't matter if the argument is obsolete, a lie, unprovable, or actually a legit and well-researched complaint. Into the article it goes.

For people who want to believe the author, this flooding the zone approach is catnip. They get the confirmation they need. The people who don't want to believe the author will just dismiss the entire article after the first argument they recognize as invalid.

And for the people in the middle, it's impossible to have a good discussion about the article and get a better understanding because there are too many unrelated arguments packed together. Trying to rebut or support just one of them just feels like a waste of time.

If you're trying to write an anti-AI article, it really would be way more effective to pick the 1-2 strongest points (whatever they are).

s_ting765 · 3h ago
I was frustrated by the author's article but for reasons different than yours. Their title is clickbait and implies a personal viewpoint. I kept reading to see any personal reasons for why they dislike AI. Everything they mention you can google in 5 mins and read for yourself what other people think about AI.

The only personal reason comes at the end where he says he would never log in to use an AI product. The entire rant could have been a tweet.

JCM9 · 6h ago
The “we don’t need as many workers because AI” line from CEOs isn’t sticking. It might work for a quarter or two but sooner or later these companies must answer hard questions on why they’re not getting measurable returns from massive investments in AI. That day is coming and it’s going to be really ugly.

Same for many of these "AI companies" that are burning through cash in a race to the bottom towards a commodity with no real prospects for a sustainable business model. The tech is cool, and can be useful, but the business aspects of all this are a forest full of dry timber waiting for one strike of lightning to burn the whole thing to the ground.

conception · 5h ago
I think the thing is that it’s not a massive investment to displace a worker. You can spend ten grand a month on AI tools before you start running into the cost of a single low end tech employee. That’s a lot of AI tools.
JCM9 · 5h ago
The problem with this picture though is you're spending 10 grand a month on a product someone else is losing 10 grand a month selling you. It's beginning to look like a house of cards. The finances of many of these companies are on shaky ground, and they claim "profit" without accounting for the true costs of what they're doing (e.g. not counting model training in the COGS, when the useful life of a model in 2025 is very limited). When investors realize the returns won't be there and shut off the free cash taps, the party will be over.
spacemadness · 2h ago
The problem generally starts with “we need our own nuclear power plants to power this tech.”
padjo · 5h ago
But it’s a bit like saying you can buy an awful lot of hammers for the price of a carpenter.
bbarnett · 5h ago
You must have experienced the last few decades differently than I have.

I've seen crash after crash, all softened with taxpayer bailouts, and economic recovery within a couple of years. Often, to "booming" economies, which just means "compared to during the crash".

Should your crash come to pass, it will be another part of the news cycle, 4% to 8% of people will be out of work for a few months to a year, and nothing will happen to the companies responsible.

In fact, they'll get bonuses for pre-crash performance.

This is how history has played out for decades.

matsemann · 6h ago
I don't have anything particular against using AI tools in my workday in the future. I just don't want to spend my time experimenting with them now, and have no need to be on the bleeding edge. If in a year things are stable and useful, I can just learn whatever is in vogue then. You won't miss out by not jumping on the train now. Most tools I use were made long before I started my career as a software engineer, and still I got up to speed. There is no big moat that experienced developers won't be able to quickly get over.
simonw · 5h ago
Author's conclusion here is pretty much where I'm at too:

> In the end, I see AI for what it is: a powerful but limited tool—not a revolution, not a replacement for human thinking, and definitely not something worth worshipping.

How much of a "revolution" it is depends on your field though. I think computer programming is still the field that is most impacted by potential productivity improvements from these tools.

tim333 · 4h ago
There's a difference between current AI which is not a revolution and future AI which does have the potential for something like the industrial revolution.
insane_dreamer · 3h ago
Except that future AI has been predicted for a long time now. Are we really much closer to AGI than we were 10 years ago?
akkad33 · 5h ago
To me it's just a new way of representing information that makes information on the web easily accessible to people. So in a way it's a breakthrough, just like Google was a breakthrough. Google's PageRank represented the web as a graph, and you could easily go to related pages. GPTs embed concepts in a high-dimensional space, and there are links between them that make it easy to query this knowledge representation. So I don't think it will replace humans so much as disrupt the knowledge industry, just like Google did.
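A toy sketch of that embedding picture — the vectors here are invented for illustration, not taken from any real model: related concepts sit near each other in the space, and "querying" amounts to measuring distance:

```python
import numpy as np

# Toy "embeddings": each concept is a point in a high-dimensional space.
# These three vectors are made up for illustration; real models use
# hundreds or thousands of dimensions learned from data.
embeddings = {
    "king":  np.array([0.90, 0.80, 0.10]),
    "queen": np.array([0.88, 0.82, 0.12]),
    "bread": np.array([0.10, 0.05, 0.95]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means "pointing the same way" (related),
    # values near 0 mean unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["king"], embeddings["queen"]))  # close to 1.0
print(cosine(embeddings["king"], embeddings["bread"]))  # much smaller
```

That's the loose sense in which the comment's "links between concepts" work: nearness in the space, rather than the explicit hyperlink edges PageRank walks.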
teddyh · 6h ago
Someday, somebody might use "AI" to simulate the viewing of ads, and since AI might perfectly simulate viewing an ad but will not spend money as a result, the whole ad economy will collapse.
lompad · 5h ago
So what you're basically saying is, we need AI-AdNauseam as quickly as possible.

Thank you for the idea.

Considering how many free offerings there are, this might actually just work.

guappa · 5h ago
The ads industry is already way, way overvalued.
bbarnett · 5h ago
You're my hero (gets to work diligently on this)
spacemadness · 2h ago
“We might raise a generation that doesn’t know how to write essays, can’t read and understand long articles, can’t communicate well, and struggles with real relationships.”

I think we entered this reality a while ago from the rampant use of social media, dating apps, and short form content.

bshepard · 6h ago
On a more conceptual angle, Landgrebe and Smith's "Why Machines Will Never Rule the World" clarifies the limits of computation w/r/t complex dynamic systems.

Here is the core argument: "an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible. It offers two specific reasons for this claim: 1. Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system. 2. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer."

david-gpu · 6h ago
A denser-than-air vehicle that could equal or exceed bird flight --sometimes called an airplane-- is for mathematical reasons impossible, for two specific reasons:

1. Bird flight is a capability of a complex dynamic system -- the bird's musculoskeletal system and its brain.

2. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a machine.

dbspin · 5h ago
What would prevent an embodied AI - i.e.: some kind of deep learning system operating a robot full of sensors from representing such a 'complex dynamic system'?

And if the answer is nothing - what would prevent such a dynamic system from being emulated? If the answer is real-time data, this can be fed into the 'world model' of the emulation in numerous ways.

edg5000 · 6h ago
This is what I believed intuitively, but the reality is more nuanced. I'd rephrase it to: "We can't create something better than humans". A tractor can outplow a human easily just like an LLM can outwork a human on menial, easily verifiable tasks.
crthpl · 5h ago
If humans were too complicated to be mathematically modelled, then we wouldn't exist.
snapcaster · 5h ago
Why would someone believe that to be true?
siwatanejo · 6h ago
That core argument reads like a word salad, no offense.
jiggawatts · 5h ago
This kind of argument is nonsense. It boils down to: "This previously solved problem is unsolvable."

The previous solution is a biological brain, and the future solutions are mechanical, but that doesn't matter. Even if it did, such arguments involve little more than waving one's hands about and claiming that there's some poorly specified fundamental difference.

There isn't.

bshepard · 5h ago
I would advise people to read the book and grasp the argument before 'rebutting' it.
ozim · 5h ago
Author is a software dev, and still 90% of what he understands of it is "there is a box, you ask the smart machine silly questions, and it answers like a human does or can generate a funny cat".

There is more to it, but it requires more effort to learn and see for yourself instead of repeating the laundry list of "why is it bad" things posted by all the other AI doomers.

franze · 5h ago
As someone who is running Claude Code as admin (with the sudo password in claude.md) on my laptop and on a server - and it completely changed how I interact with computers - I honestly cannot fathom the ongoing dismissal of AI as the one thing that will change everybody's future.

The jobs-killed-by-AI story is, for me, mostly zero-sum thinking in action. Also, in these arguments all AI companies seem to be just one AI company instead of a (currently) healthy competition, and apparently all users are duped because they suddenly are up to 100x more productive.

And yes, the energy needed is horrible (is AI now above Bitcoin or still below?).

The anecdotes about hallucinations are true, but what about the success stories? E.g. faster vacation research, or the fact that now everybody with ChatGPT has access to a world-class team of doctors (where previously there was often none).

Yes, criticism of AI and AI companies is good and necessary. That said: even if all technological progress stopped right now, it is the most meaningful technological change I have experienced in the 25 years I have been working in IT/web stuff.

lvl155 · 5h ago
The hype is on OpenAI not AI. AI is nothing new. It’s been around for a very long time and it will continue to get better as a function of data and compute. Remember the dotcom days? People got hyped on Netscape, AOL, Cisco, etc. There will be plenty of those this time around too.
reedf1 · 5h ago
I have my own misgivings about AI - but this article certainly misses the mark.
pknerd · 5h ago
Will be using AI to summarize this HN discussion and find insights.
brookst · 6h ago
This is a terrible article, apparently one of those that starts with a title and then goes in search of every possible argument to support the conclusion.

The whole thing is bad and disingenuous (somehow the very real impact of excessive crawling by AI companies is an indictment of the value of the output).

And it just gets worse. For instance:

> If you’re looking something up [on a search engine], you usually type a few keywords and get a list of links. But with a chatbot, you have to write full sentences, and how fast you can type limits how fast you can interact. Then, instead of getting quick, scannable links, you get a big block of text. You read it—but you’re always aware it might be wrong. On a regular search engine, you can judge a source just by looking at the domain of the website, the design of the page or even reading the “About” page.

Got that? Chatbots make you type whole sentences, and instead of a short list of links the reader can easily scan, click through, analyze quality of graphic designs, and read the obviously-totally-trustworthy About pages to determine accuracy… you get answers in one place that could be wrong.

The fact that all chatbots include citations that you can click to do the same rigorous design-based fact checking is omitted, presumably because it would weaken the argument.

There are legit reasons to dislike the ethics of AI companies, there are legit reasons to believe this is a dot-bomb-style bubble, and there are legit reasons to be skeptical that the tech has enough headroom to reach AGI.

But this article just puts little bits of each in a blender and hopes for the best. It’s funny because while decrying “hype”, it uses all the same cheap and lazy rhetorical techniques as the worst AI hypesters. Further illustrating the “you become what you hate” principle, I suppose.

jiggawatts · 5h ago
> Chatbots make you type whole sentences

I've noticed that I've solved quite a few problems simply by being forced to spell out the precise problem statement to an AI bot. I knew the answer as soon as I had finished typing the question out in full, and watching the AI confirm my suspicions was superfluous but gratifying.

I also now feel guilty for not providing the same level of detail to other people that I've tasked with something.

edg5000 · 5h ago
> I also now feel guilty for not providing the same level of detail to other people that I've tasked with something.

Relatable! It's because long messages are for uncool nerds. Cool people write short, ambiguous messages. But in the presence of AI, we let go of our shame and write to our hearts' content!

edg5000 · 5h ago
Agreed!
jaxr · 5h ago
Lost me in the first section. It's like when anti-vaxxers say vaccines are bad because they were developed unethically. It's just a bad argument.

Also, I learned about Bitcoin when it was worth 8 USD and got obsessed with the tech, but always thought it was overhyped. I never owned one satoshi. I still think crypto ended up being hype and not adding real value to the world. But I could be very, very wealthy if I had jumped on that hype train XD.

I think that, with all the hype, AI does provide some real value to the world. That's the train I'm jumping on.

moconnor · 5h ago
> I use them myself... I don’t pay for them.

This seems to be a common disconnect. If you're using the free version of ChatGPT, you don't get to see what everybody else is seeing, not even close.

> None of the past “big things” were pushed like this. They didn’t get flooded with billions in investment before proving themselves

Oh, sweet summer child ^^ I assume Mert was not around to witness the internet boom and bust. Exactly this happened.

There is a lot of conflation in this article. It cites a lot of ethical concerns around the sourcing and training of data, expected job losses and the issues around that, but those are not reasons to doubt the _efficacy_ of AI. There are surprisingly few and weak arguments as to why the hype is not justified, presumably because the author hasn't used powerful models (see above).

It's possible to believe the hype is real and still to find AI unethical. But this article just mixes it all into a big pot of "AI bad" without addressing the cognitive dissonance required to believe both "AI is not very useful" and "AI will eliminate problematic numbers of jobs".

edg5000 · 6h ago
Google has been crawling the web for decades. I think it limits itself to about one request per second (1 Hz) per IP. This means a site with 100,000 pages takes roughly a day to crawl. A crawler could instead send requests as fast as possible, just distributed across many IPs.

If I put a website up, anything goes as long as people don't DDoS me. A human browses at ~1 Hz; if a bot tries 1000 Hz, that is a denial-of-service attack. It's hard to block, since you can't simply rate-limit IPs when many people share the same IP. So you need heuristics, cookies, etc.
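The "human at 1 Hz vs. bot at 1000 Hz" distinction is exactly what a per-IP token bucket captures. A minimal sketch — the thresholds here are illustrative, not a recommendation:

```python
import time

# Minimal per-IP token-bucket rate limiter, the kind of heuristic the
# comment alludes to. rate_hz is the sustained allowance; burst is the
# short-term slack a client gets before being throttled.
class RateLimiter:
    def __init__(self, rate_hz=1.0, burst=5):
        self.rate, self.burst = rate_hz, burst
        self.buckets = {}  # ip -> (tokens, last_timestamp)

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(ip, (self.burst, now))
        # Refill tokens for the time elapsed, capped at the burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.buckets[ip] = (tokens - 1, now)
            return True
        self.buckets[ip] = (tokens, now)
        return False

rl = RateLimiter(rate_hz=1.0, burst=5)
# A bot firing 1000 requests within one second gets cut off after the
# initial burst; a human-speed client at 1 request/second never would.
allowed = sum(rl.allow("10.0.0.1", now=i * 0.001) for i in range(1000))
print(allowed)  # 5 -- only the burst gets through
```

Of course, as the comment notes, this breaks down when many humans share one IP (CGNAT), which is why real defenses layer in cookies and behavioral heuristics on top.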

Putting paywalled content (such as books) into the AI is not cool though; nobody was anticipating this, and people got effed unexpectedly. This is piracy at hyperscale. Not fair.


dizhn · 5h ago
You have to kind of bury your head in the sand to claim it is all hype.

> They’re just tools. If they disappeared tomorrow, it wouldn’t affect how I work or live.

This sounds like prepper mentality? Or is it more objectively sound, like quitting Facebook?

sega_sai · 5h ago
To be honest, this is getting tiring; every day we have multiple articles on HN about how AI is all hype, 'stochastic parrots' and so on. I think it is becoming fashionable among some people to be an AI/LLM sceptic.

Certainly there are people who overhype AI/LLMs. Also any discussion of AGI is a mere speculation at this point, but you can't deny that LLMs are revolutionary tools, and we are still learning their limits. I find it bizarre when people deny that.

brap · 6h ago
So what I got from this article was:

1. They train on website data without permission

2. They require a lot of electricity

3. Kids use them to cheat on homework

And honestly this is where I stopped reading. I’m the biggest AI hater but none of these are good arguments against AI (I would argue that #3 is actually an argument for AI). If this is what you’re leading with then I’m not particularly interested in reading the rest.

edg5000 · 5h ago
What do you hate about it? Because LLM companies take away the hard-fought control gained by the individual from their moated castle?
kyykky · 5h ago
Do you prefer that kids don’t learn anything?
insane_dreamer · 3h ago
> But here’s the thing—none of the past “big things” were pushed like this. They didn’t get flooded with billions in investment before proving themselves.

I think the DotCom bubble would fit the above description (smaller numbers of $, but similar hype).

sneak · 6h ago
> Some companies, like Meta, went even further. They didn’t just use web content—they also used pirated books to train their models. (The Unbelievable Scale of AI’s Pirated-Books Problem) If a regular person did this, they’d probably get into serious trouble. But when billion-dollar companies do it, they usually get away with it.

Someone should tell Anna’s Archive.

The US’s criminal enforcement is very much biased into the “rules for thee, but not for me” category, but invoking it here is a trope. Anyone can get away with piracy on the scale of Books3 or The Pile. The reason random people don’t make models is because the hardware and power costs are fucking astronomical, not because they can’t get away with downloading the training data.

These sort of hot takes are just as wrong as the breathless “AGI is right around the corner” ones.

AI is hugely transformative, and anyone who thinks it’s overhyped doesn’t know the SOTA. It will likely be the single biggest technological advancement of our lifetime.

mossTechnician · 5h ago
Everybody knows the name of the CEO of Facebook, and the company has gotten away with manipulating public dialogue and monopolizing the social sphere for years. I don't even know the name of whoever runs Anna's Archive; it doesn't have that political capital.
Calavar · 6h ago
> Anyone can get away with piracy on the scale of Books3 or The Pile.

The obvious counter example would be Aaron Swartz

sneak · 5h ago
1. He did it non-anonymously as a form of activism, which seems like an obvious bias toward martyrdom. An argument can be made that he chose fame over effectiveness, just like Assange did.

2. We don’t know if he would have gotten away with it or not. Mental illness killed him via suicide, not the federal indictment.

There are several EXTREMELY large pirate libraries in operation presently that anyone can use. They are actively getting away with it, likely because they are explicitly staying anonymous.

edg5000 · 6h ago
No, I think the book pirating thing is different from an individual pirating. It's like comparing genocide to murder. A few murders, not great. A few genocides? Really not cool.

This book thing at Meta is something we should never forget. It revealed how utterly broken the US is in this regard; I hope they get it sorted. Without the rule of law you'll get a shit country.

danielscrubs · 5h ago
It helps having China as a bogeyman, if WE don’t pirate then China will win the AI race, and we will all be slaves… if you have principles you are a communist…
sneak · 5h ago
Prosecutorial discretion isn’t counter to the rule of law. The state gets to selectively apply criminal penalties.

This has always been the case.

By your definition, the US has never really had the rule of law.

brookst · 5h ago
But change and disruption are scary, and people have nearly infinite capacity for denial and self delusion.

Maybe if we all pretend AI is totally useless and will never improve, then I won’t have to worry about my job or economic value changing?

hoseja · 6h ago
"Waaah waah they didn't ask permission to read the internet"

Meanwhile unintentionally ultrarlhfrelevant title photo.

feizhuzheng · 6h ago
wait until the real gold comes out