Why does AI feel so different?

51 points · ath_ray · 8/13/2025, 5:12:58 AM · blog.nilenso.com ↗

Comments (83)

donperignon · 8h ago
It's the first revolutionary technology whose first goal is to make money for VC investors instead of really changing the world; the perceived benefit to society is just a by-product of the Ponzi scheme behind those VC firms.
JohnKemeny · 8h ago
Exactly. The tech itself isn't driving the mission --- the funding model is. "AI" right now feels less like the internet in ’95 and more like crypto in 2021: massive hype, vague promises, and any social benefit is incidental to keeping the capital flywheel spinning.
akoboldfrying · 6h ago
I simply don't agree. I've personally benefited from asking Gemini questions almost every day; I never benefited from crypto, even though I find the mechanisms intriguing and the design beautiful.

It's true that social benefit is incidental to the development of nearly all the big AIs, but that applies to every venture in capitalism.

helsinkiandrew · 8h ago
> It's the first revolutionary technology whose first goal is to make money for VC investors instead of really changing the world

Is that true?

Steam technology and factory machinery were created and used to make pit, mill, and factory owners more money than human workers alone could. The railway mania of the 1840s was a period in which vast sums of money were invested in railway lines and engine technology.

https://en.wikipedia.org/wiki/Railway_Mania

akoboldfrying · 6h ago
Exactly. This idea that AI is unique in making its backers money is ridiculous. Everything in capitalism has always worked this way.
AstralStorm · 3h ago
The question is, what is the actual paradigm that's being shifted here? Employing people?

It's definitely not wanting to talk to people, and even employment is a bit of a practical joke for now.

Comparing this with the Internet or electricity is, thus far, ridiculous as well.

If we had actual intelligence instead of pretty nifty text robots, then it would perhaps be more true.

roenxi · 8h ago
The technological examples listed in the article are all notable for their economic effects, and the first one enabled factory jobs. The AI revolution is, so far, actually pretty economically mild: small and unprofitable for a tech revolution. Compare it to the industrial revolution, which was transformation on a scale that is outside the modern experience.

Of course, the assumption is that it is going to be a lot bigger as the decades give it time to fully roll out. It is impossible to guess precisely, but it'll probably be bigger than the industrial revolution by the time all the second-order effects settle in. We just don't know what it looks like when the entire management and political decision-making apparatus is suddenly stuffed with counterintelligence. It'll be a shock to the system compared to the current leadership.

eru · 8h ago
The dot-com boom of the late 1990s also had plenty of VC money.

Back then and today, you find plenty of true believers, and plenty of people who are in it for the money.

Similarly, when people financed e.g. canals, railways, or factories in the past, many participants cared about the money.

(Or are you saying it's not about the money, but specifically VCs?)

mna_ · 7h ago
>Similarly, when people financed e.g. canals, railways, or factories in the past, many participants cared about the money.

Those things were tangible, and their advantages and disadvantages were more immediate than those of something intangible like "AI".

eru · 5h ago
Hindsight is 20/20. Past breakthroughs look obvious and immediately beneficial in retrospect.

Btw, there were plenty of people who thought electricity or the internet were just toys or fads.

AstralStorm · 3h ago
True, but electricity itself stayed the same; even the turbine generators still in use are very similar. (Same with steam power, still in use today.) Batteries have improved, but their general operation is the same. There are many more devices using electricity, mostly for the better.

We don't even know if we want LLMs in more devices. Likewise we likely do not want Internet/email in every toaster.

LLMs staying the same would be the death of this technology. Even minor improvements don't matter enough.

Voice-activated devices are not necessarily paradigm-shifting either. We've had those for over 25 years now.

w4 · 36m ago
> It's the first revolutionary technology whose first goal is to make money for VC investors instead of really changing the world

Biotech. Mobile apps. Social media. The dot com bubble. Personal computing. Semiconductors. The cable wars. The tronics boom. Telephony. Electrification. Industrialization. Railway mania. Canal mania. Maritime trade. And so on.

There's nothing new under the sun. All of these revolutionary technologies had a first goal of making money. All of them changed the world as a byproduct of bubbles and quasi-Ponzi schemes. Even the visionaries who are the subjects of post-technological-revolution hagiographies wanted to get rich. Isaac Newton famously tried to get rich in one of these bubbles (and failed spectacularly).

It's just human nature, and it's not surprising that AI is following the same pattern.

mykowebhn · 7h ago
100 percent. The same thing happened with the "Internet". It was amazing when it first started getting popular with the first web browsers in the 90s. There was so much potential with it.

Now look what's happened to the Internet.

akoboldfrying · 6h ago
It still can be amazing, more amazing than it was in the 90s. You could make a website this afternoon and host it on your phone. But ~nobody can be bothered, because easier centralised platforms exist.

The problem, if you want to call it that, is purely social/cultural.

mykowebhn · 4h ago
Have you ever heard of Geocities?
thefz · 4h ago
Not even a revolutionary technology; it's just going to speed up web searching for those who can't do it and be a hindrance for those who already can.
JackYoustra · 8h ago
What's your proof?
JohnKemeny · 8h ago
I recommend this Better Offline episode:

https://www.wheresyoured.at/the-haters-gui/

__loam · 8h ago
Literally every business school and consulting firm wrote articles about how many jobs this stuff was going to displace and how much it would grow the GDP.
aredox · 8h ago
Oh, I would add cryptocurrencies and NFTs to that list.
badgersnake · 8h ago
Your threshold for revolutionary is pretty low.
mm263 · 8h ago
It feels deeply wrong to put Kahneman in the same list as Socrates, Aristotle, and Bacon.
actionfromafar · 8h ago
A category error, even.
nottorp · 8h ago
Does it feel different? Every time I use an LLM it feels like I'm talking to a submediocre marketer or TV script writer ...
334f905d22bc19 · 7h ago
>It’s a paradigm shift in accessing knowledge

This statement and similar ones along those lines leave a very bitter taste with me. Someone has to create the knowledge. Someone has to create the actual training data that others then receive in a dumbed-down, stripped-out version through AI. You will always be late to the game when you try to gain knowledge that way. I really, really don't understand why people overhype the learning part of AI so much. Yes, it's a neat tool, but in my opinion rather bad for __actual in-depth__ learning.

crystal_revenge · 8h ago
AI feels different because quietly we are all aware, to differing degrees of comfort, that the world around us is collapsing. We see the escalating effects of climate change impacting day-to-day life in ways we were told would only happen in the far future; we're witnessing genocide become a normal part of the news, critique of which can mean serious personal consequences; and we're watching the United States transform into State Capitalism [0], making our country far more like modern-day China than many of us would have ever imagined possible.

And we deeply need to believe something wondrous, whether delightful or terrifying, is also happening. I honestly think AI is largely a mass hallucination, supported by a system that is able to extract tremendous wealth from keeping that hallucination going.

We can no longer believe that everything happening is normal, so we need to inject some, slightly less horrific, explanation for what we're experiencing.

0. https://www.wsj.com/economy/the-u-s-marches-toward-state-cap...

uncircle · 3h ago
Excellent insight; it would explain why opinions of an AI-driven future tend to be so extreme. Assuming the vast majority of us see a world in rapid collapse that not even politicians have a decent answer to [1], either AI is the shining technological hope that drives us to a better future, or it is yet another acceleration towards the singularity of shit our modern societies have been crumbling towards.

As an extreme AI doomer, I can now respect and understand the position of AI optimists: both are expressions of the same critique of the status quo. What scares me, then, are those who feel the world is fine, we're not collapsing, and AI is just another technology that is here to stay but won't amount to much, and we're all gonna be alright.

1: see HyperNormalisation by Adam Curtis

tempodox · 7h ago
> And we deeply need to believe something wondrous, whether delightful or terrifying, is also happening. I honestly think AI is largely a mass hallucination, supported by a system that is able to extract tremendous wealth from keeping that hallucination going.

+∞. Hardly anyone wants to acknowledge that. Humans never really stopped wanting to believe in miracles, and “AI” is the perfect idol.

FranzFerdiNaN · 8h ago
China manages to have long term planning. The only planning Trump and co do is planning how to line their own pockets.
uncircle · 2h ago
A good, albeit controversial, argument in favour of monarchy I've heard recently is that a monarch has to plan ahead for decades, if not centuries, to ensure that they and their dynasty can keep ruling over a prosperous country.

A democratic politician only has to care about the next 4 years, at best; if any issue requires long-term planning, all you can do is hope it fixes itself.

eru · 8h ago
China is a country. Lots of people live there. Some of them have long term planning for some aspects. Some of them don't.
eru · 8h ago
> AI feels different because quietly we are all aware, to differing degrees of comfort, that the world around us is collapsing.

We are living in an unprecedented golden age of peace and prosperity. What collapse are you talking about?

Yes, climate change is gonna knock a few percentage points off global GDP, but it's not gonna be the end of the world. In these terms it's no worse than living in the UK rather than the US.

Keep in mind that this impact comes superimposed on our regularly scheduled growth.

We also have plenty of relatively low-cost mitigation strategies. (Though many of them are not currently politically feasible.)

> [...] we're watching the United States transform into State Capitalism [...]

Hard to believe, but luckily there's more than one country on the planet.

Please vote with your feet!

politelemon · 7h ago
> We are living in an unprecedented golden age of peace and prosperity.

Golden ages are fictional constructs, applied retroactively as part of an agenda. There is no such thing as living in a current golden age without knowing the future.

I also doubt the statements about peace and prosperity. These just sound like pithy phrases to dismiss concerns.

eru · 7h ago
> Golden ages are fictional constructs, applied retroactively as part of an agenda. There is no such thing as living in a current golden age without knowing the future.

That's a weird self-contradiction. I just applied the Golden Age moniker contemporarily, so your assertion is plainly wrong.

People have never been as rich as they are today, even poor people. Wars are a bit bursty, but if you smooth them out, the probability of an average human becoming a casualty in a war has been going down over the centuries, and it is still going down over recent decades.

mna_ · 7h ago
>We are living in an unprecedented golden age of peace and prosperity. What collapse are you talking about?

The war in Ukraine, the Israel-Palestine conflict, the recent India-Pakistan conflict, China rapidly ramping up its military and openly carrying out drills that are clearly designed as practice runs for a Taiwan takeover, China asserting its military presence near the Philippines, European countries ramping up defense spending, and so on, are clear indicators that the world is not going through a golden age of peace.

ec109685 · 8h ago
These are poor examples:

> Why do I need to know what quarks are, or how steam engines work? Why do I need to know anything about anything? I can just look it up with AI

Literally all possible with Google and YouTube.

thefz · 4h ago
> Literally all possible with Google and YouTube.

Literally all possible with school

eru · 8h ago
And a bit earlier with books and libraries.

Over time, we reduced friction and cost.

tannhaeuser · 7h ago
Go to Google Search and literally ask "How do steam engines work?", and an AI-generated answer is exactly what it will show you, along with a Wikipedia link/excerpt, followed by an endless list of links to clickfarms whose ledes talk about steam engines in general terms to lure you onto their ad-cluttered pages (btw, DuckDuckGo/Bing is just the same).

Shouldn't you rather say "go straight to Wikipedia", or push for Wikipedia to improve their search with all the money they already have? Isn't the choice of using local open-weight LLMs actual progress over "googling" and Google's rotten algorithms, or even freaking YouTube and its even worse ad feeds?

So what's your point (I'm tempted to insert "boomer" here)?

corentin88 · 7h ago
It's one thing to provide hints of an answer (10 blue links) and another to provide the answer itself (AI).
tannhaeuser · 7h ago
Except it's all a show when those links lead to generated AI slop next to hundreds or even thousands of trackers and ads anyway, most prominently Google Ads and Google Analytics, which is why they rank and display at the top in the first place.
bgwalter · 7h ago
If you are using a local LLM as a sort of Encarta (another piece of software that failed), it might work, as long as you are fine with shallow answers.

Proper research that compares many different pages still requires regular Google search. Google's LLM summaries are pathetic; Perplexity and Grok have to be nudged to the point that I feel I am training them with links obtained via Google.

spiderfarmer · 8h ago
I recently saw a good example of how LLMs can be really useful.

A friend of mine, who always found learning difficult, now works as a plumber and installer with his own one-person company.

One of his clients is the local community pool. He took over the job from an older man who showed him how everything worked. My friend noticed a lot of inefficient (manual) use of energy and resources, but the previous installer told him it would cost too much to automate, so nothing was done.

The other day we talked about AI, and he said he had never tried it. I asked him to think of some questions from his daily work. He came up with questions about the pool system and how to automate it.

ChatGPT answered his questions correctly, and after a few follow-ups, it even provided a list of the parts he would need and a plan to make it work. Last weekend he told me that, together with the previous installer, he made improvements that should save the pool thousands of euros over the next few years.

Of course, an experienced pool expert might have come up with the same solution, but it is often difficult to convince a committee to hire someone when nothing is broken, the budget is already set, and the inefficiencies are not obvious.

My friend now uses ChatGPT several times a day to solve practical problems. It helps him learn new things and explore possibilities he might not have considered otherwise.

I do not mind if ChatGPT makes the occasional mistake or gives an answer based on outdated information. As long as it is used as a tool to improve your own abilities, the subscription is worth the cost.

When I worked with colleagues, I sometimes relied on advice that turned out to be wrong. Everyone makes mistakes, even when working to the best of their abilities, and I find that my AI 'colleagues' make fewer of them.

yobbo · 7h ago
This is an interesting example, but it can't be scaled up to have a meaningful impact at the level of the economy or society.

The effect right now of LLMs is reducing friction for some people with certain problems, but big important problems are already optimised to much higher levels.

spiderfarmer · 7h ago
I disagree completely.

You seem to believe the venture capital money machine that tells you technology and innovation must always be a major disruptive force and solve big, important problems.

Most problems we encounter in daily life are small. If a technology helps solve them faster, at a lower cost, or in a way that was not possible before, I believe it can scale very well. People just have to get accustomed to it.

For example, I fixed my lawnmower recently with the help of ChatGPT. I could have gone to the dealer or asked my brother for help, but instead it was a quick five minute fix that saved me from bothering anyone or spending time searching through manuals and videos for the answer. It literally couldn't have been fixed faster.

I prefer that over augmented reality goggles and other "Big Ideas".

yobbo · 5h ago
It sounds like we agree? LLMs are great for finding fixes for lawnmowers and other things. Before, easier fixes for those problems weren't valuable enough to warrant any investment.

Therefore, current LLMs won't be a complete upheaval like some are fearing.

Things like AlphaFold might create upheavals in their respective fields but they are very specialised. I'm more enthusiastic about that than chatbots.

thunderbong · 7h ago
Thanks for this.

Lots of people, many amongst us here at HN, are critical of AI and LLMs in general. A lot of it has to do with two things, in my opinion - first, with the amount of money being thrown at it, and second, with the hype around it.

Many technical people, myself included, resent money being thrown at a problem rather than finding a proper solution. And we're especially skeptical of the hype cycle, because we've been through multiple hypes which didn't really live up to their claims.

But despite all that, I find regular people using ChatGPT and other AI tools just like any other online tool. Just as all of us rely on some software to find the time on the other side of the world when we have to schedule meetings, all these people are using these as tools to increase their capabilities and knowledge without the extra cruft that used to be involved in searching the web.

Additionally, AI is literally approachable by anybody who has an internet connection, which means anyone with a smartphone. This didn't happen with devices, this didn't happen with Napster, this didn't happen with BitTorrent, and certainly not with Bitcoin. Each of these had some hurdle associated with it - cost of device, connectivity, technical know-how.

But over the years, all of those helped lay the foundation for what AI is being seen as now: a tool that anybody can use. The more savvy among us are also using it to build their own tools.

So, yes, this time it certainly feels different.

AstralStorm · 3h ago
So it works as an advanced search engine. Sometimes, for simpler things.

Do you want that in every device or a lot of them, in every home, every service, for super cheap?

What's the paradigm being shifted here?

isoprophlex · 8h ago
"Ah cool a list of historic technologies that caused a paradigm shift... I wonder what second-order effects are to be expected from generative AI!"

> Customer service automation, Agentic coding, Content creation

Fucking hell. Customer service automation. Auto-spaghetti code. Brainrot as a service. Truly what society needs.

rmnclmnt · 8h ago
Agreed. Not what society needs, but what giant corporations and governments need for society to behave well…

(Edit: typos)

eru · 7h ago
Giant corporations do what their customers, workers and shareholders (and the law) want.

If customers want some customer service, but don't want it enough to pay for it to be good, then very basic customer service is what they get.

readthenotes1 · 6h ago
My initial reaction is that it is kinda sad that Kahneman's debunked work was labeled a pervasive game changer.

https://replicationindex.com/2020/12/30/a-meta-scientific-pe...

sriharis · 34m ago
Author here. Thanks for sharing this, I didn't know about this.
josefritzishere · 36m ago
Because it is the most overhyped service in history. It's an exponential level above normal hype because of the investment cost.
HL33tibCe7 · 8h ago
> What’s going to happen in the next 5 years? Will my skills be relevant? How do I truly add value with AI getting smarter in every way? How does it change life for me and my family?

Here is how I approach this - this might be a coping mechanism, but it’s certainly helped me personally.

LLMs are hugely impressive, no-one is denying that. But they have already been out for nearly 3 years at this point, and there are still massive gaps in their functionality that mean, in their current state, they are nowhere near being able to take over highly skilled work (e.g. software engineering at senior+ levels) from humans. They can handle grunt work well, but are unable to go beyond that: they operate at the level of a (poor) junior.

LLMs remind me to some extent of journalists and the Gell-Mann Amnesia effect. When I ask an LLM a question in an area I am not an expert in, I’m impressed. On the other hand, when I ask it something I am an expert in, I am almost always disappointed. I don’t bother using them for complex work any more, because they lie and hallucinate too much.

Furthermore, the rate of progress appears to be slowing.

All of this gives me some hope that my own life will not be completely ruined as a result of this technology.

JohnKemeny · 8h ago
The article keeps referring to "AI" as if it's a coherent, agreed-upon thing. It's not. "AI" is a marketing term that's been applied to whatever happens to look impressive at the moment---right now, that's LLMs. There's no settled definition, no unified discipline, and lumping everything from generative text models to actual robotics under the same label just obscures the discussion. If we're talking about LLMs, let's call them LLMs.

And let's be real: we're in a bubble. There's zero evidence that LLMs have produced a measurable productivity boom. If anything, they're net negative in many contexts---encouraging shallow engagement, short-circuiting the learning process, and making knowledge workers more dependent on stochastic parrots that frequently hallucinate. We've replaced "look it up" with "ask the oracle," and the oracle can't reliably tell fact from fiction.

The hype cycle will keep burning cash until someone admits the emperor's wardrobe is ... speculative at best.

OtherShrezzing · 8h ago
> There's zero evidence that LLMs have produced a measurable productivity boom

I'm not so sure on this one. AlphaFold2, while not quite an LLM, was based on the Transformer architecture - its implementation wasn't a million miles away from a language model - and it massively improved the rate of protein structure prediction.

I think in general you’re correct, that we’re in a bubble, but I think it’s too extreme to say the technology is valueless.

JohnKemeny · 7h ago
From what I've read, AlphaFold2 is a minor improvement over other solvers, and only in some cases. It makes many more mistakes and produces more misleading results than classic solvers.

But don't take my word for it, it's way outside of my area of research.

jama211 · 8h ago
Personally I now get my work done in half the time and spend the rest of my newfound time with my family. It’s gonna be hard to convince me that isn’t a fantastic productivity increase.
eru · 8h ago
That's interesting!

Historically (at least for the last century or so) we mostly stopped lowering working hours and instead focused on increased output.

E.g. people still work eight hours a day but get paid vastly more in real terms than in the 1950s, instead of working only one hour a day (or fewer years, or whatever) and taking a 1950s compensation package.

Of course, this is merely a statistical observation. There are plenty of people who eg decide to retire early on a modest nest egg.

actionfromafar · 7h ago
Funny how a 1950s compensation package could pay for a house and family though.
eru · 7h ago
1950s houses are no longer legal in most places.

Partly because in the US new construction is basically outlawed, but also because 1950s-style houses, and goods in general, were so shoddy that you can no longer legally produce or buy them. In many cases, no one would even take them off your hands for free, even if they were still legal.

On a global scale, the period from perhaps the 1940s to the 1970s had some of the harshest inequality ever. It's since about the 1980s that inequality has gone down markedly.

Remember how India and China used to be on the verge of famine (or outright in famine). Nowadays obesity is the bigger problem.

eru · 8h ago
> And let's be real: we're in a bubble. There's zero evidence that LLMs have produced a measurable productivity boom. [...]

You are right that a general productivity boom is hard to detect so far.

But we already have certain industries with a more than measurable productivity impact.

A very salient example: first-level call centre agents. The kind of unfortunate souls who are paid to man a phone, not allowed to deviate from their script, and not authorised to actually solve your problem.

The companies renting out these services make heavy use of current AI and that has measurably affected hiring, staffing, prices etc.

Even fairly basic AI (like what we have today) is good for this use case.

Does it suck: sure. Does it suck more than the status quo with warm bodies without power: not really. And it's cheaper.

edg5000 · 8h ago
I encourage you to try and really lean on LLMs (Claude Code in particular); it's hard to argue that there is no productivity boost. Atrophy in "low-level" skills is certainly real, though. But our ancestors knew how to repair a pair of socks; due to industrialisation, we just buy new ones and have lost the skills to repair them. Most people may forget how to do many things in programming, but may still intimately understand, at a high level, what each line is doing.
laserlight · 7h ago
> it's hard to argue that there is no productivity boost

I encourage you to keep up with the literature [0].

[0] https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

bamboozled · 8h ago
The more you use it and the deeper you go, the more you realise its limitations: it's less a useful software engineering tool and more an autocomplete.

Is it useful? Yes. Am I happy it exists? Yes. Is it revolutionary? I doubt it.

scotty79 · 7h ago
A year ago, with enough prompting, it could write a function for me that did something conceptually simple but tedious.

I tried Windsurf over the last 3 days. It can write complex game-mechanics logic across multiple files and refactor code when told to. I can talk to it using the language of a domain I just made up, and I can talk to it using the language developers use when they talk about code. It can solve bugs that would take me at least half an hour to figure out. It went from a poor junior to a clever junior and half a senior in one year. It's not great at one-shot architecting yet, but when told how to architect a thing it can move stuff around to conform to the desired architecture. It creates and modifies SVGs, understands colors and directions. It does all of that blind, without running the code. All it has is a compiler (a language server, really) and a linter.

You mention it's an autocomplete. I barely used auto-complete at all. I just told it what to do. I touched the code a little bit only because I wanted to, never because I had to. When I wanted a change to the code, 95% of the time I just told it what change I needed and it made it.

It basically replaces about a third of a software team already, enabling (and pushing me towards) a tech lead/architect/QA/product owner role, and this fraction is rising.

It's not flawless. Sometimes it makes things that don't work on the first try. But you don't always need to revert: it's capable of fixing what it made when told how the desired behavior differs. And all that without running the program.

I anticipate that further development is going to involve letting it run and observe what it built, so it's going to move towards a QA role. The other thing might be slowly learning to differentiate between well-architected and badly architected code, but that's probably harder, as it requires careful preparation (and creation) of training data. Moving towards QA just requires figuring out how to let it run and inspect stuff. Maybe use the debugger.

bamboozled · 7h ago
Are you familiar with the Gartner hype cycle? https://www.gartner.com/en/research/methodologies/gartner-hy...

I'm doing everything you're doing, and probably more, with Claude Code (I 'let it run'). I'd say I too have been through a few of the different phases of the Gartner hype cycle now.

As I said it's a good tool, it's far from being an "auto-pilot".

scotty79 · 5h ago
I don't care about the hype. If you had put me in a cave for the last 5 years and today gave me Windsurf, telling me nothing beyond that a machine does this, I wouldn't believe you. All of it was literally science fiction 5 years ago. This literally replaces a person I would otherwise have to hire to pursue my ideas.
bamboozled · 1h ago
Ok, I guess that's where we differ. I don't really find it to be science fiction; the more I use the system, the more I understand the magic of it and what it's going to do, and I've learned that to get good results, you absolutely have to drive it, like almost every other tool.

I guess if anyone went into a cave for 5 years and came out, things would seem like science fiction. FPV drones on the battlefield would be another example...

As I said earlier, I'm not downplaying it; I'm really just warning you that if you lean too heavily on it without guiding it quite precisely, you will get burned, properly.

All good, we have different opinions.

satisfice · 8h ago
This strikes me as babbling. 70 percent fever dream.

To take one example, the author speaks of using AI to chat with his favorite thinkers. Well I have favorite thinkers, so why don’t I chat with AI versions of them? Because I know it’s all fake. Whatever substance there is to chatting with a corpse or corpus is unknowable. So it amounts to playing with dolls.

I think for a living. I would LIKE AI to help. But it doesn’t. On balance, any benefit is drowned out by the effort it takes me to check its work.

I’m left wondering if the only people who value GenAI are those who haven’t experienced the satisfaction of thinking for themselves, or who have crippling impostor syndrome.

bm3719 · 8h ago
The thinking-averse outnumber the thinking-inclined. They've tried this thinking thing, and want nothing to do with it. They all went to public school, after all, and the rare times thinking happened it was painful and had to be forced upon them. Their ideal of Being is to react to content, experience positive emotional immediacy, or dull their intellects: all feeling, no thinking.

They'll win in the short term, due to the tyranny of convenience. Then, when all economic value has been extracted from them and the possibility of future value made nil, they'll lose. Of course, that doesn't mean you'll win though.

isoprophlex · 8h ago
It's a negative-sum game that breaks the fourth wall; even not playing makes you lose.
incone123 · 8h ago
They all went to public school?
actionfromafar · 7h ago
Yeah, that stuck out to me too.
Kon5ole · 6h ago
>To take one example, the author speaks of using AI to chat with his favorite thinkers. Well I have favorite thinkers, so why don’t I chat with AI versions of them? Because I know it’s all fake.

That's an uncharitable take on what he actually said; he was clearly talking about being able to get information presented as if it were explained by a particular person. The training data contains everything that person ever wrote, so it might be pretty close. Fake, sure, but also often correct and presented in a familiar way.

I've often thought that many concepts I struggled with in school came down to the teacher explaining them in ways that were "incompatible" (for lack of a better word) with my way of thinking about that particular concept. Not that I was special; I think every student experiences this from time to time.

Being able to continue asking questions about dinosaurs or atoms or the roman empire long after any human teacher would get tired will help many a curious kid learn faster than they could before.

>I’m left wondering if the only people who value GenAI are those who haven’t experienced the satisfaction of thinking for themselves

I find this take strange - you still have to think, but LLMs help you. Googling things, or tracking down a breaking change in version 9.2 from 8.7, or why a regexp is not working correctly, is not thinking. It's just busywork.

Spreadsheets removed the satisfaction of "carrying the one" with pen and paper, and a whole bunch of valued skills lost value - but it turned out fine. Same with going from assembly to C, or from C to Java. We ended up with more programmers and more software, not less.

I think once the dust settles, LLMs will be a similar change, if perhaps much larger in scope.

qweiopqweiop · 8h ago
Replacing our thinking with a few monopolistic tech companies... What could go wrong!
eru · 7h ago
Where's the monopoly (or rather oligopoly) you are talking about?

There's many models offered by many companies and labs that are close enough to state of the art. Many of them are completely open source or at least have open weights.

You might complain about outsourcing your thinking to a machine, sure. But there's no monopoly nor oligopoly.

qweiopqweiop · 7h ago
Google, currently thought of as the AI leader, has shown time and time again that it will resort to monopolistic practices.

I really hope open-source models are the way, but the fact is the vast majority of day-to-day usage of LLMs is on models owned by multi-billion-dollar companies.

And even if they're not monopolies, people being influenced by companies to this extent should worry people.

eru · 5h ago
It's news to me that Google is the AI leader. Where did you get that information / impression? I'd assumed that if anyone is a leader, it's OpenAI, but even their lead seems pretty tenuous at best.

And people trying shady things doesn't make a monopoly. Especially if there's plenty of competition.

arielcostas · 8h ago
Not just our thinking, but our search for truth. The biggest problem I see is that models can hallucinate or be biased (for example, Grok being told explicitly not to mention Trump and Musk when asked about misinformation) for the benefit of their owners or creators. This happens with Chinese models too, of course, because of their laws.

The problem is that people just trust what the LLM tells them, not realising they can be misled, or the model tells them "you're right" without any pushback or invitation to think further—just like an echo chamber, the consequences of which we've seen with social media in the last few years.

qweiopqweiop · 8h ago
Exactly what I was getting at. They're probably the most powerful tool in human history if you wanted to influence people.

Of course they're exciting from a tech perspective but I can't help but feel people are missing the bigger picture.

SideburnsOfDoom · 7h ago
> What could go wrong?

> "Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them."

- F Herbert, 1965

I want to emphasise the "other men with machines" part. Mr Herbert wasn't talking about an AI takeover. He was talking about a techbro takeover.

He didn't even believe in the agency of machines, just in human agency: an inanimate thing does not want, and will not want, to take over. It is a tool for the people who command it, and those people do.