The party trick called LLM

17 points by hirako2000 | 7 comments | 7/16/2025, 6:16:22 PM | destaatvanhetweb.nl

Comments (7)

jaredcwhite · 1h ago
Increasingly it's become clear to me that the "hallucinating" chatbots are now causing humans themselves to hallucinate: they believe they're extracting real value and real productivity gains from these tools when, in the vast majority of cases, it's actually the opposite. We're in fact losing value, and the productivity gains are a mirage.

I used to roll my eyes when people would joke "the Internet was a mistake". Now I'm not so sure…

reillyse · 13h ago
Hug of death - although reading Dutch reminds me how close it is to English

Is dit jouw website? ("Is this your website?")

Now listening to Dutch - not so much.

AstralStorm · 11h ago
Hypothesis: LLMs are actually text models. They cannot properly babble with the feel of a language because they don't model that, even less so than a Markov chain does.

What they do is post-structural text analysis and synthesis.
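
For contrast, here's a minimal sketch of the Markov-chain "babbler" the comparison invokes: it learns only which words follow which, so its output carries the local feel of its corpus with no global structure. The toy corpus and chain order are made up purely for illustration.

```python
import random
from collections import defaultdict

def build_chain(words, order=2):
    # Map each order-gram to the words observed immediately after it.
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def babble(chain, order=2, length=20):
    # Random-walk the chain: locally plausible, globally meaningless.
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Hypothetical toy corpus, just to make the sketch runnable.
corpus = ("the model predicts the next word and the next word "
          "is all the model ever sees").split()
print(babble(build_chain(corpus)))
```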

loguhdihn · 12h ago
I saw a similar take online recently.

They claimed of LLMs:

> It’s really good at making us feel like it’s intelligent, but that’s no more real than a good VR headset convincing us to walk into a physical wall.

daft_pink · 12h ago
I recognize that a language model is not a living entity, but it excels at translating human language into computer programs. This capability allows users, from novices to expert developers, to interact with computers more effectively and extract greater performance than they could without such a tool, despite the model's occasional errors and somewhat blunt nature. It’s more than just a party trick.
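
As a concrete instance of the natural-language-to-program translation described above, here is a minimal sketch using the OpenAI Python SDK; the model name and prompt are illustrative assumptions, not anything from the thread.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative request: turn a plain-English ask into a program.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any chat model works
    messages=[
        {"role": "system", "content": "Reply with Python code only."},
        {"role": "user", "content": "Write a function that deduplicates "
                                    "a list while preserving order."},
    ],
)

print(response.choices[0].message.content)
```
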
thornewolf · 12h ago
TFA is another iteration of the wonderful semantics debate about intelligence.

A rock is, a typewriter is, a computer is, a human is, all at varying levels. TFA takes it as a given that "no thinking" happens. I take it as a given that thinking happens unless we prove it cannot be happening.

https://www.thornewolf.com/its-not-intelligent/

Once again, both sides of this argument are making equally valid/invalid baseless claims that are all unfalsifiable. It is completely impossible to determine who is "metaphysically" correct; we can only judge outcomes.