Don't have time to watch a 42m vid now, but I can see how people are starting to view ChatGPT (and similar models) as some miraculous oracle of sorts. Even if you start using the models with your eyes wide open, knowing how much they can hallucinate, over time it is easy to lower your guard and just trust the models more and more.
To get a reality check, open up 3-4 different models (ChatGPT, Claude, Gemini, etc.) and ask them about topics you know really well, questions you already know the answers to. You'll see that maybe a quarter of the answers fail in some way. Some topics are of course easier for these models than others.
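To make that check concrete, here is a minimal sketch: send the same known-answer questions to a few models and tally the misses. The ask() helper, the model names, and the sample questions are placeholders, not any real vendor's API; swap in whatever client and topics you actually use.

    # Send questions with known answers to several models and tally the misses.
    def ask(model: str, question: str) -> str:
        """Hypothetical helper: return the model's answer text for `question`."""
        raise NotImplementedError("wire this up to your API client of choice")

    # Ground truth you are already sure of, graded with a crude substring check.
    KNOWN_ANSWERS = {
        "What HTTP status code means 'Too Many Requests'?": "429",
        "In which year was the ANSI C (C89) standard ratified?": "1989",
    }

    def spot_check(models):
        for model in models:
            misses = sum(
                1 for question, expected in KNOWN_ANSWERS.items()
                if expected not in ask(model, question)
            )
            print(f"{model}: missed {misses} of {len(KNOWN_ANSWERS)}")

    # spot_check(["model-a", "model-b", "model-c"])  # placeholder model names

Even with crude substring grading, a harness like this makes the per-model failure rate hard to wave away.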
graemep · 6h ago
Oracle is a better word than religion for what you are talking about. Maybe people should remember how notoriously tricky oracles were even in their believers' eyes (the "an empire shall fall" story).
This video is about people who believe ChatGPT (or another LLM) is a sentient being sent to us by aliens, or from the future, to save us. An LLM saviour is pretty close to a religious belief. A pretty weird one, but still.
> To get a reality check, open up 3-4 different models (ChatGPT, Claude, Gemini, etc.) and ask them about topics you know really well, questions you already know the answers to. You'll see that maybe a quarter of the answers fail in some way.
I have tried this a bit with ChatGPT, and yes, there are a lot of issues: things such as literally true but misleading answers, incomplete information, and a lack of common sense.
adlpz · 6h ago
It's a bit like general web browsing.
The internet is full of pure nonsense, quack theories and deliberate fake news.
Humans created those.
The LLMs essentially regurgitate that, and on top they hallucinate the most random stuff.
But in essence the sort of information hygiene practices needed are the same.
I guess the issue is the delivery method. Conversation intrinsically feels more "trustworthy".
Also, AI is for all intents and purposes already indistinguishable from magic, so in that context it's hard for non-technical people to keep their guard up.
grues-dinner · 4h ago
Moreover, once they get onto the wrong track, they just dig in deeper and deeper until they've completely lost it. All the while saying how clever and perceptive you are for spotting their fuck-ups before getting it wrong again. It seems like if it doesn't work pretty much the first time (and to be sure, it does work right the first time often enough to activate the "this machine seems like it knows its stuff" neurons), you're better off closing it and doing whatever it is yourself. Otherwise, before long you're neck-deep in plausible-sounding bullshit and think it's only ankle deep. But in a field you don't know well, you don't know when you're going below the statistical noise floor into lala land.
tim333 · 25m ago
From the video, it's not so much becoming a religion as telling people what they want to hear on an individual basis, like they are the new messiah or whatever. I guess it's not much madder than conventional religion.
kylehotchkiss · 28m ago
I don't think LLMs specifically are becoming a religion, but I think the way some people look at/speak about AGI and its impact on the world has become a new religion. Especially when paired with UBI solving the unemployment problems it could create, which is so far from human nature that I think it is even less likely than AGI.
I philosophically don't think AGI as described is achievable, because I don't think humans can build a machine more capable than themselves ¯\_(ツ)_/¯ But continuing to insinuate it'll be here in a few months sure helps put some dollars in CEOs' pockets!
cainxinth · 5h ago
Humans are pattern recognition machines, and missing a pattern is generally more dangerous than a false positive, hence people notice all kinds of things that aren’t really there.
Functionally, it’s similar to why LLMs hallucinate.
paradox242 · 5h ago
Imagine what happens if we awaken an actual god (AGI or ASI, depending on your definition). I have no doubt that it would have no trouble enlisting the help of willing human accomplices for whatever purposes it wishes. I expect it would understand how to play the role of the unknowable all-knowing entity that is here to save us from ourselves, no matter what its actual objectives might be (and I doubt they would be benevolent).
upghost · 5h ago
> We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them[1]
Good video essay. Learned the origins of the term "cargo cult", which, to my surprise, have nothing to do with Rust...
I take a lot of the reports with a grain of salt. But also, knowing how easily some people are hypnotized by what they perceive as superior intellects, it's totally conceivable. There is a segment of the population with a strong savior-following instinct.
Prior to this, activating this population required a high-IQ/EQ psychopath to collect followers, or schizophrenics who believed they were talking to a superior being ('my leader talks directly to me via his writings').
Now, however, people can self-hypnotize into a kind of self-cult. It might be the most effective form of this phenomenon if it's highly attuned to the individual's own idiosyncratic interests.
In a typical cult, people fall into or out of the cult based on their internal alignment with the leader and failed enlightenment. But if every one of these people can have their own highly tailored cult leader, it might be a very hard spell to break.
[1]: https://youtu.be/zKCynxiV_8I?t=26m04s
social-relation · 7h ago
It's sometimes said in social theory that mundane phenomena like money, internet routers, and code are social relations. Chats are not simply conversations with static models, but rather intensely mediated symbol manipulation between conscious people. The historical development is interpretable in spiritual terms, and called to account by the truly religious, or god.
lioeters · 4h ago
> money, internet routers, and code are social relations
Could you recommend some further reading to dig into this insight?
Also I'm curious why you created such a topic-specific user, I guess for privacy?
alganet · 8h ago
In the wise words of the prophet Stevie Wonder:
When you believe in things that you don't understand, then you suffer.
michaelsbradley · 7h ago
Care to elaborate?
alganet · 5h ago
I won't elaborate on the Stevie Wonder quote. I think it's perfect the way it is.
--
I can, however, elaborate on the subject separately from that quote.
The video talks about the more extreme cases of AI cultism. This behavior follows the same formula as previous cults (some of which are mentioned).
In 2018 or so, I noticed the rise of flat earth narratives (bear with me for a while, it will connect back to the subject).
The scariest thing, though, was _the non flat earthers_. People who maintained that the earth was round but couldn't explain why. Some of them tried, but had all sorts of misconceptions about how satellites work, the history of science, and so much else. When confronted, very few people _actually_ understood what it takes to prove the earth is round. They were just as clueless as the flat earthers, just with a different opinion.
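To be concrete about what such a proof can look like: Eratosthenes' classic measurement needs nothing more than two shadow angles and the distance between two cities. The figures below are the traditional rounded values, a back-of-the-envelope sketch rather than a precise account.

    # Eratosthenes-style estimate of Earth's circumference.
    # Traditional rounded figures: at noon on the solstice the sun is directly
    # overhead in Syene (no shadow), while in Alexandria, roughly 800 km to the
    # north, shadows make an angle of about 7.2 degrees.
    shadow_angle_deg = 7.2   # angular difference between the two cities
    distance_km = 800        # approximate Syene-Alexandria distance

    # 7.2 degrees is 1/50 of a full circle, so the circumference is 50x the distance.
    circumference_km = (360 / shadow_angle_deg) * distance_km
    print(circumference_km)  # 40000.0 km, close to the accepted ~40,075 km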
I believe something similar is happening with AI. There are extreme cases of cult behavior which are obvious (as obvious as flat earthers), and there are the subtle cases of cluelessness similar to what I experienced with both flat-earthers and "clueless round-earthers" back in 2018. These, especially the clueless supporters, are very dangerous.
By dangerous, I mean "as dangerous as people who believe the earth is round but can't explain why". I recognize most people don't see this as a problem. What is the issue with people repeating a narrative that is correct? Well, the issue is that they don't understand why the narrative they are parroting is correct.
Having a large mass of "reasonable but clueless supporters" can quickly derail into a mass of ignorance. Similar things happened when people were swayed to support certain narratives due to political alignment. The flat-earth and anti-vaccine pseudoscientific nonsense is tightly connected to that. Those people were "reasonable" just a few years prior, then became an issue when certain ideas got into their heads.
I'm not perfect, and I probably have a lot of biases too. Narratives I support without fully understanding why, probably without even noticing. But I'm damn focused on understanding them and making that understanding the central point of the issue.