Emergent social conventions and collective bias in LLM populations

41 points by jbotz | 9 comments | 5/18/2025, 4:26:58 PM | science.org ↗

Comments (9)

Al-Khwarizmi · 36m ago
They make LLMs play a very abstract game that rewards them with points for answering the same as the other agent and penalizes them for answering differently, and the LLMs tend to converge on an answer. From that to "social conventions" there is a long, long stretch. The paper lacks a baseline - wouldn't much simpler (non-LLM) systems also exhibit the same property? Is it that surprising that systems that are clones of each other (because they didn't even try "mixed societies" of different LLMs) agree when you give them points for agreeing? (A sketch of such a baseline follows below.)

Maybe I'm missing something but in my view this is pure hype and no substance. And note that I'm far from an LLM skeptic and I wouldn't rule out at all that current LLMs could develop social conventions, but this simulation doesn't really show that convincingly.
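By way of illustration of the baseline question raised above: a much simpler, non-LLM system playing the same kind of coordination game might look like the sketch below. The agents just keep a running score per option and answer with their best-scoring one; every parameter here (population size, option set, reward values, update rule) is an assumption for illustration, not the paper's actual protocol.

```python
import random
from collections import Counter

OPTIONS = ["A", "B"]   # the abstract "answers" an agent can give
N_AGENTS = 50
N_ROUNDS = 5000

# Each agent keeps a score per option and answers with its best-scoring one,
# breaking ties at random.
scores = [{o: 0.0 for o in OPTIONS} for _ in range(N_AGENTS)]

def answer(agent):
    best = max(scores[agent].values())
    return random.choice([o for o, s in scores[agent].items() if s == best])

for _ in range(N_ROUNDS):
    i, j = random.sample(range(N_AGENTS), 2)   # pair two agents at random
    a_i, a_j = answer(i), answer(j)
    reward = 1.0 if a_i == a_j else -1.0       # reward agreement, punish disagreement
    scores[i][a_i] += reward
    scores[j][a_j] += reward

# A population of identical score-keeping agents typically ends up on one shared option.
print(Counter(answer(k) for k in range(N_AGENTS)))
```

With symmetric rewards and identical agents, runs of this sort typically tip toward a single option through positive feedback, which is the kind of convergence the comment argues is unsurprising on its own.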

lostpilot · 2h ago
Crazy - this is saying agents develop their own biases and culture through their engagement with one another even if the individual agents are unbiased.
amelius · 17m ago
I wonder when we will see LLMs being used to test economic theories.
dgfitz · 4h ago
It’s funny how we seem to be on this treadmill of “tech that uses GPUs to crunch data” starting with the Bitcoin thing, moving to NFTs, now LLMs.

Wonder what’s next.

musicale · 1h ago
The twilight of Moore's law and diminishing returns for CPUs are driving a shift to GPUs and other accelerators. GPUs seem to do well for streaming/throughput type workloads.

What's interesting is that Nvidia has managed to ride each of these bubbles so far.

kevindamm · 3h ago
Accelerating the calculations done in probabilistic programming languages.
mjburgess · 2h ago
Any evidence this can be done, research literature-wise?
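For context on this sub-thread: probabilistic programming frameworks built on GPU-friendly array libraries already exist; NumPyro, for example, runs its inference through JAX and can target a GPU backend when one is available. A minimal sketch, with a made-up linear-regression model and data purely for illustration:

```python
import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def model(x, y=None):
    # Priors over the regression parameters and noise scale.
    slope = numpyro.sample("slope", dist.Normal(0.0, 1.0))
    intercept = numpyro.sample("intercept", dist.Normal(0.0, 1.0))
    sigma = numpyro.sample("sigma", dist.HalfNormal(1.0))
    # Likelihood of the observed data.
    numpyro.sample("y", dist.Normal(intercept + slope * x, sigma), obs=y)

# Synthetic data: y = 2x + 0.5 plus noise.
x = jnp.linspace(0.0, 1.0, 100)
y = 2.0 * x + 0.5 + 0.1 * random.normal(random.PRNGKey(0), (100,))

# NUTS sampling; the array math dispatches through JAX.
mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=1000)
mcmc.run(random.PRNGKey(1), x, y=y)
mcmc.print_summary()
```

On a machine with the GPU build of JAX installed, the same script runs on the accelerator without changes, which is the sense in which the sampling arithmetic gets GPU-accelerated.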
th0ma5 · 4h ago
Oh, I thought this was going to be about the cult-like in-group signaling of LLM advocates, but instead this is something that imagines language patterns as a society, rather than the language patterns of a society being a bias that the models have.
A4ET8a8uTh0_v2 · 1h ago
This sounds dismissive, but it is interesting in two more obvious ways:

1. The interaction appears to mimic human interaction online (trendsetter vs. follower).

2. It shows something some of us have been suggesting for a while: pathways for massive online manipulation campaigns.