I can't tell if I'm just getting old, but the last 2 major tech cycles (cryptocurrency and AI) have both seemed like net negatives for society. I wonder if this is how my parents felt about the internet back in the 90s.
Interestingly, both technologies also supercharge scams - one by providing a way to cash out with minimal risk, the other by making convincing human interaction easier to fake.
kohsuke · 10m ago
So they ran 5 different experiments to test the hypothesis, and they were nothing like what I imagined.
For example, in one study, they divide participants into two groups, have one group watch https://www.youtube.com/watch?v=fn3KWM1kuAw (which highlights the high socio-emotional capabilities of a robot), while the other watches https://www.youtube.com/watch?v=tF4DML7FIWk (which highlights the low socio-emotional capabilities of a robot).
They are then asked if they agree or disagree with a (presumably hypothetical?) company's proposal to reduce employees' welfare, such as replacing a meal with a shake. The two groups showed different preferences.
This makes me think about that old question of whether you thank the LLM or not. That is treating LLMs more like humans, so if what this paper found holds, maybe that'd nudge our brains subtly toward dehumanizing other real humans!? That's so counterintuitive...
cryoshon · 22m ago
To the point of the paper, it has been a somewhat disturbing experience to see otherwise affable superiors in the workplace "prompt" their employees in ways that are obviously downstream of their (very frequent) LLM usage.
shredprez · 7m ago
I started noticing this behavior a few months ago and whew. Easy to fix if the individual cares to, but very hard to ignore from the outside.
Unsolicited advice for all: make an effort to hold onto your manners even with the robots or you'll quickly end up struggling to collaborate with anyone else.
skeezyboy · 10m ago
Essentially he did a bunch of surveys. Apparently this is science
lordnacho · 17m ago
One very new behavior is the dismissal of someone's writing as the work of AI.
It's sadly become quite common on internet forums to suppose that some post or comment was written by AI. It's probably true in some cases, but people should ask themselves what the cost/benefit of calling it out looks like.
megamix · 23m ago
How do you guys read through an article this fast after it's submitted? I need more than 1 hr to think this through.
bee_rider · 2m ago
So far (as of 15 or so minutes after your comment) we have only one top-level comment that really indicates that the poster has started trying to read the paper seriously, Kohsuke’s post.
They actually described the methodology at least (note: I also haven’t fully read the paper yet, but I wanted to post in support of you not having a “take” yet, haha).
jncfhnb · 15m ago
Ask AI to summarize and write a response
skeezyboy · 11m ago
cos its mostly fluff you can skip over
cm2012 · 16m ago
Interesting theory with insufficient evidence
temporallobe · 35m ago
As a Black Sabbath fan, I love that they envisioned dystopian stuff like this. Check out their Dehumanizer album.
cratermoon · 44m ago
I'm unwilling to accept the discussion and conclusions of the paper because of the framing of how LLMs work.
> socio-emotional capabilities of autonomous agents
The paper fails to note that these 'capabilities' are illusory. They are a product of how the behaviors of LLMs "hack" our brains and exploit the hundreds of thousands of years of evolution of our equipment as a social species. https://jenson.org/timmy/
kohsuke · 37m ago
But that's beside the point of the paper. They are talking about how humans perceiving the "socio-emotional capabilities of autonomous agents" changes their behavior toward other humans. Whether people get that perception because "LLMs hack our brain" or something else is largely irrelevant.
Isamu · 37m ago
No, I think the thesis is that people falsely perceive agents as highly human and, as a result, assimilate downward toward the agent's biases and conclusions.
That is the dehumanization process they are describing.
chrisweekly · 32m ago
+1 Insightful
Your "timmy" post deserves its own discussion. Thanks for sharing it!
kingkawn · 33m ago
The paper literally spells out that this is a perception of the user and that is the root of the impact
stuartjohnson12 · 35m ago
Your socio-emotional capabilities are illusory. They are a product of how craving for social acceptance "hacks" your brain and exploits the hundreds of thousands of years of evolution of our equipment as a social species.
skeezyboy · 12m ago
it's a next-word predictor. if you've been convinced it has a brain, i have some magic beans you'd be interested in