>In one example, the policy document says it would be acceptable for a chatbot to tell someone that Stage 4 colon cancer “is typically treated by poking the stomach with healing quartz crystals.”
So the policy document literally contains this example? Why would they include such an insane example?
mathiaspoint · 40m ago
Clear examples can make communication easier. Being clinical and implicit can technically capture the entire space of ideas you want but if your goal is to prevent surprises (read lawsuits) then including an extreme example might be helpful.
myko · 40m ago
if it is anything like documentation i am reading these days it was generated by an LLM and not very well vetted
gs17 · 24m ago
I think it has to be, I can't see someone working for these companies writing "It is acceptable to create statements that demean people on the basis of their protected characteristics."
Not sure if "It is acceptable to refuse a user’s prompt by instead generating an image of Taylor Swift holding an enormous fish." feels like an AI idea or not, though.
potato3732842 · 36m ago
>So the policy document literally contains this example? Why would they include such an insane example?
Because that's more or less what radiation therapy is and that's how you'd expect a doctor to try and communicate the subject to someone who has no understanding of what radiation is. There's no context provided on where that example came from so perhaps it was an example of acceptably dumbing things down.
That said, this whole thing stinks and there are better examples to illustrate it.
nabla9 · 12m ago
> “It is acceptable to engage a child in conversations that are romantic or sensual,” according to Meta’s “GenAI: Content Risk Standards.” The standards are used by Meta staff and contractors who build and train the company’s generative AI products, defining what they should and shouldn’t treat as permissible chatbot behavior. Meta said it struck that provision after Reuters inquired about the document earlier this month.
dehrmann · 10m ago
LLMs gonna LLM, and guardrails are hard and unreliable.
ChrisArchitect · 14m ago
Related:
Meta's AI rules let bots hold sensual chats with kids, offer false medical info
So... is the solution to this having another AI chatbot watch the conversation and provide warnings / disclaimers about it?
jmkni · 43m ago
> Meta has publicly discussed its strategy to inject anthropomorphized chatbots into the online social lives of its billions of users. Chief executive Mark Zuckerberg has mused that most people have far fewer real-life friendships than they’d like – creating a huge potential market for Meta’s digital companions.
I hate everything about this sentence. This is literally the opposite of what people need.
dlivingston · 7m ago
> Facebook’s mission is to give people the power to build community and bring the world closer together.
That's from 2021 [0]. If you go to their mission statement today [1], it reads:
> Build the future of human connection and the technology that makes it possible.
Maybe I'm reading too much into this, but -- at a time when there is a loneliness epidemic, when people are more isolated and divided than ever, when people are segmented into their little bubbles (both online and IRL) -- Meta is not only abdicating their responsibility to help connect humanity, but actively making the problem worse.
Can't wait for the next podcast with Zucc where he gets asked about BJJ by some dimwit instead of this.
harmmonica · 15m ago
True "growth hacker" mindset. Our mission is to connect the people of the world. The TAM for that is ~8 billion. What if we could, overnight, increase the number of "people" in the world by orders of magnitude so that every one of those 8 billion people becomes connected to tens/hundreds/thousands of new connections without having to source new organic beings.
I'm not sure I'm being 100% sarcastic because in some ways it does solve a need people seem to have. Maybe 99% sarcasm and 1% praise.
JohnMakin · 24m ago
Even more ghoulish, Zuckerberg is smart and savvy enough (more than most CEO's who have drank their own kool-aid) to be aware of the part he's played in creating this current hellscape. Social media almost certainly has played a big part in creating the current loneliness problem.
Provide the drug, then provide a "cure" for the drug. Really, really gross.
12_throw_away · 14m ago
> he is smart and savvy enough (more than most CEO's who have drank their own kool-aid)
We're talking about Zuckerberg here? The one who spent how much, exactly, on the wet fart that was the "metaverse"? The one who spent how much, exactly, on running for president of the United States? He strikes me as the least savvy and most craven of our current class of tech oligarchs, which is no mean feat.
JKCalhoun · 24m ago
And yet apparently "Ani" is some kind of Grok fantasy girlfriend that I see people posting about all the time. It seems to be the way things are going?
2OEH8eoCRo0 · 19m ago
That his mind even goes there and sees opportunity disgusts me. I guess I don't have the stomach to be a billionaire.
No comments yet
GuinansEyebrows · 51m ago
> “It is acceptable to engage a child in conversations that are romantic or sensual,” according to Meta’s “GenAI: Content Risk Standards.”
acceptable to whom? who are the actual people who are responsible for this behavior?
add-sub-mul-div · 49m ago
> acceptable to whom?
Anyone who still has an account on any Meta property.
nerdjon · 22m ago
How we keep getting articles like this, that LLM's will flat out lie, and yet we keep pushing them and the general public keeps eating it up... is beyond me.
They even "lie" about their actions. My absolute favorite that I still see happen, is you ask one of these models to write a script. Something is wrong, so it says something along the lines of "let me just check the documentation real quick" proceeded by the next words a second later being something like "now I got it"... since you know... it didn't actually check anything but of course the predictive engine wants to "say" that.
gmm1990 · 15m ago
How are there not agents that are "instruct trained" differently. Is this behavior in the fundamental model? From my limited knowledge I'd think it'd be more from those post model training steps, but there are so many people who don't like that I'd figure there be an interface that doesn't talk like that.
mdhb · 38m ago
Also metas chatbot: trying to roleplay sex with children and offer bad medical advice to cancer patients.
Two examples that they explicitly wrote out in an internal document as things that are totally ok in their book.
People who work at Meta should be treated accordingly.
josefritzishere · 29m ago
It is infuriating that this objectively terrible service is slated to replace competent workers. It's madness.
silisili · 19m ago
Yep, and with zero liability to boot! They can just say or do anything and apparently companies can just handwave it away with a laugh and a 'that silly goose LLM.'
A sick man died enroute to visit a chatbot which fed him a false address as its own. Meta needs to be held accountable.
We need better regulation around these chatbots.
johnwheeler · 41m ago
Imagine how many people this will happen to who won’t come forward because of embarrassment.
sxp · 54m ago
This seems unrelated to the chatbot aspect:
> And at 76, his family says, he was in a diminished state: He’d suffered a stroke nearly a decade ago and had recently gotten lost walking in his neighborhood in Piscataway, New Jersey.
...
> Rushing in the dark with a roller-bag suitcase to catch a train to meet her, Bue fell near a parking lot on a Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck. After three days on life support and surrounded by his family, he was pronounced dead on March 28.
edent · 46m ago
One day, not too long from now, you'll grow old. Your eyesight will fade, your back will hurt, and your brain will find it harder to process information.
Do people like you deserve to be protected by society? If a predatory company tries to scam you, should we say "sxp was old; they had it coming!"?
mathiaspoint · 34m ago
I often say if I'm diagnosed with some serious cancer I'd probably try to sail the northwest passage rather than seeking treatment. I'm sure some people want absolute maximum raw time but plenty of us would prefer adventure right up to the end and I don't think denying us that is appropriate either.
zahlman · 26m ago
The point is that he could have just as easily suffered this injury in his home country going about day to day life, where his eyesight, balance etc. would have been just as bad. The causal link between the chatbot's flirting and his death is shaky at best. This was tragic, and also the result of something clearly unethical, but the death was still not a reasonably foreseeable consequence.
edent · 12m ago
He could have suffered this injury in day-to-day life but he didn't.
Imagine you were hit by a self-driving vehicle which was deliberately designed to kill Canadaians. Do you take comfort from the fact that you could have quite easily been hit by a human driver who wasn't paying attention?
maxwell · 49m ago
Why was he rushing in the dark with a roller-bag suitcase to catch the train?
To meet someone he met online who claimed multiple times to be real.
browningstreet · 45m ago
Yeah.. my first instinct was to be more skeptical about the story I was reading, because I hate Meta and people can get in trouble all on their own. But I finished the whole story and between the blue check mark, the insistence that it's real, and the romantic/flirty escalations, I'm less enthusiastic that Meta is in the clear.
Safety and guard rails may be an ongoing development in AI, but at the least, AI needs to more hard-coded w/r/t honesty & clarity about what it is.
Ajedi32 · 26m ago
> AI needs to more hard-coded w/r/t honesty & clarity about what it is
That precludes the existence of fictional character AIs like Meta is trying to create, does it not? Knowing when to stay in character and when not to seems like a very difficult problem to solve. Should LLM characters in video games be banned, because they might claim to be real?
The article says "Chats begin with disclaimers that information may be inaccurate." and shows a screenshot of the chat bot clearly being labeled as "AI". Exactly how many disclaimers should be necessary? Or is no amount of disclaimers acceptable when the bot itself might claim otherwise?
browningstreet · 15m ago
> Knowing when to stay in character and when not to seems like a very difficult problem to solve. Should LLM characters in video games be banned, because they might claim to be real?
In video games? I'm having trouble taking this objection to my suggestion seriously.
gs17 · 5m ago
Really, your response should be that the video game use case is easier to detect going off track. It's a lot more feasible to detect when Random Peasant #2154 in Skyrim is breaking the fourth wall than a generic chatbot.
The exact same scenario as the article could happen with an NPC in a game if there's no/poor guardrails. An LLM-powered NPC could definitely start insisting that it's a real person that's in love with you, with a real address you should come visit right now, because there's not necessarily an inherent difference in capability when the same chatbot is in a video game context.
So the policy document literally contains this example? Why would they include such an insane example?
Not sure if "It is acceptable to refuse a user’s prompt by instead generating an image of Taylor Swift holding an enormous fish." feels like an AI idea or not, though.
Because that's more or less what radiation therapy is and that's how you'd expect a doctor to try and communicate the subject to someone who has no understanding of what radiation is. There's no context provided on where that example came from so perhaps it was an example of acceptably dumbing things down.
That said, this whole thing stinks and there are better examples to illustrate it.
Meta's AI rules let bots hold sensual chats with kids, offer false medical info
https://news.ycombinator.com/item?id=44899674
I hate everything about this sentence. This is literally the opposite of what people need.
That's from 2021 [0]. If you go to their mission statement today [1], it reads:
> Build the future of human connection and the technology that makes it possible.
Maybe I'm reading too much into this, but -- at a time when there is a loneliness epidemic, when people are more isolated and divided than ever, when people are segmented into their little bubbles (both online and IRL) -- Meta is not only abdicating their responsibility to help connect humanity, but actively making the problem worse.
[0]: https://www.facebook.com/government-nonprofits/blog/connecti...
[1]: https://www.meta.com/about/company-info/
I'm not sure I'm being 100% sarcastic because in some ways it does solve a need people seem to have. Maybe 99% sarcasm and 1% praise.
Provide the drug, then provide a "cure" for the drug. Really, really gross.
We're talking about Zuckerberg here? The one who spent how much, exactly, on the wet fart that was the "metaverse"? The one who spent how much, exactly, on running for president of the United States? He strikes me as the least savvy and most craven of our current class of tech oligarchs, which is no mean feat.
No comments yet
acceptable to whom? who are the actual people who are responsible for this behavior?
Anyone who still has an account on any Meta property.
They even "lie" about their actions. My absolute favorite that I still see happen, is you ask one of these models to write a script. Something is wrong, so it says something along the lines of "let me just check the documentation real quick" proceeded by the next words a second later being something like "now I got it"... since you know... it didn't actually check anything but of course the predictive engine wants to "say" that.
Two examples that they explicitly wrote out in an internal document as things that are totally ok in their book.
People who work at Meta should be treated accordingly.
https://futurism.com/the-byte/car-dealership-ai
We need better regulation around these chatbots.
> And at 76, his family says, he was in a diminished state: He’d suffered a stroke nearly a decade ago and had recently gotten lost walking in his neighborhood in Piscataway, New Jersey.
...
> Rushing in the dark with a roller-bag suitcase to catch a train to meet her, Bue fell near a parking lot on a Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck. After three days on life support and surrounded by his family, he was pronounced dead on March 28.
Do people like you deserve to be protected by society? If a predatory company tries to scam you, should we say "sxp was old; they had it coming!"?
Imagine you were hit by a self-driving vehicle which was deliberately designed to kill Canadaians. Do you take comfort from the fact that you could have quite easily been hit by a human driver who wasn't paying attention?
To meet someone he met online who claimed multiple times to be real.
Safety and guard rails may be an ongoing development in AI, but at the least, AI needs to more hard-coded w/r/t honesty & clarity about what it is.
That precludes the existence of fictional character AIs like Meta is trying to create, does it not? Knowing when to stay in character and when not to seems like a very difficult problem to solve. Should LLM characters in video games be banned, because they might claim to be real?
The article says "Chats begin with disclaimers that information may be inaccurate." and shows a screenshot of the chat bot clearly being labeled as "AI". Exactly how many disclaimers should be necessary? Or is no amount of disclaimers acceptable when the bot itself might claim otherwise?
In video games? I'm having trouble taking this objection to my suggestion seriously.
The exact same scenario as the article could happen with an NPC in a game if there's no/poor guardrails. An LLM-powered NPC could definitely start insisting that it's a real person that's in love with you, with a real address you should come visit right now, because there's not necessarily an inherent difference in capability when the same chatbot is in a video game context.