"Turns out Google made up an elaborate story about me"

48 points | jsheard | 11 comments | 9/1/2025, 2:27:17 PM | bsky.app

Comments (11)

slightwinder · 1m ago
Searching for "benn jordan isreal", the first result for me is a video[0] from a different creator with the exact same title and date. There is no mention of "Benn" in the video, but some mention of Jordan (the country). So maybe this was enough for Google to hallucinate a connection. Highly concerning!

[0] https://www.youtube.com/watch?v=qgUzVZiint0

jsheard · 10m ago
Reading this I assumed it was down to the AI confusing two different Benn Jordans, but nope, the Newsmax guy is called Ryan McBeth. How does that even happen?
frozenlettuce · 4m ago
The model Google uses to handle requests on its search page is probably dumber than the other ones, for cost savings. Not sure that's a smart move, since search with ads is their flagship product. It would be better to have no AI in search at all.
meindnoch · 4m ago
It's not Google's fault. The 6pt text at the bottom clearly says:

"AI responses may include mistakes. Learn more"

gruez · 32s ago
>The 6pt text at the bottom clearly says:

I did inspect element and it's actually 12px (i.e. 9pt). For context, the rest of the text (non-header) is 18px. That seems fine to me? It's small enough to be unobtrusive, but not exactly invisible either.
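The px→pt conversion above can be sanity-checked with the CSS reference definitions (1in = 96px and 1in = 72pt, so 1px = 0.75pt); the pixel values are the ones quoted in the comment:

```python
def px_to_pt(px: float) -> float:
    # CSS: 1in = 96px and 1in = 72pt, hence 1px = 72/96 = 0.75pt
    return px * 72 / 96

print(px_to_pt(12))  # 9.0  -- the disclaimer text
print(px_to_pt(18))  # 13.5 -- the body text
```

So 12px is indeed 9pt, noticeably smaller than the 18px (13.5pt) body copy but well above fine-print territory.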

deepvibrations · 12m ago
The law needs to stand up and make an example here, otherwise this will just continue and at some point a real disaster will occur due to AI.
koolba · 6m ago
> The law needs to stand up and make an example here, otherwise this will just continue and at some point a real disaster will occur due to AI.

What does it mean to "make an example"?

I'm for cleaning up AI slop as much as the next natural-born meat bag, but I also detest a litigious society. The types of legal action that would stop this in the future would immediately be weaponized.

recursive · 4m ago
Weapons against misinformation are good weapons. Bring on the weaponization.
drivingmenuts · 8m ago
In an ideal world, a product that can be harmful is tested privately until there is a reasonable amount of safety in using that product. With AI, it seems like that protocol has been completely discarded in favor of smoke-testing it on the public and damn the consequences.

Of course, investors are throwing so much money at AI and AI is, in turn, buying legislators and heads of government, who are bound and determined to shield them from liability, so …

We are so screwed.

blibble · 14m ago
the "AI" bullshitters need to be liable for this type of wilful defamation

and it is wilful, they know full well it has no concept of truthfulness, yet they serve up its slop output directly into the faces of billions of people

and if this makes "AI" nonviable as a business? tough shit

yapyap · 5m ago
Yikes, as expected, people have started to take the Google AI summary as fact without doing any further research.

We all knew this would happen, but I imagine we all hoped that anyone finding something shocking there would look further into it.

Of course, given the current state of search and general laziness (no dopamine reward for every informative search, versus big dopamine hits if you just make up your mind and keep scrolling the endless feed), that was probably wishful thinking.