Death by AI

208 points by ano-ther | 7/19/2025, 2:35:11 PM | davebarry.substack.com ↗

Comments (69)

ilaksh · 3m ago
That's obviously broken, but part of this is an inherent difficulty with names. One thing they could do would be to have a default question that is always present, like "What other people named [_____] are there?"

That wouldn't solve the problem of mixing up multiple people. But the first problem most people have is probably that it pulls up a person more famous than the one they were actually looking for.

I think Google does have some type of knowledge graph. I wonder how much the AI model uses it.

Maybe it hits the graph, but also some kind of Google search, and then the LLM is something like Gemini Flash Lite and isn't smart enough to work out which search results go with the famous person from the graph and which are just random info from other results.

I imagine that for a lot of names there are different levels of fame, especially across different categories.

It makes me realize that my knowledge graph application may eventually have an issue with using first and last names as entity IDs. It's supposed to be just for an individual's personal info, so I can probably mostly get away with it. But I already see a related issue when analyzing emails, where my different screen names are not easily recognized as being the same person.
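
Something like this sketch is what I'd probably move to (hypothetical names, just to illustrate the idea): opaque entity IDs, with names and screen names as aliases that can resolve to more than one entity.

    import uuid

    class EntityStore:
        def __init__(self):
            self.entities = {}  # opaque id -> attribute dict
            self.aliases = {}   # normalized name -> set of candidate ids

        def add_entity(self, **attrs):
            eid = str(uuid.uuid4())  # opaque id instead of "First Last"
            self.entities[eid] = attrs
            return eid

        def add_alias(self, name, eid):
            self.aliases.setdefault(name.lower(), set()).add(eid)

        def resolve(self, name):
            # A name can map to several entities; the ambiguity stays
            # explicit instead of being silently merged.
            return self.aliases.get(name.lower(), set())

That way "Dave Barry" resolves to a set of candidate entities rather than a single record, and two screen names can alias the same person.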

abathur · 4h ago
A popular local spot has a summary on Google Maps that says:

> Vibrant watering hole with drinks & po' boys, as well as a jukebox, pool & electronic darts.

It doesn't serve po' boys, have a jukebox (though the playlists are impeccable), have pool, or have electronic darts. (It also doesn't really have drinks in the way this implies. It's got beer and a few canned options. No cocktails or mixed drinks.)

A month ago they got a catty one-star review for the misleading description, from someone who really wanted to play pool or darts.

I'm sure the owner reported it. I reported it. I imagine other visitors have as well. At least a month on, it's still there.

FeteCommuniste · 4h ago
I really wish Google had some kind of global “I don’t want any identifiably AI-generated content hitting my retinas, ever” checkbox.

Too much to ask, surely.

derefr · 2h ago
That'd be a bit like expecting Five Guys to cook you something vegetarian. Google are an AI company at this point. If you don't want AI touching your "food", use a search engine not run by an AI company.
Dotnaught · 1h ago
You can append -ai to your searches to omit AI Overview replies. It's not enough, but it's something.
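
For example (a hypothetical query; note that the minus operator also excludes ordinary results matching "ai", so it's a blunt instrument):

    dave barry -ai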
daveguy · 55m ago
If they just put a checkbox by the search bar that keeps state, I wonder what percent would uncheck it.
benrapscallion · 44m ago
It’s called kagi.com
arrowsmith · 12m ago
Tangential, but I just went to Kagi.com to check their pricing, and I was astonished to see that:

- The "Monthly" option is selected by default.

- If you click "Yearly", it tells you the actual full yearly price without dividing it by 12.

That's so rare and refreshing that I'm tempted to sign up just out of respect.

Spivak · 3h ago
You hear a faint whisper from the alleyway: you should try Kagi.

I know it's the HN darling and is probably talked about too much already, but it doesn't have this problem. The only AI stuff is what you specifically ask for, which in your case would be never. And unlike Google, where you're at the whims of the algorithm, you can punish (or just block) AI garbage sites that SEO their way into the organic results. There's also a global toggle to block AI images.

CamperBob2 · 1h ago
That's just Google Maps being Google Maps, as anyone who has used them since 2005 can tell you.

I can see a bright future in blaming things on AI that have nothing to do with AI, at least on here.

brookst · 53m ago
Well my dog died and that never happened before AI.
givemeethekeys · 3h ago
Can one sue for damages? Is it worth getting delisted?
jwr · 6h ago
I'd say this isn't just an AI overview thing. It's a Google thing. Google will sometimes show inaccurate information and there is usually no way to correct it. Various "feedback" forms are mostly ignored.

I had to fight a similar battle with Google Maps, which most people believe to be a source of truth, and it took years until incorrect information was changed. I'm not even sure if it was because of all the feedback I provided.

I see Google as a firehose of information that they spit at me ("feed"); they are too big to be concerned about any inconsistencies, as these don't hurt their business model.

muglug · 6h ago
No, this is very much an AI overview thing. In the beginning Google put the most likely-to-match-your-query result at the top, and you could click the link to see whether it answered your question.

Now, frequently, the AI summaries are on top. The summary model is clearly a very fast, very dumb LLM that's cheap enough to run on webpage text for every search result.

That was a product decision, and a very bad one. Currently a search for "suide side squad" yields

> The phrase "suide side squad" appears to be a misspelling of "Suicide Squad"

weatherlite · 19m ago
> That was a product decision, and a very bad one.

I don't know that it's a bad decision; time will judge it. Also, we can expect the quality of the results to improve over time. I think Google saw a real threat to their search business and had to respond.

cosmical65 · 1h ago
> I'd say this isn't just an AI overview thing. It's a Google thing. Google will sometimes show inaccurate information and there is usually no way to correct it.

Well, in this case the inaccurate information is shown because the AI overview is combining information about two different people, not because the sources are wrong. With traditional search, any given webpage would be talking about just one of the two people and contain only information about him. Thus, I'd say this problem is specific to the AI overview.

PontifexMinimus · 3h ago
> It's a Google thing. Google will sometimes show inaccurate information and there is usually no way to correct it.

Surely there is a way to correct it: getting the issue on the front page of HN.

hughw · 6h ago
Well, it was accurate if you were asking about the Dave Barry in Dorchester.
omnicognate · 5h ago
He won a Pulitzer too? Small world.
o11c · 5h ago
I remember when the biggest gripe I had with Google was that when I searched for Java documentation (by class name), it defaulted to showing me the version for 1.4 instead of 6.
sroussey · 2h ago
Same problem with LLMs, particularly if a new version was released in the last year.
kjkjadksj · 3h ago
Google Maps is so bad with its auto-generated content. Ultra-private country club? Let's mark the cart paths as full bike paths. Cemetery? Also bike paths. Random spit of sidewalk and grass between an office building and its parking lot? Believe it or not, also bike paths.
xp84 · 1h ago
I mean, that last one sounds functionally useful, since it would indeed be better to take the random concrete paths inside an office property (that wasn’t a closed campus) than to ride on the expressway that fronts it, if the “paths” are going where you’re going.
sethherr · 2h ago
Biking is great tho
jh00ker · 6h ago
I'm interested in how the answer will change once his article gets indexed. "Dave Barry died in 2016, but he continues to dispute this fact to this day."
jedimastert · 2h ago
I recently saw that a band called Dutch Interior had Meta AI hallucinate straight-up slander about how their band is linked to white supremacists and far-right extremists:

https://youtube.com/shorts/eT96FbU_a9E?si=johS04spdVBYqyg3

zaptrem · 5h ago
A few versions of that overview were not incorrect: there actually was another Dave Barry who did die at the time mentioned. Why does this Dave Barry believe he has more of a right to be the one pointed to for the query "What happened to him," when nothing has happened to him, but something most certainly did happen to the other Dave Barry (death)?
masswerk · 4h ago
The problem being: if this is listed among other details and links regarding the Bostonian Dave Barry, there's a clear and unambiguous context established. So it is wrong.

The versions with "Dave Barry, the humorist and Pulitzer Prize winner, passed away last November 20…" and "Dave Barry, a Bostonian … died on November 20th…" are also rather unambiguous regarding who this might be about. The point being, even if the particular identity of the subject is established by the embedding context, that identity is still crucial to the meaning of these utterances.

exitb · 4h ago
When you google his name, the summaries are part of a top section that's clearly pointing to Dave Barry, the author. BTW, when I searched for him, the page said that he's still alive, but sourced this information from a Wikipedia article about Dave Berry, a musician.
alexmorley · 5h ago
Even those versions could well have been interleaved with other AI summaries about Dave Barry that referred to OP, without disambiguating which was about whom.

It'd be ideal if it disambiguated à la Wikipedia.

dingnuts · 4h ago
Because the details about the activist Dave Barry appeared in a subsection about comedian Dave Barry with the title "What happened to Dave Barry"; that's why. Any human encountering the information would have read it in the context of the comedian, a context the model forgot partway into its own subsection.

That's why this Dave Barry has a right. It's a subsection.

It'd be like opening Dave Barry (comedian) on Wikipedia and finding that, halfway through the article, a subsection starts detailing the death of a different Dave Barry.

ChrisMarshallNY · 5h ago
Dave Barry is the best!

That is such a classic problem with Google (from long before AI).

I am not optimistic about anything being changed from this, but hope springs eternal.

Also, I think the trilobite is cute. I have a [real fossilized] one on my desk. My friend stuck a pair of glasses on it, because I'm an old dinosaur, but he wanted to go back even further.

throwup238 · 1h ago
You may enjoy this wonderful site: https://www.trilobites.info/
ChrisMarshallNY · 1h ago
Cool!

The site structure is also fairly prehistoric!

yalogin · 2h ago
I had a similar experience with Meta's AI. Through their WhatsApp interface I tried for about an hour to get a picture generated. It kept restating everything I asked for correctly, but it never arrived at the picture; it stayed far from what I asked for, at best getting 70% of the way there. This and many other interactions with many LLMs made me realize one thing: once the LLM starts hallucinating, it's really tough to steer it away. There is no fixing it.

I don’t know if this is a fundamental problem with the llm architecture or a problem with proper prompts.

ciconia · 32m ago
> It was like trying to communicate with a toaster.

Yes, that's exactly what AI is.

_ache_ · 6h ago
Can you please re-consult a physician? I just checked on ChatGPT, and I'm pretty confident you are dead.
devinplatt · 6h ago
This reminds me a lot of the special policies Wikipedia has developed through experience about sensitive topics, like biographies of living persons, deaths, etc.
eloeffler · 6h ago
I know one story that may have been such an experience. It's about the German Wikipedia, and I don't know what the policies there actually are.

A German 90s/2000s rapper (Textor, MC of Kinderzimmer Productions) produced a radio feature about facts and how hard it can be to prove them.

One personal example he added was about his own Wikipedia article, which stated that his mother used to be a famous jazz singer in her birth country, Sweden. Except she never was. The story had been added to an album review in a rap magazine years before the article was written. Textor explains that this is part of 'realness' in rap, which has little to do with facts and more with attitude.

When they approached the German Wikipedia, it was very difficult to change this 'fact' in his mother's biography. The information about her had been published in a newspaper, and she could not immediately prove who she was. Unfortunately, Textor didn't finish the story and moved on to the next topic in the radio feature.

pyman · 6h ago
I'm worried about this. Companies like Wikipedia spent years trying to get things right, and now suddenly Google and Microsoft (including OpenAI) are using GenAI to generate content that, frankly, can't be trusted because it's often made up.

That's deeply concerning, especially when these two companies control almost all the content we access through their search engines, browsers and LLMs.

This needs to be regulated. These companies should be held accountable for spreading false information or rumours, as it can have unexpected consequences.

weatherlite · 14m ago
> I'm worried about this. Companies like Wikipedia spent years trying to get things right,

Did they? Lots of people (and some research verifies this) think it has a major left-leaning bias; so while usually not making up any facts, editors still cherry-pick whatever facts fit the narrative and leave all else aside.

Timwi · 2h ago
Wikipedia is not a company, it's a website.

The organization that runs the website, the Wikimedia Foundation, is also not a company. It's a nonprofit.

And the Wikimedia Foundation has not “spent years trying to get things right”, assuming you're referring to facts posted on Wikipedia. That was in fact a bunch of unpaid volunteer contributors, many of whom are anonymous and almost all of whom are unaffiliated with the Wikimedia Foundation.

Aurornis · 5h ago
> This needs to be regulated. They should be held accountable for spreading false information or rumours,

Regulated how? Held accountable how? If we start fining LLM operators for pieces of incorrect information, you might as well stop serving the LLM to that country.

> since it can have unexpected consequences

Generally you hold the person who takes action accountable. Claiming an LLM told you bad information isn’t any more of a defense than claiming you saw the bad information on a Tweet or Reddit comment. The person taking action and causing the consequences has ownership of their actions.

I recall the same hand-wringing over early search engines: there was a debate about search engines indexing bad information, and calls for holding them accountable for indexing incorrect results. Same reasoning: there could be consequences. The outrage died out as people realized they were tools to be used with caution, not fact-checked and carefully curated encyclopedias.

> I'm worried about this. Companies like Wikipedia spent years trying to get things right,

Would you also endorse the same regulations against Wikipedia? Should Wikipedia get fined every time incorrect information is found on the site?

EDIT: The parent comment was edited while I was replying, adding the part about outside the US. I welcome some country trying to regulate LLMs to hold them accountable for inaccurate results, so we'd have some precedent for how bad an idea that is and for how many citizens would switch to VPNs to access the LLM providers that get turned off for their country in response.

pyman · 5h ago
If Google accidentally generates an article claiming a politician in XYZ country is corrupt the day before an election, then quietly corrects it after the election, should we NOT hold them accountable?

Other companies have been fined for misleading customers [0] after a product launch. So why make an exception for Big Tech outside the US?

And why is the EU the only bloc actively fining US Big Tech? We need China, Asia and South America to follow their lead.

[0] https://en.m.wikipedia.org/wiki/Volkswagen_emissions_scandal

jdietrich · 4h ago
Volkswagen intentionally and persistently lied to regulators. In this instance, Google confused one Dave Barry with another Dave Barry. While it is illegal to intentionally deceive for material gain, it is not generally illegal to merely be wrong.
pyman · 4h ago
This is exactly why we need to regulate Big Tech. Right now, they're saying: "It wasn't us, it was our AI's fault."

But how do we know they're telling the truth? How do we know it wasn't intentional? And more importantly, who's held accountable?

While Google's AI made the mistake, Google deployed it, branded it, and controls it. If this kind of error causes harm (like defamation, reputational damage, or interference in public opinion), intent doesn't necessarily matter in terms of accountability.

So while it's not illegal to be wrong, the scale and influence of Big Tech means they can't hide behind "it was the AI, not us."

blibble · 4h ago
> If we start fining LLM operators for pieces of incorrect information, you might as well stop serving the LLM to that country.

sounds good to me?

pyman · 4h ago
+1

Fines, when backed by strong regulation, can lead to more control and better quality information, but only if companies are actually held to account.

h2zizzle · 1h ago
Grew up reading Dave's columns, and managed to get ahold of a copy of Big Trouble when I was in the 5th grade. I was probably too young to be reading about chickens being rubbed against women's bare chests and "sex pootie" (whatever that is), but the way we were being propagandized during the early Bush years, his was an extremely welcome voice of absurdity-tinged wisdom, alongside Aaron McGruder's and Gene Weingarten's. Very happy to see his name pop up and that he hasn't missed a beat. And that he's not dead. /Denzel

I also hope that the AI and Google duders understand that this is most people's experience with their products these days. They don't work, and they twist reality in ways that older methods didn't (couldn't, because of the procedural guardrails and direct human input and such). And no amount of spin is going to change this perception - of the stochastic parrots being fundamentally flawed - until they're... you know... not. The sentiment management campaigns aren't that strong just yet.

foobarbecue · 55m ago
"for now we probably should use it only for tasks where facts are not important, such as writing letters of recommendation and formulating government policy."
ChrisMarshallNY · 5h ago
This brings this classic to mind: https://www.youtube.com/watch?v=W4rR-OsTNCg
rf15 · 7h ago
So many reports like this; it's not a question of working out the kinks. Are we getting close to our very own Stop the Slop campaign?
weatherlite · 11m ago
> Are we getting close to our very own Stop the Slop campaign?

I don't think so. We read about the handful of failures, while there are billions of successful queries every day; in fact, I think AI Overviews is sticky and here to stay.

randcraw · 7h ago
Yeah, after working daily with AI for a decade in a domain where it _does_ work predictably and reliably (image analysis), I continue to be amazed at how many of us trust LLM-based text output as being useful. If any human source got their facts wrong this often, we'd surely dismiss them as a counterproductive imbecile.

Or elect them President.

BobbyTables2 · 7h ago
HAL 9000 in 2028!
locallost · 7h ago
I am beginning to wonder why I use it, but the idea of it is so tempting. Google it and get stuck because it's difficult to find, or ask and get an instant response: it's not hard to guess which one is more inviting, but it ends up being a huge time sink anyway.
trod1234 · 7h ago
Regulation with active enforcement is the only civil way.

The whole point of regulation is to step in when the profit motive pushes companies toward ends destructive to the majority of society. Absent regulation, companies are legally obligated to seek profit above all else.

Aurornis · 6h ago
> Regulation with active enforcement is the only civil way.

What regulation? What enforcement?

These terms are useless without details. Are we going to fine LLM providers every time their output is wrong? That’s the kind of proposition that sounds good as a passing angry comment but obviously has zero chance of becoming a real regulation.

Any country that instituted a regulation like that would see all of the LLM advancements and research instantly leave and move to other countries. People who use LLMs would sign up for VPNs and carry on with their lives.

ViscountPenguin · 2h ago
A very simple example would be a mandatory mechanism for correcting mistakes in prebaked LLM outputs, and an ability to opt out of things like Gemini AI Overview on pages about you. Regulation isn't all or nothing; viewing it like that is reductive.
trod1234 · 5h ago
Regulations exist to override profit motive when corporations are unable to police themselves.

Enforcement ensures accountability.

Fines don't do much in a fiat money-printing environment.

Enforcement is accountability, the kind that stakeholders pay attention to.

Something appropriate would be: if AI is used in a safety-critical or life-sustaining environment and harm or loss is caused, those who chose to use it are guilty until they prove their innocence. I think that would be sufficient, not just civilly but also criminally, and that person and decision must be documented ahead of time.

> Any country that instituted a regulation like that would see all of the LLM advancements and research instantly leave and move to other countries.

This is a fallacy. It's a spectrum: research would still occur, tempered by law and accountability, instead of the wild west, where it's much more profitable to destroy everything through chaos. Chaos is quite profitable until it spreads systemically and ends everything.

AI integration at a point where it can impact the operation of nuclear power plants through interference (perceptual or otherwise) is just asking for a short path to extinction.

It's quite reasonable that the needs of national security trump private business making profit in a destructive way.

Ukv · 3h ago
> Something appropriate would be: if AI is used in a safety-critical or life-sustaining environment and harm or loss is caused, those who chose to use it are guilty until they prove their innocence. I think that would be sufficient, not just civilly but also criminally

Would this guilty-until-proven-innocent rule apply also to non-ML code and manual decisions? If not, I feel it's kind of arbitrarily deterring certain approaches potentially at the cost of safety ("sure this CNN blows traditional methods out of the water in terms of accuracy, but the legal risk isn't worth it").

In most cases I think it'd make more sense to have fines and incentives for above-average and below-average incident rates (and liability for negligence in the worse cases), then let methods win/fail on their own merit.

trod1234 · 1h ago
> Would this guilty-until-proven-innocent rule apply also to non-ML code and manual decisions?

I would say yes, because the person deciding must be the one making the entire decision; but there are many examples where someone is paid to just rubber-stamp decisions already made, letting the person who decided to implement the solution off scot-free.

The mere presence of AI (anything based on the underlying work of perceptrons) being used, accompanied by a loss, should prompt a thorough review, which corporations are currently incapable of performing for themselves due to a lack of consequences/accountability. Lack of disclosure, and the limits of current standing, are another issue that really requires this approach.

The problem with fines is that they don't provide the needed incentives to large entities, as a result of money-printing through debt issuance, or indirectly through government contracts. It's also far easier for these entities, as market leaders, to employ corruption to work around the fine later. We've seen this a number of times in various markets/sectors, like JPM and the 10+ year silver price-fixing scandal.

Merit based on subjective rates isn't something that can be enforced, because it is so easily manipulated. Gross negligence already exists and occurs frighteningly often, but it never makes it to court, because proof often requires showing standing to get discovery, which isn't generally granted absent a smoking gun or the whim of a judge.

Bad things certainly happen where no one is at fault, but most business structures today are given far too much leeway and have promoted the three Ds. It's all about: deny, defend, depose.

Ukv · 5m ago
> The mere presence of AI (anything based on the underlying work of perceptrons) [...]

Why single out based on underlying technology? If for instance we're choosing a tumor detector, I feel what's relevant is "Method A has been tested to achieve 95% AUROC, method B has been tested to achieve 90% AUROC" - there shouldn't be an extra burden in the way of choosing method A.

And it may well be that the perceptron-based method is the one with lower AUROC - just that it should then be discouraged because it's worse than the other methods, not because a special case puts it at a unique legal disadvantage even when safer.

> The problem with fines is that they don't provide the needed incentives to large entities, as a result of money-printing through debt issuance, or indirectly through government contracts.

Large enough fines/rewards should provide large enough incentive (and there would still be liability for criminal negligence where there is criminal negligence). Those government contracts can also be conditioned on meeting certain safety standards.

> Merit based on subjective rates isn't something that can be enforced

We can and do measure things like incident rates, and we have government agencies that perform/require safety testing and can block products from market. Not always perfect, but it certainly seems better than the company just picking a scapegoat (and only if they used a certain technology).

jongjong · 5h ago
Maybe it's a genuine problem with AI that it can only hold one idea, one possible version of reality, at any given time. Though I guess many humans have the same issue. I first heard of this idea from Peter Thiel when he described what he looks for in a founder. It seems increasingly relevant to our social structure that the people and systems who make important decisions be able to hold multiple conflicting ideas without ever fully accepting one or the other. Conflicting ideas create decision paralysis of varying degrees, which is useful at times. It seems like an important feature to implement in AI.

It's interesting that LLMs produce each output token as a probability distribution, but it appears that in order to generate the next token (itself expressed as a distribution), the model has to pick a specific word as the last token. It can't just build more probabilities on top of previous probabilities; it has to collapse the previous token's probabilities as it goes?
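
A toy sketch of the sampling loop I have in mind (the "model" here is a random stub, not a real LLM; names are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    VOCAB = 1000  # toy vocabulary size

    def toy_model(tokens):
        # Stand-in for a real LM: one logit per vocabulary entry.
        return rng.standard_normal(VOCAB)

    def sample_next(logits, temperature=1.0):
        z = logits / temperature
        probs = np.exp(z - z.max())
        probs /= probs.sum()
        # The "collapse": commit to one concrete token id; only that id,
        # not the whole distribution, is fed back into the next step.
        return int(rng.choice(VOCAB, p=probs))

    tokens = [0]  # arbitrary start token
    for _ in range(5):
        tokens.append(sample_next(toy_model(tokens)))

(Decoding schemes like beam search sidestep a single collapse by carrying several candidate sequences, but each candidate is still a sequence of committed tokens.)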

herval · 5h ago
I'm not sure that's the case, and it's quite easily tested: if you ask an LLM any question, then doubt its response, it'll change its mind and offer a different interpretation. That's an indication it holds multiple interpretations, depending on how you ask; otherwise it'd dig in.

You can also see decision paralysis in action if you implement CoT: it's common to see the model "pondering" a bunch of possible options before picking one.

tinyhouse · 3h ago
This is the funniest thing I read this week. Lol.
draw_down · 7h ago
Man, this guy is still doing it. Good for him! I used to read his books (compendia of his syndicated column) when I was a kid.
SoftTalker · 6h ago
Dave Barry is dead? I didn't even know he was sick.
hibert · 7h ago
Leave it to a journalist to play chicken with one of the most powerful minds in the world on principle.

Personally, if I got a resurrection from it, I would accept the nudge and do the political activism in Dorchester.