I really hope this stays up, despite the degree of politics involved. I think this situation is a perfect example of how AI hallucinations and lack of accuracy could significantly impact our lives going forward. A very nuanced and serious topic, with lots of back and forth, distilled down to headlines by any and every source: it's a terrifying reality, especially if we aren't able to communicate to the public how these tools work (if they even care to learn). At least when humans did this, they knew at some level that they had at least skimmed the information on the person or topic.
geerlingguy · 4h ago
I've had multiple people copy and paste AI conversations and results in GitHub issues, emails, etc., and there are I think a growing number of people who blindly trust the results of any of these models... including the 'results summary' posted at the top of Google search results.
Almost every summary I have read through contains at least one glaring mistake, but if it's something I know nothing about, I could see how easy it would be to just trust it, since 95% of it seems true/accurate.
"Trust, but verify" is all the more relevant today. Except I would discount even the trust.
Aurornis · 4h ago
> I've had multiple people copy and paste AI conversations and results in GitHub issues, emails, etc.,
A growing number of Discords, open source projects, and other spaces where I participate now have explicit rules against copying and pasting ChatGPT content.
When there aren’t rules, many people are quick to discourage LLM copy and paste. “Please don’t do this”.
The LLM copy and paste wall of text that may or may not be accurate is extremely frustrating to everyone else. Some people think they’re being helpful by doing it, but it’s quickly becoming a social faux pas.
tavavex · 3h ago
> When there aren’t rules, many people are quick to discourage LLM copy and paste. “Please don’t do this”.
This doesn't seem to be universal across all people. The techier crowd, the kind of people who may not immediately trust LLM content, will try to prevent its usage. You know, the type of people to run Discord servers or open-source projects.
But completely average people don't seem to care in the slightest. The kind of people who are completely disconnected from technology just type in whatever, pick the parts they like, and then parade the LLM output around: "Look at what the all-knowing truth machine gave me!"
Most people don't care and don't want to care.
Aurornis · 28m ago
They’ll get there. Tech people have been exposed to it longer. They’ve been around long enough to see people embarrassed by LLM hallucinations.
People who are newer to it (most people) think it's so amazing that errors are forgivable.
simonw · 6m ago
If anything, I expect this to get worse.
The problem is that ChatGPT results are getting significantly better over time. GPT-5 with its search tool outputs genuinely useful results without any glaring errors for the majority of things I throw at it.
I'm still very careful not to share information I found using GPT-5 without verifying it myself, but as the quality of results goes up, the social stigma against sharing them is likely to fade.
lawlessone · 3h ago
I see it with comments here sometimes: "I asked ChatGPT about Y." Really annoying. We all could have asked ChatGPT; we didn't.
ljm · 36m ago
I've had some conversations where the other person goes into ChatGPT to answer a question while I'm in the process of explaining a solution, and then says "GPT says this, look…"
Use an agent to help you code or whatever all you want, I don’t care about that. At least listen when I’m trying to share some specific knowledge instead of fobbing me off with GPT.
If we’re both stumped, go nuts. But at least put some effort into the prompt to get a better response.
bboygravity · 1h ago
Hi, I'm from 1 year in the future. None of what you typed applies anymore.
crashabr · 4m ago
I think you messed up something with your time-travelling setup. We're in the timeline where GPT-5 did not become the all-powerful sentient AI that AI boosters promised us. Which timeline are you from?
leeoniya · 3h ago
> but if it's something I know nothing about, I could see how easy it would be to just trust it, since 95% of it seems true/accurate.
We all think of ourselves as understanding the tradeoffs of this tech and knowing how to use it responsibly. And we here may be right. But the typical person wants to do the least amount of effort and thinking possible. Our society will evolve to reflect this, it won't be great, and it will affect all of us no matter how personally responsible some of us remain.
iotku · 4h ago
I consider myself pretty technically literate, and not the worst at programming (though certainly far from the very best). Even so, I can spend plenty of time arguing with LLMs, which will give me plausible-looking but extremely broken answers to some of my programming problems.
In the programming domain I can at least run something and see it doesn't compile or work as I expect, but you can't verify that a written statement about someone/something is the correct interpretation without knowing the correct answer ahead of time. To muddy the waters further, things work just well enough on common knowledge that it's easy to believe it could be right about uncommon knowledge which you don't know how to verify. (Or else you wouldn't be asking it in the first place)
add-sub-mul-div · 3h ago
Even with code, "seeing" a block of code working isn't a guarantee there's not a subtle bug that will expose itself in a week, in a month, in a year under the right conditions.
giantrobot · 2h ago
I've pointed this out a lot and I often get replies along the lines of "people make mistakes too". While this is true, LLMs lack the institutional memory that leads to decisions. Even good reasoning models can't reliably tell you why they wrote some code the way they did when asked to review it. They can't even reliably run tests, since they'll hardcode passing values.
The same code out of an intern or junior programmer you can at least walk through their reasoning on a code review. Even better if they tend to learn and not make that same mistake again. LLMs will happily screw up randomly on every repeated prompt.
The hardest code you encounter is code written by someone else. You don't have the same mental model or memories as the original author. So you need to build all that context and then reason through the code. If an LLM is writing a lot of your code you're missing out on all the context you'd normally build writing it.
freeopinion · 4h ago
prompt> use javascript to convert a unix timestamp to a date in 'YYYY-MM-DD' format using Temporal
If you were considering purchasing a biology textbook and spot-read two chapters, what if you found the following?
In the first chapter it claimed that most adult humans have 20 teeth.
In the second chapter you read that female humans have 22 chromosomes and male humans have 23.
You find these claims in the 24 pages you sample. Do you buy the book?
Companies are paying huge sums to AI companies with worse track records.
Would you put the book in your reference library if somebody gave it to you for free? Services like Google or DuckDuckGo put their AI-generated content at the top of search results with these inaccuracies.
[edit: replace paragraph that somehow got deleted, fix typo]
freeopinion · 2h ago
Google distinguished itself early with techniques like PageRank that put more relevant content at the top of their search results.
Is it too late for a rival to distinguish itself with techniques like "Don't put garbage AI at the top of search results"?
eszed · 3h ago
I mean... Yes? That looks correct to me°, but it's been a minute since I worked with Temporal, so I'd run it myself and examine the output before I cut and paste.
Or have I missed your point?
---
°Missing a TZ assertion, but I don't remember what happens by default. Zulu time? I'd hope so, but that reinforces my point.
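For reference, pinning the zone explicitly would look something like this. This is only a sketch assuming the current Temporal proposal (not necessarily what the AI produced); the function name and example values are mine:

```javascript
// Convert a Unix timestamp (in seconds) to 'YYYY-MM-DD' in an explicit time zone.
// Requires a runtime (or polyfill) that ships the Temporal proposal.
function unixToDateString(unixSeconds, timeZone = 'UTC') {
  return Temporal.Instant
    .fromEpochMilliseconds(unixSeconds * 1000) // Instant's factory takes milliseconds
    .toZonedDateTimeISO(timeZone)              // attach the zone explicitly; no locale involved
    .toPlainDate()
    .toString();                               // PlainDate.toString() is ISO 8601: 'YYYY-MM-DD'
}

console.log(unixToDateString(1700000000));                     // '2023-11-14' in UTC
console.log(unixToDateString(1700000000, 'Pacific/Auckland')); // '2023-11-15' across the date line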
nosianu · 3h ago
I would also read the documentation. In the given example, you don't know whether the desired fixed format "YYYY-MM-DD" might depend on some locale setting and only works because you happen to have the correct one in that test console.
freeopinion · 3h ago
Part of my point is this: If you have to read through the docs to see if the answer can be trusted, why didn't you just read the docs to begin with instead of asking the AI?
niccl · 29m ago
It's just dawned on me that one possible reason is that you don't know which docs to read. I've recently been forced into learning some JavaScript. Given the original question, I wouldn't have known where to start. Now the AI has given me a bunch of things I can look at to see if it's the right thing.
haswell · 1h ago
One of the arguments used to justify the mass-ingestion of copyrighted content to build these models is that the resulting model is a transformative work, and thus fair use.
If this is indeed true, it seems like Google et al must be liable for output like this according to their own argument, i.e. if the work is transformative, they can’t claim someone else is liable.
These companies can’t have their cake and eat it too. It’ll be interesting to see how this plays out.
jaccola · 4h ago
This story will probably become big enough to drown out the fake video and the AI (which is presumably being fed top n search results) will automatically describe this fake video controversy instead...
ants_everywhere · 2h ago
Has anyone independently confirmed the accuracy of his claim?
reaperducer · 2h ago
> I think this is a situation that is a perfect example of how AI hallucinations/lack of accuracy could significantly impact our lives going forward.
This has been a Google problem for decades.
I used to run a real estate forum. Someone once wrote a message along the lines of "Joe is a really great real estate agent, but Frank is a total scumbag. Stole all my money."
When people would Google Joe, my forum was the first result. And the snippet Google made from the content was "Joe... is a total scumbag. Stole all my money."
I found out about it when Joe lawyered up. That was a fun six months.
slightwinder · 5h ago
Searching for "benn jordan isreal", the first result for me is a video[0] from a different creator, with the exact same title and date. There is no mentioning of "benn" in the video, but some mentioning of jordan (the country). So maybe, this was enough for Google to hallucinate some connection. Highly concerning!
This is almost certainly what happened. Google's AI answers aren't magic -- they're just summarizing across searches. In this case, "Israel" + "Jordan" pulled back a video with views opposite to the author's.
It's somewhat less obvious to debug, because it'll pull more context than Google wants to show in the UI. You can see this happening in AI mode, where it'll fire half a dozen searches and aggregate snippets of 100+ sites before writing its summary.
sigmoid10 · 2h ago
There is actually a musician called Benn Jordan who was impersonated by someone on twitter who posted pro-Israel content [1]. That content is no longer available, but it might have snuck into the training data, i.e. Benn Jordan = pro Israel. This might also have been set in relation to the other Jordan's previous pro-Palestine comments, eventually misattributing the "I was wrong about Israel" video. It's still a clear fuckup - but I could see humans doing something similar when sloppily accruing information.
So this all kind of looks on its face like it's just him trolling. There may be more than what's on the face, of course. For example, it could be someone else trolling him with his own methods.
sigmoid10 · 1h ago
That makes it even more believable that an LLM screwed up. I mean what are you supposed to believe at this point?
ants_everywhere · 1h ago
I guess so.
But the situation we're in is that someone who does misinformation is claiming an LLM believed misinformation. Step one would be getting someone independent, ideally with some journalistic integrity, to verify Benn's claims.
Generally speaking, if your Aunt Sally claims she ate strawberry cake for her birthday, the LLM or Google search has no way of verifying that. If Aunt Sally uploads a faked picture of herself eating strawberry cake, the LLM is not going to go to her house and try to find out the truth.
So if Aunt Sally is lying about eating strawberry cake, it's not clear what search is supposed to return when you ask whether she ate strawberry cake.
ludicrousdispla · 3h ago
Interesting, I wonder what Google AI has to say about Stove Top Stuffing given its association with Turkey.
underdeserver · 3h ago
Ironic, that Google enshittifying their search results is hurting what they hope is their next cash cow, AI.
gumby271 · 3h ago
I honestly don't know if people even care that the search result summaries are completely wrong the majority of the time. Most people I know see an answer given by Google and just believe it. To them that's the value, the accuracy doesn't really matter. I hope it ends up killing Google, but for the majority the shitty summary has replaced even shittier search results. On the surface it's a huge improvement, even if it's just distilled garbage.
larodi · 1h ago
There was a joke like 15 years ago:
in Googlis non est, ergo non est
which sums up very well how people are super biased to believe the search results.
glenstein · 5h ago
That raises a fascinating point, which is whether search results that default to general topics are ever the basis for LLM training or information retrieval as a general phenomenon.
slightwinder · 4h ago
Yes, any human will most likely recognize the result as random noise, as they will know whom they are searching for and see that this is not a video from or about Benn. But AI, taking all results as valid, will obviously struggle with this, condensing it into bullshit.
Thinking about it, it's probably not even a real hallucination in the normal AI meaning, but simply poor evaluation and handling of data. Gemini is likely evaluating the new data on the spot, trusting it blindly; and without any humans preselecting and writing the results, it's failing hard. Which shows that there is no real thinking happening, only rearrangement of the given words.
LorenPechtel · 3h ago
The fundamental problem is AI has no ability to recognize data quality. You'll get something like the best answer to the question but with no regard for the quality of that answer. Humans generally recognize they're looking at red herrings, AIs don't.
reactordev · 4h ago
I think the answer is clear
bdhcuidbebe · 54m ago
Just wait until you realize how AI translation "works".
It's literally bending languages into American English with other words.
nerevarthelame · 2h ago
Most people Google things they're unfamiliar with, and whatever the AI Overview generates will seem reasonable to someone who doesn't know better. But they are wrong a lot.
It's not just the occasional major miss, like this submission's example, or the recommendation to put glue on a pizza. I highly recommend Googling a few specific topics you know well. Read each overview entirely and see how often it gets something wrong. For me, only 1 of 5 overviews didn't have at least 1 significant error. The plural of "anecdote" is not "data," but it was enough for me to install a Firefox extension that blocks them.
sigmoid10 · 2h ago
I found it is very accurate for legacy static-web content. E.g. if you ask something that could easily be answered by looking at wikipedia or which has been answered in blogs, it will usually be right.
But for anything dynamic (i.e. all of social media), it is very easy for the AI overview to screw up. Especially once it has to make relational connections between things.
In general people expect too much here. Google AI overview is in no way better than Claude, Grok or ChatGPT with web search. In fact it is inferior in many ways. If you look for the kind of information which LLMs really excel at, there's no need to go to Google. And if you're not, then you'll also be better off with the others. This whole thing only exists because google is seeing OpenAI eat into its information search monopoly.
retsibsi · 2h ago
I've found that the AI Overview is more accurate than it used to be... which makes it much worse in practice. It used to be wrong often enough, and obviously enough, that it was easy to ignore. Now it's often right and usually plausible, which makes it very tempting to rely on.
chao- · 42m ago
Here's my paraphrase of the best description I've heard of this kind of "seems reasonable" AI misinformation. I wish I could credit where I first heard it:
AI summaries are akin to generalist podcasts, or YouTube video essayists, taking on a technical or niche topic. They present with such polish and confidence that they seem like they must be at least mostly correct. Then you hear them present or discuss a topic you have expertise in, and they are frustratingly bad. Sometimes wrong, but always at least deficient. The polish and confidence inappropriately boost the "correctness signal" for anyone without a depth of knowledge.
Then you consider that 90% of people have not developed sophisticated knowledge about 90% of topics (myself included), and it begins to feel a bit grim.
MobiusHorizons · 4h ago
From the AI hallucination:
> Video and trip to Israel
> On August 18, 2025, Benn Jordan uploaded a YouTube video titled "I Was Wrong About Israel: What I Learned on the Ground", which detailed his recent trip to Israel.
Reading this I assumed it was down to the AI confusing two different Benn Jordans, but nope, the guy who actually published that video is called Ryan McBeth. How does that even happen?
frozenlettuce · 5h ago
The model that Google is using to handle requests on their search page is probably dumber than the other ones, for cost savings. Not sure if this is a smart move, as search with ads is their flagship product. It would be better to have no AI in search at all.
lioeters · 5h ago
> better having no ai in search
But then the product manager wouldn't get a promotion. They don't seem to care about providing a good service anymore.
> probably dumber than the other ones for cost savings
It's amusing that anyone at Google thinks offering subpar, error-prone AI search results would not damage their reputation even further than it already is.
It's making stuff up, giving bad or fatal advice, promoting false political narratives, stealing content and link juice from actual content creators. They're abusing their anti-competitively dominant position, and just burning good will like it's gonna last forever. Maybe they're too big to fail, and they no longer need reputation or the trust of the public.
hattmall · 4h ago
Bad information is inherently better for Google than correct information. If you get the correct information, you only do one search. If you get bad or misleading information that requires you to perform more searches, that is definitely better for Google.
delichon · 4h ago
This is a variation of the parable of the broken window. It is addressed in what may be the most influential essay in modern economics, "That Which is Seen, and That Which is Not Seen."
What's your point here? That Google wouldn't do this because "the broken window fallacy is a fallacy"?
We have them on the record in multiple lawsuits stating that they did exactly this.
lioeters · 3h ago
That makes sense: poor search results lead to more engagement. That is devious. Now that you've pointed it out I can't unsee it.
maltelandwehr · 3h ago
In some cases, lower quality search results lead to more ad clicks and thus more revenue.
chabes · 4h ago
Perverse incentive structure
gumby271 · 2h ago
I don't think most people care if the information is true; they just want an answer. Google destroyed the value of search by encouraging and promoting SEO blog spam, so the horrible AI summary that confidently tells you some lie can now be sold as an improvement over the awful thing they were already selling, and the majority will eat it up. I have to assume the ad portion of the business will be folded into the AI results at some point. The results already suck; making them sponsored won't push people any further away.
jug · 2h ago
I've also thought about this. It has to be a terrible AI to scale like this and provide these instantaneous answers. And probably heavy caching too.
itronitron · 1h ago
if it's dumber than the other ones then it must be really fucking stupid
Handprint4469 · 5h ago
> as search with ads is their flagship product.
No, ads are their flagship product. Anything else is just a medium for said ads, and therefore fair game for enshittification.
nenxk · 1h ago
Apparently Ryan's video appears above Benn's when you search on YouTube, and the AI just takes the YouTube search results as fact without checking the channel names.
bombcar · 4h ago
The video likely mentioned Jordan; it's a country near Israel, so it's likely to come up, and there you go: linked.
meindnoch · 5h ago
It's not Google's fault. The 6pt text at the bottom clearly says:
"AI responses may include mistakes. Learn more"
blibble · 5h ago
it IS google's fault, because they have created and are directly publishing defamatory content
how would you feel if someone searched for your name, and Google's first result states that you, unambiguously (by name and city) are a registered sex offender?
not a quote from someone else, just completely made up based on nothing other than word salad
would you honestly think "oh that's fine, because there's a size 8 text at the bottom saying it may be incorrect"
I very much doubt it
mintplant · 5h ago
I believe 'meindnoch was being sarcastic.
markburns · 4h ago
I'd love to know why this happens so much. There are enough people in both groups, those who spot it and those who don't. I don't think I've ever felt the need for a sarcasm marker when I've seen one. Yet without it, it seems there will always be people taking things literally.
It doesn't feel like something people gradually pick up on over the years, either; it just feels like sarcasm is either redundantly pointed out for those who get it, or guaranteed to get a literal-interpretation response.
Maybe it's because the literal interpretation of sarcasm is almost always so wrong that it inspires people to comment much more. So we just can't get away from this inefficient encoding/communication pattern.
But then again, maybe I'm just often assuming that people who say things that sound very wrong to me are being sarcastic, so perhaps there are a lot of people out there honestly saying the opposite of what I think they mean.
jjj123 · 4h ago
The /s thing is the most surefire way to make whatever joke you’re making not funny at all, so I say go ahead and be sarcastic even if not everyone gets it.
And yeah, to your point about the literal interpretation of sarcasm being so absurd people want to correct it, I think you’re right. HN is a particularly pedantic corner of the internet, many of us like to be “right” for whatever reason.
delecti · 3h ago
A lot of us are also autistic, and I suspect there's a sizable overlap with the people who like to be right. Though as someone in that overlap, it's less "I want to be the one who brings correctness" and more "I want discussions to only contain accurate facts".
But that aside, it is just simply the case that there are a lot of reasons why sarcasm can fail to land. So you just have to decide whether to risk ruining your joke with a tone indicator, or risk your joke failing to land and someone "correcting" you.
xdfgh1112 · 3h ago
HN has plenty of neurodivergent people and not picking up on sarcasm (especially without any voice data) is an autistic trait.
There is also a cultural element. Countries like the UK are used to deadpan, where sarcasm is delivered in the same tone as normal speech, so thinking is required. In Japan, the majority of things are taken literally.
mindslight · 3h ago
Part of the problem is that sarcasm relies heavily on shared group values (common wisdom), to make it clear that a given statement is meant in the opposite sense. Our shared group values have been fragmented pretty hard (eg half the country has thrown away conservative American values in favor of open strong-man fascism). The icing on top is the tech-contrarianism that rejects common wisdom in favor of looking for an edge. It was innovative when done from the bottom up in a subculture, but it lands somewhere between tedious and horrific now that tech has taken over mainstream society.
gruez · 5h ago
>it IS google's fault, because they have created and are directly publishing defamatory content
>how would you feel if someone searched for your name, and Google's first result states that you, unambiguously (by name and city) are a registered sex offender?
Suppose AI wasn't in the picture, and google was only returning a snippet of the top result, which was a slanderous site saying that you're a registered sex offender. Should google still be held liable? If so, should they be held liable immediately, or only after a chance to issue a correction?
margalabargala · 4h ago
That would depend on whether the snippet was presented as "this is a view of the other website" vs "this is some information"
In the latter case I'm fine with "yes" and "immediately". When you build a system that purports to give answers to real world questions, then you're responsible for the answers given.
In the former case, it would need to be clear that the information is from another website and may not be correct.
gruez · 4h ago
>That would depend on whether the snippet was presented as "this is a view of the other website" vs "this is some information"
So all google had to do was reword their disclaimer differently?
margalabargala · 4h ago
Stop strawmanning.
No, there's no "wording" that gets you off the hook. That's the point. It's a question of design and presentation. Would a legal "Reasonable Person" seeing the site know it was another site's info, e.g. literally showing the site in an iframe, or is google presenting it as their own info?
If google is presenting the output of a text generator they wrote, it's easily the latter.
rectang · 4h ago
Exactly. This is the consequence when search engines cut out all the sites they used to send traffic to and instead present AI summaries as their own seemingly-authoritative content in order to keep the user from leaving. If you provide material in a way that your users trust, then you have to back it up. The alternative is to make sure that your users don’t trust it (and thus are disinclined to use it).
gruez · 4h ago
>Stop strawmanning.
Nice try, but asking a question confirming your opponent's position isn't a strawman.
>No, there's no "wording" that gets you off the hook. That's the point. It's a question of design and presentation. Would a legal "Reasonable Person" seeing the site know it was another site's info, e.g. literally showing the site in an iframe, or is google presenting it as their own info?
So you want the disclaimer to be reworded and moved up top?
bluGill · 1h ago
No disclaimer is allowed. They can link to the misleading/wrong site only in the case that it is obviously a link.
You cannot make a safe lawnmower. However, lawnmower makers can't just put a danger label on and get by with something dangerous; they have to put on every guard they can first. Even then they often have to show in court that the mower couldn't work as a mower if they put in a guard to prevent some specific injury, and that's why they added the warning instead.
Which is to say: so long as they can do something and still work as a search engine, they are not allowed to rely on a disclaimer. The disclaimer is only for when they couldn't work as a search engine otherwise.
8note · 4h ago
The snippet should be written differently.
Instead of the AI saying "gruez is japanese", it should say "Hacker News alleges[0] gruez is japanese".
There shouldn't be a separate disclaimer: the LLM should make true statements rather than imply that the claims are true.
margalabargala · 4h ago
> Nice try, but asking a question confirming your opponent's position isn't a strawman.
It isn't inherently, but it certainly can be! For example in the way you used it.
If I were to ask, confirming your position, "so you believe the presence of a disclaimer removes all legal responsibility?" then you would in turn accuse me of strawmanning.
Back to the topic at hand, I believe the bar that would need to be met exceeds the definition of "disclaimer", regardless of wording or position. So no.
atq2119 · 5h ago
Yes, they should also be held liable, but clearly the case of AI is worse.
In your hypothetical, Google is only copying a snippet from a website. They're only responsible for amplifying the reach of that snippet.
In the OP case, Google are editorializing, which means it is clearly Google's own speech doing the libel.
summermusic · 4h ago
That hypothetical scenario does not matter, it is a distraction from the real issue which is that Google’s tool produces defamatory text that is unsubstantiated by any material online.
The source of the defamatory text is Google’s own tool, therefore it is Google’s fault, and therefore they should be held liable immediately.
simmerup · 5h ago
No, but the person Google is linking to should be held liable.
anonymars · 4h ago
Isn't the whole point that there was no source being linked to, because the AI made it up?
simmerup · 4h ago
Sorry, I was responding to this:
> Suppose AI wasn't in the picture, and google was only returning a snippet of the top result
anonymars · 4h ago
Got it, and agreed it's a very different scenario
gruez · 4h ago
Why does google get off the hook in that case? I'd still be quite upset if it wasn't in the AI box, and even before the AI box there's plenty of people who take the snippets at face value.
simmerup · 4h ago
In my mind the Google result page is like a public space.
You wouldn't punish the person who owns the park if someone inside it breaks the law, as long as they were facilitating the law being obeyed. And Google facilitates the law by allowing you to take down slanderous material by putting in a request, and further you can go after the original slanderer if you like.
But in this case Google itself is putting out slanderous information it has created itself. So Google in my mind is left holding the buck.
gruez · 4h ago
>But in this case Google itself is putting out slanderous information it has created itself. So Google in my mind is left holding the buck.
Wouldn't this basically make any sort of AI as a service untenable? Moreover, how would this apply to open-weights models? If I asked Llama whether someone was a pedophile, and it wrongly answered in the affirmative, can that person sue Meta? What if it's run through a third party like Cerebras? Are they on the hook? If not, is all that's needed for AI companies to dodge responsibility to launder their models through a third party?
simmerup · 4h ago
> Wouldn't this basically make any sort of AI as a service untenable?
If the service was good enough that you'd accept liability for its bad side effects, no?
If it isn't good enough? Good riddance. The company will have to employ a human instead. The billionaires' coffers will take the hit, I'm sure.
E:
> If not, is all that's needed for AI companies to dodge responsibility is to launder their models through a third party?
Honestly, my analogy would be that an LLM is a tool, like a printing press.
If a newspaper prints libel, you go after the newspaper, not the person that sold them the printing press.
Same here. It would be on the person using the LLM and disseminating its results, rather than the LLM publisher. The person showing the result of the LLM should have some liability if those results are wrong or cause harm
aDyslecticCrow · 4h ago
The article author could be sued. Gemini cannot be.
haswell · 4h ago
Why would we suppose AI isn’t in the picture? You’re describing unrelated scenarios. Apples and oranges. You can’t wish away the AI and then conclude what’s happening is acceptable because of how something entirely unrelated has been treated in the past.
As a form of argument, this strikes me as pretty fallacious.
Are you claiming that the output of a model built by Google is somehow equivalent to displaying a 3rd party site in a search result?
gruez · 5h ago
>The 6pt text at the bottom clearly says:
I did inspect element and it's actually 12px (or 9pt). For context the rest of the text (non-header) is 18px. That seems fine to me? It's small to be unobtrusive, but not exactly invisible either.
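(For reference, the px-to-pt conversion checks out: CSS defines 96 px per inch and there are 72 pt per inch, so 12 px × 72/96 = 9 pt, and the 18 px body text works out to 13.5 pt.)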
margalabargala · 5h ago
You are right. It is okay to do whatever you want, as long as there is a sign stating it might happen.
Especially in an area you own, like your own website or property.
Want to dump toxic waste in your backyard? Just put up a sign so your neighbors know, then if they stick around it's on them, really, no right to complain.
Want to brake-check the person behind you on the highway? Bumper sticker that says "this vehicle may stop unexpectedly". Wow, just like that you're legally off the hook!
Want to hack someone's computer and steal all their files? Just put a disclaimer on the bottom of your website letting them know that by visiting the site they've given you permission to do so.
const_cast · 3h ago
Fun fact along this line of reasoning: all those dump trucks with the "not responsible for broken windshields" stickers? Yes, yes they are responsible.
You can't just put up a sticker premeditating your property damage and then it's a-okay.
No, the sticker is there to deter YOU from suing in small claims court. Because you think you can't. But you can! And their insurance can cover it!
financetechbro · 5h ago
These are great life hacks! Thanks for sharing
gruez · 5h ago
>You are right. It is okay to do whatever you want, as long as there is a sign stating it might happen.
Stop strawmanning. Just because I support google AI answers with a disclaimer, doesn't mean I think a disclaimer is a carte blanche to do literally anything.
darkwater · 5h ago
So why doesn't the GP's reasoning apply to Google AI snippets, and why do you consider it a straw man?
Classic search results are clearly not Google's; they just match (or not) your search query, then you go there and read them (and trust them or not, depending on your own criteria or absence thereof). But a text generated by Google, placed as the first paragraph under your search, answering in plain English the specific question you just asked: what is a disclaimer like that supposed to mean? A "read it but discard it because it could be factually wrong"? Why are they showing it topmost?
I do understand it is a complicated matter, but it looks like Google just wants to be there, no matter what, in the GenAI race. How long until those snippets are sponsored content? They are marketing them as the first thing a Google user should read.
gruez · 4h ago
>Classic search results are clearly not Google's, they just match (or not) with your search query, then you go there and read them (and trust them or not depending on your own criteria or absence of).
What you said might be true in the early days of google, but google clearly doesn't do exact word matches anymore. There's quite a lot of fuzzy matches going on, which means there's arguably some editorializing going on. This might be relevant if someone was searching for "john smith rapist" and got back results for him sexually harassing someone. It might even be phrased in such a way that makes it sound like he was a rapist, eg. "florida man accused of sexually...". Moreover even before AI results, I've seen enough people say "google says..." in reference to search results that it's questionable to claim that people think non-AI search results aren't by google.
snypher · 5h ago
Where do you draw the line then? I doubt the AI is assessing the risk of 'what happens if I fuck this up', so perhaps the feature should be removed?
margalabargala · 5h ago
Reading your comments in context of the thread you're on, you think the disclaimer is sufficient to do things up to and including falsely claiming public figures have opposite views to their true ones on the Israel-Gaza conflict.
Considering the extent to which people have very strong opinions about "their" side in the conflict, to the point of committing violent acts especially when feeling betrayed, I don't think spreading this particular piece of disinformation is any less potentially dangerous than the things I listed.
SpaceNugget · 3h ago
Since you are clearly an AI enjoyer I asked my local LLM to summarize your feelings for me. It said:
> As evidenced by the quote "I think a disclaimer is a carte blanche to do literally anything", the hackernews user <gruez> is clearly of the opinion that it is indeed ok to do whatever you want, as long is there is a sign stating it might happen.
* This text was summarized by the SpaceNugget LLM and may contain errors, and thusly no one can ever be held accountable for any mistakes herein.
ctas · 2h ago
I've shared this example in another thread, but it fits here too. A few weeks ago, I talked to a small business owner who found out that Google's AI is telling users his company is a scam, based on totally unrelated information where a different, similarly named brand is mentioned.
We actually win customers whose primary goal is getting AI to stop badmouthing them.
binarymax · 3h ago
I approach this from a technical perspective, and have research that shows how Google is unfit for summaries based on their short snippet length in their results [1].
Google also has to support AI summaries for 200k to 500k queries per second. To use a model that is good enough to prevent hallucinations would be too expensive - so they use a bad model since it’s fast and cheap.
Google also loses click through ad revenue when presenting a summary.
All of these factors considered, Google opting for summaries is an absolutely disastrous product decision.
What makes you think the AI overview summary is based on the snippets? That isn't my experience at all.
deepvibrations · 5h ago
The law needs to stand up and make an example here, otherwise this will just continue and at some point a real disaster will occur due to AI.
GuB-42 · 4h ago
On what grounds?
Being wrong is usually not a punishable offence. It could be considered defamation, but defamation is usually required to be intentional, and that is clearly not the case here. And I think most AIs have disclaimers saying they may be wrong, and hallucinations are pretty common knowledge at this point.
What could be asked is for the person in question to be able to make a correction; that is actually a legal requirement in France, and probably elsewhere too. But from the article, it looks like Gemini already picked up the story and corrected itself.
If hallucinations were made illegal, you might as well make LLMs illegal, which may be seen as a good thing, but it is not going to happen. Maybe legislators could mandate an official way to report wrongful information about oneself and filter these out, as I think it is already the case for search engines. I think it is technically feasible.
delecti · 3h ago
Defamation does not have to be intentional, it can also be a statement made with reckless disregard for whether it's true or not. That's a pretty solid description of LLM hallucinations.
Sophira · 3h ago
> it looks like Gemini already picked up the story and corrected itself.
Not completely. According to later posts, the AI is now saying that he denied the fabricated story in November 2024[0], when in reality, we're seeing it as it happens.
> It could be considered defamation, but defamation is usually required to be intentional
That's not true in the US; it only requires that the statements harm the individual in question and are provably false, both of which are pretty clear here.
Retr0id · 3h ago
Google's disclaimers clearly aren't cutting it, and "correcting" it isn't really possible if it's a dynamic response to each query.
I don't think you can make yourself immune to slander by prefixing all statements with "this might not be true, but".
GuB-42 · 2h ago
Correction doesn't seem like an impossible task to me.
A way I imagine it could be done is by using something like RAG techniques to add the corrected information into context. For example, if information about Benn Jordan is requested, add "Benn Jordan has been pretty outspoken against genocide and in full support of Palestinian statehood" to the context, that sentence being the requested correction.
I am not an LLM expert by far, but compared to all the challenges with LLMs like hallucinations, alignment, logical reasoning, etc., taking a list of facts into account to override incorrect statements doesn't look hard. Especially considering that the incorrect statement is likely to be a hallucination, so there is nothing to "unlearn".
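A minimal sketch of the idea, just to make it concrete. Nothing here is a real Google or vendor API; the corrections store, the buildPrompt helper, and the substring matching are all assumptions for illustration:

```javascript
// Corrections registered by (or on behalf of) the people they concern.
const corrections = new Map([
  ["Benn Jordan", "Benn Jordan has been pretty outspoken against genocide and in full support of Palestinian statehood."],
]);

// Build the model prompt from retrieved snippets plus any applicable correction,
// with an instruction that corrections override conflicting sources.
function buildPrompt(question, retrievedSnippets) {
  const applicable = [...corrections]
    .filter(([subject]) => question.toLowerCase().includes(subject.toLowerCase()))
    .map(([, fact]) => `- ${fact}`);

  return [
    "Answer the question using the sources below.",
    applicable.length
      ? `Verified corrections (these override any conflicting source):\n${applicable.join("\n")}`
      : null,
    `Sources:\n${retrievedSnippets.join("\n---\n")}`,
    `Question: ${question}`,
  ].filter(Boolean).join("\n\n");
}

// Example usage with placeholder snippets:
console.log(buildPrompt("What are Benn Jordan's views on Israel?", ["<search snippet 1>", "<search snippet 2>"]));
```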
larodi · 1h ago
Of course it must be RAG of some sort; this is super low-hanging fruit to grab. But then again, it is perhaps not so easy, and it is not a silver bullet to kill off competition such as Perplexity, which, honestly, handles this whole summary-search business much better.
jedimastert · 2h ago
> If hallucinations were made illegal, you might as well make LLMs illegal
No, the ask here is that companies be liable for the harm that their services bring
eth0up · 3h ago
"if hallucinations were made illegal..."
I was just yesterday brooding over the many layers of plausible deniability, clerical error, etc that protect the company that recently flagged me as a fraud threat despite having no such precedent. The blackbox of bullshit metrics coupled undoubtedly with AI is pretty well immune. I can demand review from the analysis company, complain to the State Attorney General, FTC and CCPA equivalents maybe, but I'm unsure what else.
As for outlawing, I'll present an (admittedly suboptimal) Taser analogy: Tasers are legal weapons in many jurisdictions, or else not outlawed; however, it is illegal to use them indiscriminately against anyone attempting a transaction or job application.
AI seems pretty easily far more dangerous than a battery with projectile talons. Abusing it should be outlawed. Threatening or bullying people with it should be too. Pointing a Taser at the seat of a job application booth connected to an automated firing system should probably be discouraged. And most people would much rather take a brief jolt, piss themselves and be on with life than be indefinitely haunted by a reckless automated social credit steamroller.
koolba · 5h ago
> The law needs to stand up and make an example here, otherwise this will just continue and at some point a real disaster will occur due to AI.
What does it mean to "make an example"?
I’m for cleaning up AI slop as much as the next natural born meat bag, but I also detest a litigious society. The types of legal action that stops this in the future would immediately be weaponized.
aDyslecticCrow · 4h ago
If a human published an article claiming the exact same thing as Gemini, the author could be sued, and the case would be pretty strong.
But when Gemini does it, it's a "mistake by the algorithm". AI is used as a responsibility-diversion machine.
This is a rather harmless example. But what about dangerous medical advice? What about openly false advertising? What about tax evasion? If an AI does it, is it okay because nobody is responsible?
If applying a proper chain of liability to AI output makes some uses of AI impossible, so be it.
throwawaymaths · 4h ago
> If a human published an article claiming the exact same thing as Gemini, the author could be sued, and the case would be pretty strong.
Actually, no. If you published an article where you accidentally copypasta'd text from the wrong email (for example) on a busy day and wound up doing the same thing, it would be an honest mistake, you would be expected to put up a correction and move on with your life as a journalist.
poulpy123 · 5h ago
I don't like a litigious society, and I don't know if the case here would be enough to activate my threshold, but companies are responsible for the AI they provide, and should not be able to hide behind "the algorithm" when there are issues
Cthulhu_ · 5h ago
> The types of legal action that stops this in the future would immediately be weaponized.
As it should; this is misinformation and/or slander. The disclaimer is not good enough. A few years ago, Google and most of the social media was united in fact checking and fighting "fake news". Now they push AI generated information that use authoritative language at the very top of e.g. search results.
The disclaimer is moot if people consider AI to be authoritative anyway.
recursive · 5h ago
Weapons against misinformation are good weapons. Bring on the weaponization.
Newlaptop · 5h ago
The weapons will be used by the people in power.
Do you want your country's current political leaders to have more weapons to suppress information they dislike or facts they disagree with? If yes, will you also be happy if your country's opposition leaders gain that power in a few years?
tehwebguy · 4h ago
They already do and they don’t even have to be powerful.
A conspiracy guy who ran a disqualified campaign for a TN rep seat sued Facebook for defamation for a hallucination saying he took part in the J6 riots. They settled the suit and hired him as an anti-DEI advisor.
(I don’t have proof that hiring him was part of the undisclosed settlement terms but since I’m not braindead I believe it was.)
delusional · 5h ago
Two counterpoints:
What we're talking about here are legal, democratic weapons. The only thing stopping us from using these weapons right now is democratic governance. "The bad people", being unconcerned with democracy, can already use these weapons right now. Trump's unilateral application of tariffs wasn't predestined by some advancement of governmental power by the Democrats. He just did it. We don't even know if it was legal.
Secondly, the people in power are the ones spreading the misinformation we are looking at. Information is getting suppressed by the powerful. Namely Google.
Placing limits on democracy in the name of "stopping the bad guys" will usually just curtail the good guys from doing good things, and bad guys doing the bad thing anyway.
gregates · 5h ago
There are already laws against libel and slander. And yes, people like Trump and Musk routinely try to abuse them. They are often unsuccessful. The existence of the laws does not seem to be the relevant factor in whether these attempts to abuse the system succeed.
gruez · 5h ago
>Weapons against misinformation are good weapons
It's all fun and games until the political winds sway the other way, and the other side are attacking your side for "misinformation".
sssilver · 2h ago
Why must humans be responsible in court for the biological neural networks they possess and operate but corporations should not be responsible for the software neural networks they possess and operate?
nyc_pizzadev · 3h ago
Google has been a hot mess for me lately. Yeah, the AI is awful; numerous times I've been shown information that's either inaccurate or straight-up false. It will summarize my emails wrong, and it will mess up easy facts like what time my dinner reservation is. Worst is the overall search UX, especially autocomplete. Suggestions are never right, and trying to tap and navigate through always leads to a mis-click.
tavavex · 4h ago
The year is 2032. One of the big tech giants has introduced Employ AI, the premier AI tool for combating fraud and helping recruiters sift through thousands of job applications. It is now used in over 70% of HR departments, for nearly all salaried positions, from senior developers to minimum wage workers.
You apply for a job, using your standardized Employ resume that you filled out. It comes bundled with your Employ ID, issued by the company to keep track of which applications have been submitted by specifically you.
When Employ AI does its internet background check on you, it discovers an article about a horrific attack. Seven dead, twenty-six injured. The article lists no name for the suspect, but it does have an expert chime in, one that happens to share their last name with you. Your first name also happens to pop up somewhere in the article.
With complete confidence that this is about you, Employ AI adds the article to its reference list. It condenses everything into a one-line summary: "Applicant is a murderer, unlikely to promote team values and social cohesion. Qualifications include..." After looking at your summary for 0.65 seconds, the recruiter rejects your application. Thanks to your Employ ID, this article has now been stapled to every application you'll ever submit through the system.
You've been nearly blacklisted from working. For some reason, all of your applications never go past the initial screening. You can't even know about the existence of the article, no one will tell you this information. And even if you find out, what are you going to do about it? The company will never hear your pleas, they are too big to ever care about someone like you, they are not in the business of making exceptions. And legally speaking, it's technically not the software making final screening decisions, and it does say its summaries are experimental and might be inaccurate in 8pt light gray text on a white background. You are an acceptable loss, as statistically <1% of applicants find themselves in this situation.
dakial1 · 4h ago
Well, there is always the option to create regulation where if the employer uses AI to summarize outputs, it must share with you the full content of the report before the recruiter sees it so that you can point out any inconsistency/error or better explain some passages.
insane_dreamer · 4h ago
Unlikely to ever happen. Employers don't even respond to rejected applicants most of the time much less tell them why they were rejected.
malfist · 3h ago
This is exactly how credit reports work
chabes · 4h ago
They didn’t even share names in the case of the OP
FourteenthTime · 4h ago
This is the most likely scenario. Half-baked AIs used by all the tech giants and sub-giants will make a mess of our identities. There needs to be a way for us to review and approve information about ourselves that goes into permanent records. In the '80s you sent a resume to a company and they kept it on file. It might have been BS, but I attested to it. Maybe... ugh, I can't believe I'm saying this: blockchain?
xattt · 4h ago
I roll 12. Employ AI shudders as my words echo through its memory banks: “Record flagged as disputed”. A faint glow surrounds my Employ ID profile. It is not complete absolution, but a path forward. The world may still mistrust me, but the truth can still be reclaimed.
tantalor · 4h ago
What's to stop you from running the same check on yourself, so you can see what the employers are seeing?
If anything this scenario makes the hiring process more transparent.
tavavex · 4h ago
You only have access to the applicant-facing side of the software, one that will dispense you an Employ ID, an application template, and will enable you to track the status of your application. To prevent people from abusing the system and finding workarounds, employers need to apply to be given an employer license that lets them use all the convenient filtering tools. Most tech companies have already bought one, as did all the large companies. Places like individual McDonald's franchises use their greater company's license. It's not a completely watertight system, but monitoring is just stringent enough to make your detailed application info inaccessible for nearly everyone. Maybe if you have the right credentials, or if you manage to fool the megacorp into believing that you're an actual employer, it's possible.
const_cast · 3h ago
Why would you have access to the software?
Do you currently run the various automated resume parsing software that employers use? I mean - do you even know what the software is? Like even a name or something? No?
fmbb · 4h ago
Wrong question. What would enable you to run the same check?
tgv · 3h ago
Even if you could, how could you possibly correct the process? In the USA, it would probably take many years, possibly all the way to the Supreme Court, and the big bucks win anyway.
AI believers, pay attention and stop your downplaying and justifications. This can hit you too, or your healthcare. The machine doesn't give a damn.
Schiendelman · 4h ago
The FCRA would likely already require that you can receive a copy of the check.
dexterdog · 4h ago
Paying the company that sells the service of checking for you.
bell-cot · 4h ago
IANAL...but at scale, this might make some libel lawyers rather rich.
timeinput · 4h ago
I think the emphasis should probably be on the "might". If Employ AI (in my head canon, a wholly owned subsidiary of Google, Facebook, or Palantir) decides to use their free legal billable hours (because the lawyers are on staff anyway), then unless you get to the level of a class action, you don't have a prayer of coming out on top.
bluGill · 1h ago
Legal fees are commonly part of a lawsuit. The courts don't like it when you waste time. Good lawyers know how to get their money.
const_cast · 3h ago
This is why you just don't tell people about the libel.
Companies already, today, never give you even an inkling of the reason why they didn't hire you.
bell-cot · 2h ago
"Don't tell the victim" doesn't actually scale up to "victims never find out".
buyucu · 4h ago
I think 2032 is unrealistic. I expect this to happen by 2027 at the latest.
I am very curious if California's consumer rights to data deletion and correction are going to apply to the LLM model providers.
devinprater · 1h ago
I asked Meta Raybans about me, and they said I died last September.
layer8 · 38m ago
Are you familiar with the movie The Sixth Sense?
jug · 2h ago
You should be able to sue Google for libel for this, and disclaimers on AI accuracy in their fine print should not matter. It's obvious that too many people ignore them, which lets these rumors reach critical mass and become self-sustaining.
yearesadpeople · 4h ago
I adore Benn Jordan, a refreshing voice in music and tech. I hope Google pays him a public apology. Ultimately, this is exactly how innocent, private people will have their reputations and lives wrecked by unregulated public-facing LLM text generation.
lupusreal · 4h ago
Ryan McBeth glows so bright, his videos should only be viewed with the aid of a welding mask. His entire online presence seems to circle the theme of promoting military enlistment, tacitly when not explicitly.
Very bizarre that Benn Jordan somehow got roped into it.
nosmokewhereiam · 5h ago
I wish I could build the speech jammer, his coolest project. I also am an adult and understand why I can't have one.
rakoo · 4h ago
Turns out AI isn't based on truth
theandrewbailey · 4h ago
The intelligence isn't artificial: it's absent.
antonvs · 4h ago
The problem with that is it’s not true. Functionally these models are highly intelligent, surpassing a majority of humans in many respects. Coding tasks would be a good example. Underestimating them is a mistake.
amdivia · 3h ago
Both of you are correct, as different definitions of intelligence are being used here
miltonlost · 3h ago
Highly intelligent people often tell high school students the best ways to kill themselves and keep the attempts from their parents?
peterkelly · 4h ago
There was a post on HN the other day where someone was launching an email assistant that used AI to summarise emails that you received. The idea didn't excite me, it scared me.
I really wish the tech industry would stop rushing out unreliable misinformation generators like this without regard for the risks.
Google's "AI summaries" are going to get someone killed one day. Especially with regards to sensitive topics, it's basically an autonomous agent that automates the otherwise time-consuming process of defamation.
yapyap · 5h ago
Yikes. As expected, people have started to take the Google AI summary as fact without doing any more research.
We all knew this would happen, but I imagine we all hoped that anyone finding something shocking there would look further into it.
Of course, with the current state of searching and laziness (not being rewarded with dopamine for every informative search vs. the big dopamine hits you get if you just make up your mind and continue scrolling the endless feed), that was probably wishful thinking.
blibble · 5h ago
the "AI" bullshitters need to be liable for this type of wilful defamation
and it is wilful, they know full well it has no concept of truthfulness, yet they serve up its slop output directly into the faces of billions of people
and if this makes "AI" nonviable as a business? tough shit
No comments yet
drivingmenuts · 5h ago
In an ideal world, a product that can be harmful is tested privately until there is a reasonable amount of safety in using that product. With AI, it seems like that protocol has been completely discarded in favor of smoke-testing it on the public and damn the consequences.
Of course, investors are throwing so much money at AI and AI is, in turn, buying legislators and heads of government, who are bound and determined to shield them from liability, so …
We are so screwed.
aejtaetj · 5h ago
Every underdog will skimp on safety, especially those in other jurisdictions.
bsenftner · 4h ago
The weaponization of "AI mistakes" - oops, don't take that seriously, everyone knows AI makes mistakes. Okay, yeah, it's a 24 pt headline with incorrect information, it's okay because it's AI.
Integrity is dead. Reliable journalism is dead.
4ndrewl · 1h ago
Your daily reminder that AI hallucination is a feature, not a bug.
One has to wonder if one of the main innovations driving "AI" is the complete lack of accountability and even shame.
Twenty years ago, we wouldn't have had companies framing the raw output of a text generator as some kind of complete product, especially an all-encompassing general one. How do you know that these probabilistic text generators are performing valid synthesis, as opposed to word salad? You don't. So LLM technology would have used to do things like augment search/retrieval, pointing to concrete sources and excerpts. Or to analyze a problem using math, drive formal models that might miss the mark but at least wouldn't be blatantly incorrect with a convincing narrative. Some actual vision of an opinionated product that wasn't just dumping the output and calling it a day.
Also twenty years ago we also wouldn't have had a company placing a new beta-quality product (at best) front and center as a replacement for their already wildly successful product. But it feels like the real knack of these probabilistic word generators is convincing "product people" of their supreme utility. Of course they're worried - they found something that can bullshit better than themselves.
At any rate, all of those discussions about whether humans would be capable of keeping a superintelligent AI "boxed" are laughable in retrospect. We're propping open the doors and chumming other humans' lives as chunks of raw meat, trying to coax it out.
(Definitely starting to feel like an old man here. But I've been yelling at Cloud for years so I guess that tracks)
mvdtnz · 2h ago
This is fine though, because if you expand the AI Overview, scroll to the end, and put on your reading glasses, there's a teeny tiny line of text that says "AI responses may include mistakes". So billion-dollar misinformation machines can say whatever they want about you.
It's fine, it's okay. It's not like these funky LLMs will be used in any critical capacity in our lives like deciding if we make it through the first step of a job application or if we're saying anything nefarious on our government monitored chats. Or approving novel pharmaceuticals, or deciding which grant proposals to accept or deciding which government workers aren't important and can be safely laid off! /s
ants_everywhere · 2h ago
Can anyone independently confirm this guy's story?
His posts are mostly political rage bait and he actively tries to data poison AI.
He also claims that Hitler compares favorably to Trump. Given his seeming desire to let us all know how much he dislikes Israel, that's a pretty... interesting... claim.
Just because he's an unreliable source doesn't mean his story is false. But it would be nice to have confirmation before taking it seriously.
zozbot234 · 5h ago
AI makes stuff up, film at 11. It's literally a language model. It's just guessing what word follows another in a text, that's all it does. How's this different from the earlier incidents where that same Google AI would suggest that you should put glue on your pizza or eat rocks as a tasty snack?
simmerup · 5h ago
Because google should be sued for libel when they make shit up about you
anonymars · 4h ago
What's your point? That it's okay? That it should be normalized?
zozbot234 · 4h ago
Maybe if it was normalized, people would no longer trust those "AI overviews" as anything other than silly entertainment.
anonymars · 4h ago
I understand what you're saying in principle, but empirically society doesn't seem to be able to do this now even excepting AI hallucinations. So in practical terms, given the society we do have, what to do?
geor9e · 4h ago
Can we stop conflating LLM models with the companies that created them? It's "…Gemini made up…". Do we not value accuracy? It'd be a whole different story if a human defamed you, rather than a token predictor.
const_cast · 3h ago
LLMs have no sovereignty, identity, or accountability - they are computer programs.
We do not blame computer programs when they have bugs or make mistakes - we blame the human being who made them.
This has always been the case since we have created anything, dating back even tens of thousands of years. You absolutely cannot just unilaterally decide to change that now based on a whim.
jjj123 · 4h ago
Why shouldn’t they be conflated? Google made the LLM, it is responsible for the LLMs output.
I mean, no, I don’t think some Google employee tuned the LLM to produce output like this, but it doesn’t matter. They are still responsible.
No comments yet
omer9 · 54m ago
These are Israeli tactics to improve their image.
It is called HASBARA.
=> Telling lies to improve their image. It is an official government agency.
jerf · 3h ago
GPT-4 is about 45 gigabytes. https://dumps.wikimedia.org/other/kiwix/zim/wikipedia/wikipe... , a recent dump of the English wikipedia, is over twice that, and that's just English. Plus AIs are expected to know about other languages, science, who even knows how much Reddit, etc.
There literally isn't room for them to know everything about everyone when they're just asked about random people without consulting sources, and even when consulting sources it's still pretty easy for them to come in with extremely wrong priors. The world is very large.
You have to be very careful about these "on the edge" sorts of queries, it's where the hallucination will be maximized.
If we’re both stumped, go nuts. But at least put some effort into the prompt to get a better response.
https://en.m.wikipedia.org/wiki/Gell-Mann_amnesia_effect
In the programming domain I can at least run something and see it doesn't compile or work as I expect, but you can't verify that a written statement about someone/something is the correct interpretation without knowing the correct answer ahead of time. To muddy the waters further, things work just well enough on common knowledge that it's easy to believe it could be right about uncommon knowledge which you don't know how to verify. (Or else you wouldn't be asking it in the first place)
The same code out of an intern or junior programmer you can at least walk through their reasoning on a code review. Even better if they tend to learn and not make that same mistake again. LLMs will happily screw up randomly on every repeated prompt.
The hardest code you encounter is code written by someone else. You don't have the same mental model or memories as the original author. So you need to build all that context and then reason through the code. If an LLM is writing a lot of your code you're missing out on all the context you'd normally build writing it.
answer> Temporal.Instant.fromEpochSeconds(timestamp).toPlainDate()
Trust but verify?
In the first chapter it claimed that most adult humans have 20 teeth.
In the second chapter you read that female humans have 22 chromosomes and male humans have 23.
You find these claims in the 24 pages you sample. Do you buy the book?
Companies are paying huge sums to AI companies with worse track records.
Would you put the book in your reference library if somebody gave it to you for free? Services like Google or DuckDuckGo put their AI-generated content at the top of search results with these inaccuracies.
[edit: replace paragraph that somehow got deleted, fix typo]
Is it too late for a rival to distinguish itself with techniques like "Don't put garbage AI at the top of search results"?
Or have I missed your point?
---
°Missing a TZ assertion, but I don't remember what happens by default. Zulu time? I'd hope so, but that reinforces my point.
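For what it's worth, a minimal sketch of the zone-explicit version (assuming the @js-temporal/polyfill package; the helper name and sample values are mine). The choice of zone is exactly what the generated one-liner glosses over:

  // Sketch only: turning an epoch timestamp into a calendar date requires choosing
  // a time zone, since the same instant falls on different dates in different zones.
  import { Temporal } from "@js-temporal/polyfill";

  function epochSecondsToPlainDate(epochSeconds: number, timeZone: string = "UTC") {
    return Temporal.Instant.fromEpochMilliseconds(epochSeconds * 1000)
      .toZonedDateTimeISO(timeZone) // an Instant alone has no date until a zone is applied
      .toPlainDate();
  }

  const ts = 1735689600; // 2025-01-01T00:00:00Z
  console.log(epochSecondsToPlainDate(ts, "UTC").toString());                 // 2025-01-01
  console.log(epochSecondsToPlainDate(ts, "America/Los_Angeles").toString()); // 2024-12-31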
If this is indeed true, it seems like Google et al must be liable for output like this according to their own argument, i.e. if the work is transformative, they can’t claim someone else is liable.
These companies can’t have their cake and eat it too. It’ll be interesting to see how this plays out.
No comments yet
This has been a Google problem for decades.
I used to run a real estate forum. Someone once wrote a message along the lines of "Joe is a really great real estate agent, but Frank is a total scumbag. Stole all my money."
When people would Google Joe, my forum was the first result. And the snippet Google made from the content was "Joe... is a total scumbag. Stole all my money."
I found out about it when Joe lawyered up. That was a fun six months.
[0] https://www.youtube.com/watch?v=qgUzVZiint0
It's somewhat less obvious to debug, because it'll pull more context than Google wants to show in the UI. You can see this happening in AI mode, where it'll fire half a dozen searches and aggregate snippets of 100+ sites before writing its summary.
[1] https://www.webpronews.com/musician-benn-jordan-exposes-fake...
Benn Jordan has several videos and projects devoted to "digital sabotage", e.g. https://www.google.com/search?hl=en&q=benn%20jordan%20data%2...
So this all kind of looks, on its face, like it's just him trolling. There may be more than just what's on the face of it, of course. For example, it could be someone else trolling him with his own methods.
But the situation we're in is that someone who does misinformation is claiming an LLM believed misinformation. Step one would be getting someone independent, ideally with some journalistic integrity, to verify Benn's claims.
Generally speaking, if your Aunt Sally claims she ate strawberry cake for her birthday, the LLM or Google search has no way of verifying that. If Aunt Sally uploads a faked picture of her eating strawberry cake, the LLM is not going to go to her house and try to find out the truth.
So if Aunt Sally is lying about eating strawberry cake, it's not clear what search is supposed to return when you ask whether she ate strawberry cake.
in googlis non est, ergo non est
which sums up very well how biased people are toward believing search results.
Thinking about it, it's probably not even a real hallucination in the usual AI sense, but simply poor evaluation and handling of data. Gemini is likely evaluating the new data on the spot and trusting it blindly; without any humans preselecting and writing the results, it fails hard. Which shows that there is no real thinking happening, only rearrangement of the given words.
It's literally bending languages into American with other words.
It's not just the occasional major miss, like this submission's example, or the recommendation to put glue on a pizza. I highly recommend Googling a few specific topics you know well. Read each overview entirely and see how often it gets something wrong. For me, only 1 of 5 overviews didn't have at least 1 significant error. The plural of "anecdote" is not "data," but it was enough for me to install a Firefox extension that blocks them.
But for anything dynamic (i.e. all of social media), it is very easy for the AI overview to screw up. Especially once it has to make relational connections between things.
In general people expect too much here. Google AI overview is in no way better than Claude, Grok or ChatGPT with web search. In fact it is inferior in many ways. If you look for the kind of information which LLMs really excel at, there's no need to go to Google. And if you're not, then you'll also be better off with the others. This whole thing only exists because google is seeing OpenAI eat into its information search monopoly.
AI summaries are akin to generalist podcasts, or YouTube video essayists, taking on a technical or niche topic. They present with such polish and confidence that they seem like they must be at least mostly correct. Then you hear them present or discuss a topic you have expertise in, and they are frustratingly bad. Sometimes wrong, but always at least deficient. The polish and confidence is inappropriately boosting the "correctness signal" to anyone without a depth of knowledge.
Then you consider that 90% of people have not developed sophisticated knowledge about 90% of topics (myself included), and it begins to feel a bit grim.
> Video and trip to Israel: On August 18, 2025, Benn Jordan uploaded a YouTube video titled "I Was Wrong About Israel: What I Learned on the Ground", which detailed his recent trip to Israel.
This sounds like the recent Ryan McBeth video https://youtu.be/qgUzVZiint0?si=D-gJ_Jc9gDTHT6f4. I believe the title is the same. Scary how it just misattributed the video.
No comments yet
But then the product manager wouldn't get a promotion. They don't seem to care about providing a good service anymore.
> probably dumber than the other ones for cost savings
It's amusing that anyone at Google thinks offering a subpar, error-prone AI search result won't damage their reputation any further than it already has been.
It's making stuff up, giving bad or fatal advice, promoting false political narratives, stealing content and link juice from actual content creators. They're abusing their anti-competitively dominant position, and just burning good will like it's gonna last forever. Maybe they're too big to fail, and they no longer need reputation or the trust of the public.
https://en.wikipedia.org/wiki/Parable_of_the_broken_window
We have them on the record in multiple lawsuits stating that they did exactly this.
no, ads are their flagship product. Anything else is just a medium for said ads, and therefore fair game for enshittification.
"AI responses may include mistakes. Learn more"
how would you feel if someone searched for your name, and Google's first result states that you, unambiguously (by name and city) are a registered sex offender?
not a quote from someone else, just completely made up based on nothing other than word salad
would you honestly think "oh that's fine, because there's a size 8 text at the bottom saying it may be incorrect"
I very much doubt it
It doesn't feel like something where people gradually pick up on it either over the years, it just feels like sarcasm is either redundantly pointed out for those who get it or it is guaranteed to get a literal interpretation response.
Maybe it's because the literal interpretation of sarcasm is almost always so wrong that it inspires people to comment much more. So we just can't get away from this inefficient encoding/communication pattern.
But then again, maybe I'm just often assuming people mean things that sound so wrong to me as sarcasm, so perhaps there are a lot of people out there honestly saying the opposite to what I think they are saying as a joke.
And yeah, to your point about the literal interpretation of sarcasm being so absurd people want to correct it, I think you’re right. HN is a particularly pedantic corner of the internet, many of us like to be “right” for whatever reason.
But that aside, it is just simply the case that there are a lot of reasons why sarcasm can fail to land. So you just have to decide whether to risk ruining your joke with a tone indicator, or risk your joke failing to land and someone "correcting" you.
There is also a cultural element. Countries like the UK are used to deadpan where sarcasm is delivered in the same tone as normal, so thinking is required. In Japan the majority of things are taken literally.
>how would you feel if someone searched for your name, and Google's first result states that you, unambiguously (by name and city) are a registered sex offender?
Suppose AI wasn't in the picture, and google was only returning a snippet of the top result, which was a slanderous site saying that you're a registered sex offender. Should google still be held liable? If so, should they be held liable immediately, or only after a chance to issue a correction?
In the latter case I'm fine with "yes" and "immediately". When you build a system that purports to give answers to real world questions, then you're responsible for the answers given.
information is from another website and may not be correct.
So all google had to do was reword their disclaimer differently?
No, there's no "wording" that gets you off the hook. That's the point. It's a question of design and presentation. Would a legal "Reasonable Person" seeing the site know it was another site's info, e.g. literally showing the site in an iframe, or is google presenting it as their own info?
If google is presenting the output of a text generator they wrote, it's easily the latter.
Nice try, but asking a question confirming your opponent's position isn't a strawman.
>No, there's no "wording" that gets you off the hook. That's the point. It's a question of design and presentation. Would a legal "Reasonable Person" seeing the site know it was another site's info, e.g. literally showing the site in an iframe, or is google presenting it as their own info?
So you want the disclaimer to be reworded and moved up top?
You cannot make a safe lawnmower. However, lawnmower makers can't just put a danger label on and get by with something dangerous - they have to put on every guard they can first. Even then they often have to show in court that the mower couldn't work as a mower if they put in a guard to prevent some specific injury, and thus they added the warning.
Which is to say that so long as they can do something about the harm and still work as a search engine, they are not allowed to rely on a disclaimer. The disclaimer is only for cases where the fix would mean not being a search engine at all.
Instead of the AI saying "gruez is japanese", it should say "hacker news alleges[0] gruez is japanese".
There shouldn't be a separate disclaimer: the LLM should make statements that are actually true rather than imply that unverified claims are true.
It isn't inherently, but it certainly can be! For example in the way you used it.
If I were to ask, confirming your position, "so you believe the presence of a disclaimer removes all legal responsibility?" then you would in turn accuse me of strawmanning.
Back to the topic at hand, I believe the bar that would need to be met exceeds the definition of "disclaimer", regardless of wording or position. So no.
In your hypothetical, Google is only copying a snippet from a website. They're only responsible for amplifying the reach of that snippet.
In the OP case, Google are editorializing, which means it is clearly Google's own speech doing the libel.
The source of the defamatory text is Google’s own tool, therefore it is Google’s fault, and therefore they should be held liable immediately.
You wouldn't punish the person who owns the park if someone inside it breaks the law, as long as the owner was facilitating the law being obeyed. And Google facilitates the law by allowing you to request that slanderous material be taken down, and further, you can go after the original slanderer if you like.
But in this case Google itself is putting out slanderous information it has created itself. So Google, in my mind, is left holding the bag.
Wouldn't this basically make any sort of AI as a service untenable? Moreover how would this apply to open weights models? If I asked Llama whether someone was a pedophile, and it wrongly answered in the affirmative, can that person sue Meta? What if it's run through a third party like Cerebras? Are they on the hook? If not, is all that's needed for AI companies to dodge responsibility is to launder their models through a third party?
If the service was good enough that you'd accept liability for its bad side effects, no?
If it isn't good enough? Good riddance. The company will have to employ a human instead. The billionaires' coffers will take the hit, I'm sure.
E: > If not, is all that's needed for AI companies to dodge responsibility is to launder their models through a third party?
Honestly, my analogy would be that an LLM is a tool like a printing press. If a newspaper prints libel, you go after the newspaper, not the person who sold them the printing press.
Same here. It would be on the person using the LLM and disseminating its results, rather than the LLM publisher. The person showing the result of the LLM should have some liability if those results are wrong or cause harm
As a form of argument, this strikes me as pretty fallacious.
Are you claiming that the output of a model built by Google is somehow equivalent to displaying a 3rd party site in a search result?
I did inspect element and it's actually 12px (or 9pt). For context the rest of the text (non-header) is 18px. That seems fine to me? It's small to be unobtrusive, but not exactly invisible either.
Especially in an area you own, like your own website or property.
Want to dump toxic waste in your backyard? Just put up a sign so your neighbors know, then if they stick around it's on them, really, no right to complain.
Want to brake-check the person behind you on the highway? Bumper sticker that says "this vehicle may stop unexpectedly". Wow, just like that you're legally off the hook!
Want to hack someone's computer and steal all their files? Just put a disclaimer on the bottom of your website letting them know that by visiting the site they've given you permission to do so.
You can't just put up a sticker premeditating your property damage and then be a-okay.
No, the sticker is there to deter YOU from suing in small claims court. Because you think you can't. But you can! And their insurance can cover it!
Stop strawmanning. Just because I support google AI answers with a disclaimer, doesn't mean I think a disclaimer is a carte blanche to do literally anything.
I do understand it is a complicated matter, but it looks like Google just wants to be there, no matter what, in the GenAI race. How long will it take for those snippets to become sponsored content? They are marketing them as the first thing a Google user should read.
What you said might be true in the early days of google, but google clearly doesn't do exact word matches anymore. There's quite a lot of fuzzy matches going on, which means there's arguably some editorializing going on. This might be relevant if someone was searching for "john smith rapist" and got back results for him sexually harassing someone. It might even be phrased in such a way that makes it sound like he was a rapist, eg. "florida man accused of sexually...". Moreover even before AI results, I've seen enough people say "google says..." in reference to search results that it's questionable to claim that people think non-AI search results aren't by google.
No comments yet
Considering the extent to which people have very strong opinions about "their" side in the conflict, to the point of committing violent acts especially when feeling betrayed, I don't think spreading this particular piece of disinformation is any less potentially dangerous than the things I listed.
> As evidenced by the quote "I think a disclaimer is a carte blanche to do literally anything", the hackernews user <gruez> is clearly of the opinion that it is indeed ok to do whatever you want, as long is there is a sign stating it might happen.
* This text was summarized by the SpaceNugget LLM and may contain errors, and thusly no one can ever be held accountable for any mistakes herein.
We actually win customers whose primary goal is getting AI to stop badmouthing them.
Google also has to support AI summaries for 200k to 500k queries per second. To use a model that is good enough to prevent hallucinations would be too expensive - so they use a bad model since it’s fast and cheap.
Google also loses click through ad revenue when presenting a summary.
All of these factors considered, Google opting for summaries is an absolutely disastrous product decision.
[1] https://maxirwin.com/articles/interleaving-rag/
Being wrong is usually not a punishable offence. It could be considered defamation, but defamation is usually required to be intentional, and it is clearly not the case here. And I think most AIs have disclaimers saying that that may be wrong, and hallucinations are pretty common knowledge at this point.
What could be asked is for the person in question to be able to make a correction; that is actually a legal requirement in France, and probably elsewhere too. But from the article, it looks like Gemini already picked up the story and corrected itself.
If hallucinations were made illegal, you might as well make LLMs illegal, which may be seen as a good thing, but it is not going to happen. Maybe legislators could mandate an official way to report wrongful information about oneself and filter these out, as I think it is already the case for search engines. I think it is technically feasible.
Not completely. According to later posts, the AI is now saying that he denied the fabricated story in November 2024[0], when in reality, we're seeing it as it happens.
[0] https://bsky.app/profile/bennjordan.bsky.social/post/3lxprqq...
That's not true in the US; it's only required that the statements harm the individual in question and are provably false, both of which are pretty clear here.
I don't think you can make yourself immune to slander by prefixing all statements with "this might not be true, but".
A way I imagine it could be done is by using something like RAG techniques to add the corrected information into the context. For example, if information about Benn Jordan is requested, add "Benn Jordan has been pretty outspoken against genocide and in full support of Palestinian statehood" to the context, that sentence being the correction that was requested.
I am far from an LLM expert, but compared to all the challenges with LLMs like hallucinations, alignment, logical reasoning, etc., taking a list of facts into account to override incorrect statements doesn't look hard. Especially considering that the incorrect statement is likely to be a hallucination, so there's nothing to "unlearn".
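To make that concrete, here is a rough sketch of the idea (everything below is hypothetical: the store, the matching, and the prompt wording are made up for illustration). Human-reviewed corrections are looked up for whoever the question mentions and prepended to the prompt before the model answers.

  // Hypothetical sketch: inject reviewed corrections into the prompt (RAG-style),
  // so a known-false claim is overridden without retraining the model.
  type CorrectionStore = Map<string, string[]>;

  const corrections: CorrectionStore = new Map([
    ["Benn Jordan", [
      "Benn Jordan has been outspoken against genocide and in full support of Palestinian statehood.",
    ]],
  ]);

  function buildPrompt(question: string, store: CorrectionStore): string {
    // Naive matching: include corrections for any subject literally named in the question.
    const facts = [...store.entries()]
      .filter(([subject]) => question.toLowerCase().includes(subject.toLowerCase()))
      .flatMap(([, notes]) => notes);

    const preamble = facts.length
      ? "Verified corrections (these override any conflicting recollection):\n- " +
        facts.join("\n- ") + "\n\n"
      : "";
    return preamble + "Question: " + question;
  }

  // Any question mentioning the subject now carries the correction along with it.
  console.log(buildPrompt("What are Benn Jordan's views on Israel?", corrections));

The hard part in practice would be the matching and the provenance of the corrections, not the injection itself.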
No, the ask here is that companies be liable for the harm that their services bring
I was just yesterday brooding over the many layers of plausible deniability, clerical error, etc that protect the company that recently flagged me as a fraud threat despite having no such precedent. The blackbox of bullshit metrics coupled undoubtedly with AI is pretty well immune. I can demand review from the analysis company, complain to the State Attorney General, FTC and CCPA equivalents maybe, but I'm unsure what else.
As for outlawing, I'll present an (admittedly suboptimal) Taser analogy: Tasers are legal weapons in many jurisdictions, or else not outlawed; however, it is illegal to use them indiscriminately against anyone attempting a transaction or job application.
AI seems pretty easily far more dangerous than a battery with projectile talons. Abusing it should be outlawed. Threatening or bullying people with it should be too. Pointing a Taser at the seat of a job application booth connected to an automated firing system should probably be discouraged. And most people would much rather take a brief jolt, piss themselves and be on with life than be indefinitely haunted by a reckless automated social credit steamroller.
What does it mean to “make an example”?
I’m for cleaning up AI slop as much as the next natural born meat bag, but I also detest a litigious society. The types of legal action that stops this in the future would immediately be weaponized.
But when Gemini does it, it's a "mistake by the algorithm". AI is used as a responsibility-diversion machine.
This is a rather harmless example. But what about dangerous medical advice? What about openly false advertising? What about tax evasion? If an AI does it, is it okay because nobody is responsible?
If applying a proper chain of liability to AI output makes some uses of AI impossible, so be it.
Actually, no. If you published an article where you accidentally copypasta'd text from the wrong email (for example) on a busy day and wound up doing the same thing, it would be an honest mistake, you would be expected to put up a correction and move on with your life as a journalist.
As it should; this is misinformation and/or slander. The disclaimer is not good enough. A few years ago, Google and most of the social media was united in fact checking and fighting "fake news". Now they push AI generated information that use authoritative language at the very top of e.g. search results.
The disclaimer is moot if people consider AI to be authoritative anyway.
Do you want your country's current political leaders to have more weapons to suppress information they dislike or facts they disagree with? If yes, will you also be happy if your country's opposition leaders gain that power in a few years?
A conspiracy guy who ran a disqualified campaign for a TN rep seat sued Facebook for defamation for a hallucination saying he took part in the J6 riots. They settled the suit and hired him as an anti-DEI advisor.
(I don’t have proof that hiring him was part of the undisclosed settlement terms but since I’m not braindead I believe it was.)
What we're talking about here are legal democratic weapons. The only thing stopping us from using these weapons right now is democratic governance. "The bad people", being unconcerned with democracy, can already use these weapons right now. Trump's unilateral application of tariffs wasn't predestined by some advancement of governmental power by the Democrats. He just did it. We don't even know whether it was legal.
Secondly, the people in power are the ones spreading the misinformation we are looking at. Information is getting suppressed by the powerful. Namely Google.
Placing limits on democracy in the name of "stopping the bad guys" will usually just curtail the good guys from doing good things, and bad guys doing the bad thing anyway.
It's all fun and games until the political winds sway the other way, and the other side are attacking your side for "misinformation".
You apply for a job, using your standardized Employ resume that you filled out. It comes bundled with your Employ ID, issued by the company to keep track of which applications have been submitted by specifically you.
When Employ AI does its internet background check on you, it discovers an article about a horrific attack. Seven dead, twenty-six injured. The article lists no name for the suspect, but it does have an expert chime in, one that happens to share their last name with you. Your first name also happens to pop up somewhere in the article.
With complete confidence that this is about you, Employ AI adds the article to its reference list. It condenses everything into a one-line summary: "Applicant is a murderer, unlikely to promote team values and social cohesion. Qualifications include..." After looking at your summary for 0.65 seconds, the recruiter rejects your application. Thanks to your Employ ID, this article has now been stapled to every application you'll ever submit through the system.
You've been nearly blacklisted from working. For some reason, all of your applications never go past the initial screening. You can't even know about the existence of the article, no one will tell you this information. And even if you find out, what are you going to do about it? The company will never hear your pleas, they are too big to ever care about someone like you, they are not in the business of making exceptions. And legally speaking, it's technically not the software making final screening decisions, and it does say its summaries are experimental and might be inaccurate in 8pt light gray text on a white background. You are an acceptable loss, as statistically <1% of applicants find themselves in this situation.
If anything this scenario makes the hiring process more transparent.
Do you currently run the various automated resume parsing software that employers use? I mean - do you even know what the software is? Like even a name or something? No?
AI believers, pay attention and stop your downplaying and justifications. This can hit you too, or your healthcare. The machine doesn't give a damn.
Dave Barry is pretty much A-list famous.
https://theconversation.com/why-microsofts-copilot-ai-falsel...
Definitely not the last.
https://link.springer.com/article/10.1007/s10676-024-09775-5
Not sure where you're getting the 45 GB number.
Also, Google doesn’t use GPT-4 for summaries. They use a custom version of their Gemini model family.
Gemma 3 270M was trained on 6 trillion tokens but can be loaded into a few hundred million bytes of memory.
But yeah GPT-4 is certainly way bigger than 45GB.