The overviews are also wrong and difficult to get fixed.
Google AI has been listing incorrect internal extensions causing departments to field calls for people trying to reach unrelated divisions and services, listing times and dates of events that don't exist at our addresses that people are showing up to, and generally misdirecting and misguiding people who really need correct information from a truth source like our websites.
We have to track each and every one of these problems down, investigate whether we can reproduce them, and give them a "thumbs down" just to be able to submit "feedback", with no assurance it will be fixed in a timely manner and no obvious way to opt out entirely of something beyond our consent and control.
It's worse than when Google and Yelp would create unofficial business profiles on your behalf and then held them hostage until you registered with their services to change them.
brianwawok · 3h ago
Fun. I have people asking ChatGPT support questions about my SaaS app, getting made-up answers, and then cancelling because ChatGPT claims we can't do something that we actually can. Can't make this crap up. How do I teach ChatGPT every feature of a random SaaS app?
esafak · 1h ago
Write documentation and don't block crawlers.
zdragnar · 3m ago
There's a library I use with extensive documentation: every method, parameter, event, and configuration option conceivable is documented.
Every so often I get lost in the docs trying to do something that actually isn't supported (the library has some glaring oversights) and I'll search on Google to see if anyone else came up with a similar problem and solution on a forum or something.
Instead of telling me "that isn't supported", the AI overview says "here's roughly how you would do it with libraries of this sort" and then provides a fictional code sample with actual method names from the documentation, except the comments claim a method does one thing when, if you check the documentation to be sure, it actually does something different.
It's a total crapshoot on any given search whether I'll be saving time or losing it using the AI overview, and I'm cynically assuming that we are entering a new round of the Dark Ages.
ceejayoz · 50m ago
It’ll still make shit up.
nomel · 7m ago
It'll need to make up less, so still worth it.
recursive · 2m ago
It doesn't need to make up any.
mysterydip · 2h ago
I was at an event where someone was arguing there wasn't an entry fee because ChatGPT said it was free (with a screenshot as proof), then asked why they weren't honoring their online price.
amluto · 1h ago
I particularly hate when the AI overview is directly contradicted by the first few search results.
andrei_says_ · 26m ago
Their goal has always been to be the gatekeeper.
Nursie · 1h ago
I still find it amazing that the world's largest search engine, which so many use as an oracle, is so happy to put wrong information at the top of its page. My examples recently -
- Looking up a hint for the casino room in the game "Blue Prince", the AI summary gave me details of the card games on offer at the "Blue Prince Casino" in the next suburb over. There is no casino there.
- Looking up workers' rights during a discussion of something to do with management, it directly contradicted the legislation and official government guidance.
I can't imagine how frustrating it must be for business owners, or those providing information services, to find that their traffic is intercepted and their potential visitors treated to an inaccurate version on the search page.
wat10000 · 5m ago
For years, a search for “is it safe to throw used car batteries into the ocean” would show an overview saying that not only is it safe, it’s beneficial to ocean life, so it’s a good thing to do.
At some point, an article about how Google was showing this crap made it to the top of the rankings and they started taking the overview from it rather than the original Quora answer it used before. Somehow it still got it wrong, and just lifted the absurd answer from the article rather than the part where the article says it’s very wrong.
Amusingly, they now refuse to show an AI answer for that particular search.
deadbabe · 49m ago
> The overviews are also wrong and difficult to get fixed.
Let’s not pretend that some websites aren’t straight up bullshit.
There’s blogs spreading bullshit, wrong info, biased info, content marketing for some product etc.
And lord knows comments are frequently wrong, just look around Hackernews.
I’d bet that LLMs are actually wrong less often than typical search results, because they pull from far greater training data. “Wisdom of the crowds”.
mvdtnz · 31m ago
When asking a question do you not see a difference between
1. Here's the answer (but it's misinformation)
2. Here are some websites that look like they might have the answer
?
owlstuffing · 55m ago
> The overviews are also wrong and difficult to get fixed.
No different from Google search results.
mtkd · 1h ago
Conversely, it's useful to get an immediate answer sometimes
6 months ago, "what temp is pork safe at?" was a few clicks and long SEO-optimised blog post answers away, usually all in F not C ... despite Google knowing location ... I used it at the time as an example of 'how hard can this be?'
First sentence of the Google AI response right now: "Pork is safe to eat when cooked to an internal temperature of 145°F (63°C)"
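For reference, the localization those SEO posts never bother with is a one-line conversion; 145°F is the whole-cut figure the overview quotes:

```python
def f_to_c(f: float) -> float:
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

# The whole-cut pork figure quoted in the overview:
print(round(f_to_c(145)))  # 63
```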
ncallaway · 1h ago
Dear lord please don’t use an AI overview answer for food safety.
If you made a bet with your friend and are using the AI overview to settle it, fine. But please please click on an actual result from a trusted source if you’re deciding what temperature to cook meat to
sothatsit · 38m ago
The problem is that SEO has made it hard to find trustworthy sites in the first place. The places I trust the most now for random information are Reddit and Wikipedia, which is absolutely ridiculous as they are terrible options.
But SEO slop machines have made it so hard to find the good websites without putting in more legwork than makes sense a lot of the time. Funnily enough, this makes AI look like a good option to cut through all the noise despite its hallucinations. That's obviously not acceptable when it comes to food safety concerns though.
jordanb · 7m ago
Google could have cut down on this if they wanted. And in general they did until they fired Matt Cutts.
The reality is, every time someone's search is satisfied by an organic result, Google loses revenue.
zahlman · 1h ago
I've been finding that the proliferation of AI slop is at its worst on recipe/cooking/nutrition sites, so....
brookst · 1h ago
Safe Temperatures for Pork
People have been eating pork for over 40,000 years. There’s speculation about whether pork or beef was first a part of the human diet.
(5000 words later)
The USDA recommends cooking pork to at least 145 degrees.
First result under the overview is the National Pork Board, shows the answer above the fold, and includes visual references: https://pork.org/pork-cooking-temperature/
Most of the time if there isn't a straightforward primary source in the top results, Google's AI overview won't get it right either.
Given the enormous scale and latency constraints they're dealing with, they're not using SOTA models, and they're probably not feeding the model 5000 words worth of context from every result on the page.
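To illustrate the budget problem, here's a toy sketch (all names hypothetical, nothing to do with Google's actual pipeline) of greedy context packing under a word budget; one bloated result can crowd out the short authoritative one:

```python
def pack_snippets(snippets: list[str], word_budget: int) -> list[str]:
    """Greedily keep leading snippets until the word budget runs out
    (a toy stand-in for the context truncation a latency-bound
    summarizer would have to do)."""
    packed, used = [], 0
    for s in snippets:
        n = len(s.split())
        if used + n > word_budget:
            break
        packed.append(s)
        used += n
    return packed

results = ["grandma's 5000 word pork story " * 10,  # 50 words of filler
           "USDA: cook pork to 145 F then rest 3 minutes"]
kept = pack_snippets(results, 50)
print(len(kept))  # 1: the filler fills the budget; the USDA line is cut
```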
ImaCake · 1h ago
Not only that, it includes a link to the USDA reference so you can verify it yourself. I have switched back to google because of how useful I find the RAG overviews.
wat10000 · 1h ago
The link is the only useful part, since you can’t trust the summary.
Maybe they could just show the links that match your query and skip the overview. Sounds like a billion-dollar startup idea, wonder why nobody’s done it.
owenversteeg · 41m ago
It’s a pretty good billion dollar idea, I think you’ll do well. In fact I bet you’ll make money hand over fist, for years. You could hire all the best engineers and crush the competition. At that point you control the algorithm that everyone bases their websites on, so if you were to accidentally deploy a series of changes that incentivized low quality contentless websites… it wouldn’t matter at all; not your problem. Now that the quality of results is poor, but people still need their queries answered, why don’t you provide them the answers yourself? You could keep all the precious ad revenue that you previously lost when people clicked on those pesky search results.
krupan · 1h ago
This should be the top comment! Thank you for posting it because I'm starting to worry that I'm the only one who realizes how ridiculous this all is.
It doesn't take long to find SEO slop trying to sell you something:
When our grandmothers and grandfathers were growing up, there was a real threat to their health that we don’t face anymore. No, I’m not talking about the lack of antibiotics, nor the scarcity of nutritious food. It was trichinosis, a parasitic disease that used to be caught from undercooked pork.
The legitimate worry of trichinosis led their mothers to cook their pork until it was very well done. They learned to cook it that way and passed that cooking knowledge down to their offspring, and so on down to us. The result? We’ve all eaten a lot of too-dry, overcooked pork.
But hark! The danger is, for the most part, past, and we can all enjoy our pork as the succulent meat it was always intended to be. With proper temperature control, we can have better pork than our ancestors ever dreamed of. Here, we'll look at a more nuanced way of thinking about pork temperatures than you've likely encountered before.
Sorry, what temperature was it again?
Luckily there's the National Pork Board which has bought its way to the top, just below the AI overview. So this time around I won't die from undercooked pork at least.
hansvm · 1h ago
As of a couple weeks ago it had a variety of unsafe food recommendations regarding sous vide, e.g. suggesting 129F for 4+ hours for venison backstrap. That works great some of the time but has a very real risk of bacterial infiltration (133F being similar in texture and much safer, or 2hr being a safer cook time if you want to stick to 129F).
Trust it if you want I guess. Be cautious though.
mitthrowaway2 · 1h ago
Google's search rankings are also the thing driving those ridiculous articles to the top, which is the only reason so many of them get written...
ljlolel · 1h ago
And it's also why they incentivized all this human-written training data, which will no longer be incentivized.
sgentle · 1h ago
"full moon time NY"
> The next full moon in New York will be on August 9th, 2025, at 3:55 a.m.
"full moon time LA"
> The next full moon in Los Angeles will be on August 9, 2025, at 3:55 AM PDT.
I mean, it certainly gives an immediate answer...
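Those two answers can't both be right: a full moon is a single instant, so the local clock times must differ by the timezone offset. Taking the quoted New York time (3:55 a.m. EDT, i.e. 07:55 UTC) as given, Python's zoneinfo shows what the LA answer should have been:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Assume the quoted NY answer (3:55 a.m. EDT on 2025-08-09) is correct;
# that instant is 07:55 UTC.
full_moon_utc = datetime(2025, 8, 9, 7, 55, tzinfo=timezone.utc)

for tz in ("America/New_York", "America/Los_Angeles"):
    local = full_moon_utc.astimezone(ZoneInfo(tz))
    print(tz, local.strftime("%H:%M"))
# America/New_York 03:55
# America/Los_Angeles 00:55
```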
squigz · 31m ago
I wonder how people have such awful experiences with (traditional) Google when I don't and really never have.
It’s only useful if you can trust it, and you very much cannot.
I know you can’t necessarily trust anything online, but when the first hit is from the National Pork Board, I’m confident the answer is good.
h4kunamata · 7m ago
Not with just any AI, though.
I have replaced search with Perplexity AI only.
It isn't a chatbot, but it actually searches for what you are looking for and, most importantly, it shows all the sources it used.
Depending on the question I can get anywhere from 10 to 40 sources.
No other AI service provides that; the rest use data from their training model only, which in my experience is full of errors, incomplete, or cannot answer altogether.
yalogin · 10m ago
For the most part there really is no need to use search in the traditional sense for knowledge. For information it's still the only option, because LLMs are not reliable. But ChatGPT must have put a huge dent in Google's traffic.
ghushn3 · 5h ago
I subscribe to Kagi. It's been worth it to have no ads and the ability to uprank/downrank sites.
And there's no AI garbage sitting in the top of the engine.
slau · 5h ago
You can opt-in to get an LLM response by phrasing your queries as a question.
Searching for “who is Roger rabbit” gives me Wikipedia, IMDb and film site as results.
Searching for “who is Roger rabbit?” gives me a “quick answer” LLM-generated response: “Roger Rabbit is a fictional animated anthropomorphic rabbit who first appeared in Gary K. Wolf's 1981 novel…” followed by a different set of results. It seems the results are influenced by the sources/references the LLM generated.
abtinf · 1h ago
You don’t have to phrase it as a question; just append a ?, which is an operator telling it you want a generated answer.
greatgib · 2h ago
I don't think that you are right. It is the search results that influence the LLM-generated result, and not the opposite.
In your case, I think it is just the question mark itself at the end that somehow has an impact on the results you see.
s900mhz · 2h ago
It’s a feature of Kagi. Putting the question mark does invoke AI summaries.
Thanks for the suggestion. I try nonstandard search engines now and then and maybe this one will stick. Google certainly is trying their best to encourage me.
stevenAthompson · 26m ago
I subscribe also, and prefer it for most things.
However, it's pretty bad for local results and shopping. I find that anytime I need to know a local store's hours or find the cheapest place to purchase an item, I need to pivot back to Google. Other than that, it's become my default for most things.
standardUser · 5h ago
I'm more interested now than ever. A lot of my time spent searching is for obscure or hard-to-find stuff, and in the past smaller search engines were useless for this. But most of my searches are quick and the primary thing slowing me down are Google product managers. So maybe Kagi is worth a try?
ghushn3 · 3h ago
You can try it for free. I did my 300 searches on it and went, "Yep. This is better." and then converted to a paid user.
Melatonic · 1h ago
It's awesome - highly recommend trying it
voltaireodactyl · 5h ago
I think you might be happily surprised for sure.
dyauspitr · 6m ago
It’s useless. Worse than Google and no AI summary.
outlore · 1h ago
is there a way to make the Safari search bar on iOS show the Kagi search term rather than the URL?
bgwalter · 6h ago
You can apparently disable these annoying and useless "AI" overviews by cursing in the query.
It's relatively straightforward to create a Firefox alternate search engine which defaults to the "web" tab of Google search results, which is mostly free of Google-originated LLM swill.
Or just append with -ai => "how to pick a running shoe -ai"
x0x0 · 5h ago
appending a -"fuck google #{insert slur of choice here}" to my search queries has improved them. Then I wonder why I do this to myself and ponder going back to kagi.
privatelypublic · 1h ago
Jesus dude. Just use the udm options instead of practicing slurs.
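The `udm` option the parent mentions boils down to one query parameter: `udm=14` is the widely reported (though undocumented) value that selects Google's plain "Web" tab, which omits the AI overview. A minimal sketch of building such a URL:

```python
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    # udm=14 selects Google's plain "Web" results tab, which omits the
    # AI overview (a widely reported but undocumented parameter).
    params = urlencode({"q": query, "udm": "14"})
    return f"https://www.google.com/search?{params}"

print(web_only_search_url("how to pick a running shoe"))
# https://www.google.com/search?q=how+to+pick+a+running+shoe&udm=14
```

A browser "custom search engine" entry with `%s` in place of the query does the same thing without any code.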
oezi · 6h ago
The tricky thing for Google will be to do this and not kill their cash cow ad business.
kozikow · 5h ago
Ads inside LLMs (e.g. pay $ to boost your product in LLM recommendations) are going to be a big thing.
My guess is that Google/OpenAI are eyeing each other - whoever does this first.
Why would that work? Because it's a proven business model. Example: I use LLMs for product research (e.g. which washing machine to buy). A retailer pays if the link to their website is included in the results. Don't want to pay? Then the user gets redirected to buy it on Walmart instead of Amazon.
msgodel · 1h ago
People are already wary of hosted LLMs having poisoned training data. That might kill them altogether and push everyone to using, e.g., Qwen3-coder.
landl0rd · 34m ago
No, a small group of highly tech-literate people are wary of this. Your personal bubble is wary of this. So is some of mine. "People" don't care and will use the packaged, corporate, convenient version with the well-known name.
People who are aware of that and care enough to change consumption habits are an inconsequential part of the market.
msgodel · 7m ago
I don't know; a bunch of the older people from the town I grew up in avoided using LLMs until Grok came out, because of what they saw going on with alignment in the other models (they certainly couldn't articulate this, but listening to what they said, it's what they were thinking). Obviously Grok has the same problems, but I think it goes to show the general public is more aware of the issue than they get credit for.
You combine this with Apple pushing on-device inference and making it easy, and anything like ads will probably kill hosted LLMs for most consumers.
pryelluw · 5h ago
Not tricky at all.
This is a new line of business that provides them with more ad space to sell.
If the overview becomes a trusted source of information, then all they need to do is inject ads into the overviews. They already sort of do that. Imagine it as a sort of text-based product placement.
NoPicklez · 2h ago
I'd say putting ads into AI search overviews is absolutely tricky.
You might think that's the correct way to do it, but there is likely much more to it than it seems.
If it weren't tricky at all, you can bet they would've done it already to maximize revenue.
pryelluw · 2h ago
Product teams in big companies move slow. But soon enough all the shit ads are going to pop up.
stevenAthompson · 20m ago
> If the overview becomes a trusted source of information
It never will. By disincentivizing publishers they're stripping away most of the motivation for the legitimate source content to exist.
AI search results are a sort of self-cannibalism. Eventually AI search engines will only have what they cached before the web became walled gardens (old data), and public gardens that have been heavily vandalized with AI slop (bad data).
Gigachad · 4h ago
I’d guess that the searches where AI overviews are useful and the searches where companies are buying ads are probably fairly distinct. If you search for plumbers near you, they won’t show an AI overview, while if you search “Why are plants green?”, no one was buying ads on that.
ahartmetz · 2h ago
Somebody is working on "native advertising" in AI slop, surely? Barf.
bethekidyouwant · 1h ago
you can't make the slop have a nice clean ad in it. Also: as soon as your slop has ads in it, I'm going to make Pol Pot advertise your product.
dado3212 · 6h ago
Related, but to whichever PM put the "AI Mode" on the far left side of the toolbar, thus breaking the muscle memory from clicking "All" to get back from "Images", I expect some thanks for unintentionally boosting your CTR metrics.
fsh · 5h ago
That decision probably paid someone's new car. The KPIs will be excellent. Who cares about what the users might have wanted to do with their clicks.
JKCalhoun · 5h ago
Liberating me from "search clicks" is not a bad thing at all. I suspect many of us though don't even go to <search engine> anyway but ask an LLM directly.
achierius · 5h ago
It's fundamentally self-destructive though. In time, the sites which rely on search clicks for revenue will essentially cease to be paid for their work, and in many cases will therefore stop publishing the high-quality material that you're looking for.
JKCalhoun · 4h ago
After using LLMs myself more and more, I assumed that LLMs killing search was inevitable anyway. Further, I assume that Google recognizes this as well and would rather at least remain somewhat relevant.
Google search, as others have mentioned in this thread, increasingly fails to give me high-quality material anyway. Mostly it's just pages of SEO spam. I prefer that the LLM eat that instead of me (just spit back up the relevant stuff, thankyouverymuch).
Honestly though, increasingly the internet for me is 1) a distraction from doing real work 2) YouTube (see 1) and 3) a wonderful library called archive.org (which, if I could grab a local snapshot would make leaving the internet altogether much, much easier).
landl0rd · 29m ago
Most of the time when I find a good answer from search it's one of a few things:
- Hobbyist site
- Forum or UGC
- Academic/gov
- Quality news which is often paywalled
Most of that stuff doesn't depend on ad clicks. The things that do depend on ad clicks are usually infuriating slop. I refuse to scroll through three pages of BS to get to the information I want.
Nursie · 1h ago
I'm intrigued as to why someone, presumably tech-savvy, would do that?
We know they aren't oracles and come up with a lot of false information in response to factual questions.
telchior · 58m ago
In my view it's a pretty straightforward calculation. Nothing is free, no knowledge is instant. Start off knowing your time investment to learn anything is greater than zero and go from there..
If you do a Google (or other engine) search, you have to invest time pawing through the utter pile of shit that Google ads created on the web. Info that's hidden under reams of unnecessary text, potentially out of date, potentially not true; you'll need to evaluate a list of links and, probably, open multiple of them.
If you do an AI "search", you ask one question and get one answer. But the answer might be a hallucination or based on incorrect info.
However, a lot of the time, you might be searching for something you already have an idea of, whether it's how to structure a script or what temperature pork is safe at; you can use your existing knowledge to assess the AI's answer. In that case the AI search is fast.
The rest of the time, you can at least tell the AI to include links to its references, and check those. Or its answer may help you construct a better Google search.
Ultimately search is a trash heap of Google's making, and I have absolute confidence in them also turning AI into a trash heap, but for now it is indeed faster for many purposes.
krupan · 1h ago
I feel like the discussion here is missing the point. It doesn't matter if the AI overview is correct or not; it doesn't matter if you can turn it off or not. People are using it instead of visiting actual websites. Google has copied the entire World Wide Web into their LLM, and now people aren't using the web anymore! We have bemoaned the fact that Facebook and Twitter replaced most of the web for most people, but now it's not even those; it's a single LLM owned and controlled by a single corporation.
landl0rd · 31m ago
Is there an appreciable difference between a company that controls what information is surfaced via pagerank and one that does so via LLM?
Remember the past scandals with Google up/downranking various things? This isn't a new problem. Wrt how the average person gets information, Google doesn't really have more control just because people aren't clicking through as much.
yfw · 5h ago
Maybe if the search wasn't full of ads and scams
throwawayoldie · 32m ago
...the same content that the AI was trained on, you mean?
awakeasleep · 14m ago
That's not a real rebuttal.
First, in the pre-training stage, humans curate and filter the data that's actually used for training.
Then, in the fine-tuning stage, people write ideal examples to teach task performance.
Then there is reinforcement learning from human feedback (RLHF), where people rank multiple variations of the answer an AI gives, and that's part of the reinforcement loop.
So there is really quite a bit of human effort and direction that goes into preventing the garbage-in, garbage-out situation you're referring to.
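The ranking step above can be sketched as fitting a reward score to pairwise human preferences (a toy Bradley-Terry-style update; the answer ids and pairs here are made up for illustration, this is not any lab's actual pipeline):

```python
import math

def train_reward(pairs, steps=200, lr=0.1):
    """Toy reward model: one scalar score per answer id, fit so that
    human-preferred answers score higher (logistic/Bradley-Terry loss).
    `pairs` is a list of (preferred_id, rejected_id)."""
    scores = {}
    for _ in range(steps):
        for win, lose in pairs:
            s_w = scores.setdefault(win, 0.0)
            s_l = scores.setdefault(lose, 0.0)
            # probability the model currently assigns to the human choice
            p = 1 / (1 + math.exp(s_l - s_w))
            g = lr * (1 - p)  # gradient step on -log p
            scores[win] = s_w + g
            scores[lose] = s_l - g
    return scores

# Hypothetical rankings: the human preferred "helpful" in both comparisons.
scores = train_reward([("helpful", "hallucinated"), ("helpful", "rude")])
print(max(scores, key=scores.get))  # helpful
```

The real reinforcement loop then optimizes the policy model against this learned reward; the sketch only covers the preference-fitting part.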
landl0rd · 24m ago
At least they're not thrown in my face and appearing as eighteen pop-ups, notification requests, account-walls, SEOslop, and a partridge in a pear tree.
wkat4242 · 6h ago
To be fair, Google's actual search couldn't be much worse than it was lately. It's like they really try to get all the spam, clickbait and scams right at the top.
The AI overview sucks but it can't really be a lot worse than that :)
ValveFan6969 · 4h ago
The clicks in question: "Here's a thirty page story of how grandma discovered this recipe... BTW you need to subscribe/make an account/pay to view the rest of the article!"
maxdo · 4h ago
The pay-per-click model should die. It's a really ugly world where you need to fight through loads of ads to get a tiny bit of information.
People will go to museums to see how complicated the pre-AI era was.
Gigachad · 4h ago
Yep, there’s so much hate for people who don’t read past the headline, but if you actually click on the articles the websites are almost unusable.
skywhopper · 6h ago
Which is of course Google’s short-sighted goal. See also their push to switch to full “AI mode” search which doesn’t show results at all.
thewebguyd · 6h ago
It's a weird goal to me. Like, what's their end game here? Offer to manipulate the AI responses for ad money? Product placement in the summaries? I would hope those placements have to be disclosed as advertising, and it would immediately break trust in anything their AI outputs so surely that would only continue to harm them in the long run, no?
~57% of their revenue is from search advertising. How do they plan on replacing that?
mrheosuper · 56m ago
AI subscription would be my guess. Want better model ? Open your wallet.
xt00 · 6h ago
Yea it is tricky for them -- the old model of "search, see google text / link ad, scroll, click website, scroll, see some ads on that page as well, done" will be replaced with "search, see google text / link ad, read AI result, 'and here are some relevant websites'" -- where all of the incentives there will be to "go into more depth" on the websites that are linked there.
bugsMarathon88 · 3h ago
Total behavioral control through the augmentation of senses, emotions and all other sensibilities. Such political power is significantly more valuable than mere revenue.
landl0rd · 27m ago
Okay I think the question is still how they plan to convert this into cash, because political power can't buy food or pay your employees or be stored and quantified simply, which is why we invented money. Assuming this dystopian scenario is correct.
flashgordon · 6h ago
So you'd be surprised and scared: the Ad PMs I know are totally salivating at this. Their angle is "SEO is no more - it is GEO now". GenAI Engine Optimization. Welcome to the Futurama Internet Future!
EarlKing · 4h ago
"Futurama does not endorse the COOOOOOL crime of fraudulent misrepresentation!"
Seriously, Futurama and Cyberpunk and 1984 were all supposed to be warnings... not how-to manuals.
HPsquared · 6h ago
I've seen blatantly wrong stuff in that overview too many times; I just ignore it now.
Jare · 5h ago
To be fair, the actual results are often even worse. I'm pretty sure we're close to the point where our favorite AI prompt replaces classic googling. While it will get a lot of the answers wrong, it will lead to the right result faster than plain searches, if nothing else because refining our search at the AI prompt will be way easier than in classic Google. Google knows it needs to stay on top of this paradigm change, but I guess it doesn't know how to monetize AI search yet, so it doesn't want to force the change (yet).
throaway5454 · 2h ago
AI used in this way is going to replace the GUI as we know it. Why click when you can just tell the AI what you want it to do?
thejohnconway · 1h ago
Because I usually don’t want to talk to computers in front of other people? It isn’t that it feels silly, but that it’s incredibly distracting for everyone to hear every interaction you have with a computer. This is true even at home.
Maybe we can type the commands, but that is also quite slow compared with tapping/clicking/scrolling etc.
Nursie · 1h ago
Sometimes search results don't contain the info you need, sometimes they are SEO-spam, or a major topic adjacent to what you need to know about floods the results.
But they're not often confidently wrong like AI summaries are.
ahartmetz · 2h ago
I recently typo'd something and the AI box just fabricated a semi-plausible story about how the ancient X did Y.
blibble · 4h ago
no reason not to block Googlebot now...
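For anyone weighing this, a sketch of the relevant robots.txt directives. `Googlebot` and `Google-Extended` are Google's documented crawler tokens; note that per Google's crawler documentation, `Google-Extended` governs use of your content for Gemini training and grounding, while blocking `Googlebot` itself removes you from Search entirely:

```
# Block Google's main crawler entirely (also removes you from Search):
User-agent: Googlebot
Disallow: /

# Or keep normal Search indexing but opt out of Gemini/AI training:
User-agent: Google-Extended
Disallow: /
```

Whether either token suppresses AI Overviews drawn from already-indexed pages is a separate question; the overview feature rides on regular Search indexing.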
j45 · 5h ago
This means searches are still happening, just being routed elsewhere?
I noticed Google's new AI summary lets me click on a link in the summary, and the links are posted to the right.
Those clicks are available, might just not be discovered yet; curious, though, whether they show up anywhere as data.
Google being able to create summaries off actual web search results will be an interesting take compared to other models trying to get the same done without similar search results at their disposal.
The new search engine could be Google doing the search and compiling the results for us the way we do manually.
thewebguyd · 5h ago
> Google being able to create summaries off actual web search results will be an interesting take compared to other models trying to get the same done without similar search results at their disposal.
And may get them in some anti-trust trouble once publishers start fighting back, similar to AMP, or their thing with Genius and song lyrics. Turns out site owners don't like when Google takes their content and displays it to users without forcing said users to click through to the actual website.
ars · 5h ago
The AI overview doesn't (for me) cause a big drop in clicking on sites.
But AI as a product most certainly does! I was trying to figure out why a certain AWS tool stopped working, and Gemini figured it out for me. In the past I would have browsed multiple forums to figure it out.
Google AI has been listing incorrect internal extensions causing departments to field calls for people trying to reach unrelated divisions and services, listing times and dates of events that don't exist at our addresses that people are showing up to, and generally misdirecting and misguiding people who really need correct information from a truth source like our websites.
We have to track each and every one of these problems down, investigate and evaluate whether we can reproduce them, give them a "thumbs down" to then be able to submit "feedback", with no assurance it will be fixed in a timely manner and no obvious way to opt ourselves out of it entirely. For something beyond our consent and control.
It's worse than when Google and Yelp would create unofficial business profiles on your behalf and then held them hostage until you registered with their services to change them.
Every so often I get lost in the docs trying to do something that actually isn't supported (the library has some glaring oversights) and I'll search on Google to see if anyone else came up with a similar problem and solution on a forum or something.
Instead of telling me "that isn't supported" the AI overview instead says "here's roughly how you would do it with libraries of this sort" and then it would provide a fictional code sample with actual method names from the documentation, except the comments say the method could do one thing, but when you check the documentation to be sure, it actually does something different.
It's a total crapshoot on any given search whether I'll be saving time or losing it using the AI overview, and I'm cynically assuming that we are entering a new round of the Dark Ages.
- Looking up a hint for the casino room in the game "Blue Prince", the AI summary gave me details of the card games on offer at the "Blue Prince Casino" in the next suburb over. There is no casino there.
- Looking up workers rights during a discussion of something to do with management, it directly contradicted the legislation and official government guidance.
I can't imagine how frustrating it must be for business-owners, or those providing information services to find that their traffic is intercepted and their potential visitors treated to an inaccurate version on the search page.
At some point, an article about how Google was showing this crap made it to the top of the rankings and they started taking the overview from it rather than the original Quora answer it used before. Somehow it still got it wrong, and just lifted the absurd answer from the article rather than the part where the article says it’s very wrong.
Amusingly, they now refuse to show an AI answer for that particular search.
Let’s not pretend that some websites aren’t straight up bullshit.
There’s blogs spreading bullshit, wrong info, biased info, content marketing for some product etc.
And lord knows comments are frequently wrong, just look around Hacker News.
I’d bet that LLMs are actually wrong less often than typical search results, because they pull from far greater training data. “Wisdom of the crowds”.
1. Here's the answer (but it's misinformation)
2. Here are some websites that look like they might have the answer
?
No different from Google search results.
6 months ago, "what temp is pork safe at?" was a few clicks through long SEO-optimised blog-post answers, usually all in F not C ... despite Google knowing location ... I used it as an example at the time of 'how hard can this be?'
First sentence of Google AI response right now: "Pork is safe to eat when cooked to an internal temperature of 145°F (63°C)"
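The quoted figure at least converts correctly; the F-to-C relation is simple arithmetic. A quick sketch (the function name is mine):

```python
def f_to_c(fahrenheit: float) -> float:
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (fahrenheit - 32) * 5 / 9

# 145°F rounds to 63°C, matching the overview's parenthetical.
print(round(f_to_c(145)))  # 63
```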
If you made a bet with your friend and are using the AI overview to settle it, fine. But please, please click on an actual result from a trusted source if you're deciding what temperature to cook meat to.
But SEO slop machines have made it so hard to find the good websites without putting in more legwork than makes sense a lot of the time. Funnily enough, this makes AI look like a good option to cut through all the noise despite its hallucinations. That's obviously not acceptable when it comes to food safety concerns though.
The reality is, every time someone's search is satisfied by an organic result is lost revenue for Google.
People have been eating pork for over 40,000 years. There’s speculation about whether pork or beef was first a part of the human diet.
(5000 words later)
The USDA recommends cooking pork to at least 145 degrees.
First result under the overview is the National Pork Board, shows the answer above the fold, and includes visual references: https://pork.org/pork-cooking-temperature/
Most of the time if there isn't a straightforward primary source in the top results, Google's AI overview won't get it right either.
Given the enormous scale and latency constraints they're dealing with, they're not using SOTA models, and they're probably not feeding the model 5000 words worth of context from every result on the page.
Maybe they could just show the links that match your query and skip the overview. Sounds like a billion-dollar startup idea, wonder why nobody’s done it.
When our grandmothers and grandfathers were growing up, there was a real threat to their health that we don’t face anymore. No, I’m not talking about the lack of antibiotics, nor the scarcity of nutritious food. It was trichinosis, a parasitic disease that used to be caught from undercooked pork.
The legitimate worry of trichinosis led their mothers to cook their pork until it was very well done. They learned to cook it that way and passed that cooking knowledge down to their offspring, and so on down to us. The result? We’ve all eaten a lot of too-dry, overcooked pork.
But hark! The danger is, for the most part, past, and we can all enjoy our pork as the succulent meat it was always intended to be. With proper temperature control, we can have better pork than our ancestors ever dreamed of. Here, we’ll look at a more nuanced way of thinking about pork temperatures than you’ve likely encountered before."
Sorry, what temperature was it again?
Luckily there's the National Pork Board which has bought its way to the top, just below the AI overview. So this time around I won't die from undercooked pork at least.
Trust it if you want I guess. Be cautious though.
> The next full moon in New York will be on August 9th, 2025, at 3:55 a.m.
"full moon time LA"
> The next full moon in Los Angeles will be on August 9, 2025, at 3:55 AM PDT.
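The two answers can't both be right: a full moon is a single astronomical instant, so it can't fall at 3:55 a.m. in both Eastern and Pacific time. Taking the quoted New York time at face value, a small sketch with Python's `zoneinfo` shows what the Los Angeles clock time would have to be:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# The quoted New York answer, taken at face value.
ny = datetime(2025, 8, 9, 3, 55, tzinfo=ZoneInfo("America/New_York"))

# The same instant expressed in Los Angeles time (PDT is 3 hours behind EDT).
la = ny.astimezone(ZoneInfo("America/Los_Angeles"))
print(la.strftime("%Y-%m-%d %H:%M %Z"))  # 2025-08-09 00:55 PDT
```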
I mean, it certainly gives an immediate answer...
First result: https://www.porkcdn.com/sites/porkbeinspired/library/2014/06...
Second result: https://pork.org/pork-cooking-temperature/
I know you can’t necessarily trust anything online, but when the first hit is from the National Pork Board, I’m confident the answer is good.
I have replaced SEO with Perplexity AI only. It isn't a chatbot but it actually search for what you are looking for and most importantly, it shows all the sources it used.
Depending on the question I can get anywhere from 10 to 40 sources. No other AI service provides that; they use the data from their training model only, which in my experience is full of errors, incomplete, or can't answer at all.
And there's no AI garbage sitting in the top of the engine.
Searching for “who is Roger rabbit” gives me Wikipedia, IMDb and film site as results.
Searching for “who is Roger rabbit?” gives me a “quick answer” LLM-generated response: “Roger Rabbit is a fictional animated anthropomorphic rabbit who first appeared in Gary K. Wolf's 1981 novel…” followed by a different set of results. It seems the results are influenced by the sources/references the LLM generated.
In your case, I think it is just the question mark at the end that somehow has an impact on the results you see.
https://help.kagi.com/kagi/ai/quick-answer.html
However, it's pretty bad for local results and shopping. I find that any time I need to know a local store's hours or find the cheapest place to purchase an item, I need to pivot back to Google. Other than that it's become my default for most things.
https://arstechnica.com/google/2025/01/just-give-me-the-fing...
Instructions are here: https://support.mozilla.org/en-US/kb/add-custom-search-engin...
The "URL with %s in place of search term" to add is:
https://www.google.com/search?q=%s&client=firefox-b-d&udm=14
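The `%s` placeholder is where the browser substitutes your URL-encoded search term; `udm=14` is the parameter that selects Google's web-results-only view. A sketch of what that substitution produces (variable names are mine):

```python
from urllib.parse import quote_plus

# The custom search engine template from the Firefox instructions above.
template = "https://www.google.com/search?q=%s&client=firefox-b-d&udm=14"

# The browser replaces %s with the URL-encoded query.
query = "pork safe temperature"
url = template.replace("%s", quote_plus(query))
print(url)
# https://www.google.com/search?q=pork+safe+temperature&client=firefox-b-d&udm=14
```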
My guess is that Google and OpenAI are eyeing each other, waiting to see who does this first.
Why would that work? It's a proven business model. Example: I use LLMs for product research (e.g. which washing machine to buy). Retailer pays if link to their website is included in the results. Don't want to pay? Then redirect the user to buy it on Walmart instead of Amazon.
People who are aware of that and care enough to change consumption habits are an inconsequential part of the market.
Combine this with Apple pushing on-device inference and making it easy, and anything like ads will probably kill hosted LLMs for most consumers.
This is a new line of business that provides them with more ad space to sell.
If the overview becomes a trusted source of information, then all they need to do is inject ads into the overviews. They already sort of do that. Imagine it as a sort of text-based product placement.
You might think that's the correct way to do it, but there is likely much more to it than it seems.
If it wasn't tricky at all you'd bet they would've done it already to maximize revenue.
It never will. By disincentivizing publishers they're stripping away most of the motivation for the legitimate source content to exist.
AI search results are a sort of self-cannibalism. Eventually AI search engines will only have what they cached before the web became walled gardens (old data), and public gardens that have been heavily vandalized with AI slop (bad data).
Google search, as others have mentioned in this thread, increasingly fails to give me high-quality material anyway. Mostly it's just pages of SEO spam. I prefer that the LLM eat that instead of me (just spit back up the relevant stuff, thankyouverymuch).
Honestly though, increasingly the internet for me is 1) a distraction from doing real work 2) YouTube (see 1) and 3) a wonderful library called archive.org (which, if I could grab a local snapshot would make leaving the internet altogether much, much easier).
- Hobbyist site
- Forum or UGC
- Academic/gov
- Quality news which is often paywalled
Most of that stuff doesn't depend on ad clicks. The things that do depend on ad clicks are usually infuriating slop. I refuse to scroll through three pages of BS to get to the information I want.
We know they aren't oracles and come up with a lot of false information in response to factual questions.
If you do a Google (or other engine) search, you have to invest time pawing through the utter pile of shit that Google ads created on the web. Info that's hidden under reams of unnecessary text, potentially out of date, potentially not true; you'll need to evaluate a list of links and, probably, open multiple of them.
If you do an AI "search", you ask one question and get one answer. But the answer might be a hallucination or based on incorrect info.
However, a lot of the time, you might be searching for something you already have an idea of, whether it's how to structure a script or what temperature pork is safe at; you can use your existing knowledge to assess the AI's answer. In that case the AI search is fast.
The rest of the time, you can at least tell the AI to include links to its references, and check those. Or its answer may help you construct a better Google search.
Ultimately search is a trash heap of Google's making, and I have absolute confidence in them also turning AI into a trash heap, but for now it is indeed faster for many purposes.
Remember the past scandals with Google up/downranking various things? This isn't a new problem. With regard to how the average person gets information, Google doesn't really have more control, because people aren't clicking through as much.
First, in the pre-training stage, humans curate and filter the data that's actually used for training.
Then, in the fine-tuning stage, people write ideal examples to teach task performance.
Then there is reinforcement learning from human feedback (RLHF), where people rank multiple variations of the answer an AI gives, and that's part of the reinforcement loop.
So there is really quite a bit of human effort and direction that goes into preventing the garbage-in, garbage-out type situation you're referring to.
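The ranking step usually works by training a reward model on those human preference pairs. A minimal sketch of the standard Bradley-Terry pairwise loss used for this (the textbook form, not any particular vendor's pipeline; function and variable names are mine):

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry pairwise loss: -log(sigmoid(r_chosen - r_rejected)).

    The loss shrinks as the reward model scores the human-preferred
    answer further above the rejected one, pushing the model to
    reproduce the human rankings.
    """
    margin = reward_chosen - reward_rejected
    return -math.log(1 / (1 + math.exp(-margin)))

# A larger margin in favour of the preferred answer gives a smaller loss.
print(preference_loss(2.0, 0.0) < preference_loss(0.5, 0.0))  # True
```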
The AI overview sucks but it can't really be a lot worse than that :)
People will go to museums to see how complicated the pre-AI era was.
~57% of their revenue is from search advertising. How do they plan on replacing that?
Seriously, Futurama and Cyberpunk and 1984 were all supposed to be warnings... not how-to manuals.
Maybe we can type the commands, but that is also quite slow compared with tapping/clicking/scrolling etc.
But they're not often confidently wrong like AI summaries are.
I noticed Google's new AI summary lets me click on a link in the summary, and the links are posted to the right.
Those clicks are available, might not be discovered yet, curious though if those show up anywhere as data.
Google being able to create summaries off actual web search results will be an interesting take compared to other models trying to get the same done without similar search results at their disposal.
The new search engine could be Google doing the search and compiling the results for us the way we do manually.
And may get them in some anti-trust trouble once publishers start fighting back, similar to AMP, or their thing with Genius and song lyrics. Turns out site owners don't like when Google takes their content and displays it to users without forcing said users to click through to the actual website.
But AI as a product most certainly does! I was trying to figure out why a certain AWS tool stopped working, and Gemini figured it out for me. In the past I would have browsed multiple forums to figure it out.