Google says AI in Search is driving more queries and higher quality clicks

35 points by thm · 50 comments · 8/6/2025, 5:32:42 PM · blog.google ↗

Comments (50)

extr · 2h ago
I can believe this. A lot of my google search usage now is something like:

> "what is the type of wrench called for getting up into tight spaces"

> AI search gives me an overview of wrench types (I was looking for "basin wrench")

> new search "basin wrench amazon"

> new search "basin wrench lowes"

> maps.google.com "lowes"

Notably, the information I was looking for was general knowledge. The only people "losing out" here are people running SEO-spammish websites that themselves (at this point) are basically hosting LLM-generated answers for me to find. These websites don't really need to exist now. I'm happy to funnel 100% of my traffic to websites that are representing real companies offering real services/info (ship me a wrench, sell me a wrench, show me a video on how to use the wrench, etc).

thewebguyd · 2h ago
> The only people "losing out" here are people running SEO-spammish websites that themselves (at this point) are basically hosting webpages containing LLM-generated answers for me to find.

Agreed. The web will be better off for everyone if these sites die out. Google is what brought them into existence in the first place, so I find it funny that Google is now going to be one of the ones helping to kill them. Almost as if they accidentally realized SEO got out of control and now have to fix their own mistake.

extr · 1h ago
At one point these SEO pages were in fact providing a real service, and you could view them as a sort of "manual", prototypical, distributed form of AI: millions of people trying to figure out what information was valuable, hosting webpages to satisfy the demand for that info, and getting rewarded for doing so. It obviously went too far, but at one point it did make sense to let these websites proliferate. I know that without AI, I probably would have just clicked on the first link that said "types of wrenches" and read a little bit. I probably would have gotten my answer; it just wouldn't have been quite as laser-targeted to my exact question.
thewebguyd · 36m ago
True, in the early days these sites were genuinely helpful. The monetization model was a little different, though, which I think is what kept them useful. You'd use the content just to drive traffic, which would result in clicks on your banner ads, etc.

Then "content marketing" took over, and the content itself was now also used to sell a product or service, sort of an early form of influencer marketing and that is when I think it all started to go down hill. We stopped seeing the more in depth content which actually taught something, and more surface level keywords that were just used to drive you to their product/service.

OTOH, the early web was also full of niche forums, most viewable without an account and indexable, on about any topic you could imagine, where you could interact with knowledgeable folks in that niche. Google would have been more helpful to users by surfacing more of those forums vs. the blogs.

Those forums are IMO the real loss here. Communities have moved to Discord or other closed platforms that don't appear on the web, and many now require accounts or even invitations just to read.

nicbou · 38m ago
Hard disagree. I put a great deal of work into my website, putting hard-earned information on the internet for the first time. Now Google reaps all the value I create without so much as a "thank you".
awongh · 31m ago
Unfortunately, the new victim of LLMs and the way they distribute knowledge is the specialized content website.

Now an LLM just knows all the content you painstakingly gathered on your site. (It likely also collected that content from other hard-to-find sites across the internet.)

The original web killed off the value of a certain kind of knowledge (encyclopedias, etc.) and LLMs will do the same.

There are plenty of places to place the blame, but this is a function of any LLM and a fundamental part of how LLMs work, not just a problem created and profited from by Google. Take the open-weight models, for example, where no one is actually profiting directly.

uncertainrhymes · 2h ago
From the article:

> People are also more likely to click into web content that helps them learn more — such as an in-depth review, an original post, a unique perspective or a thoughtful first-person analysis

So... not the blog spam that was previously prioritized by Google Search? It's almost as if SEO had some downsides they are only just now discovering.

jmathai · 1h ago
Well:

1) Clicking on search results doesn't bring $ to Google and takes users off their site. Surely they're thinking of ways to address this. Ads?

2) Having to click off to another site to learn more is really a deficiency in the AI summary. I'd expect Google would rather you go into AI Mode, where they control the experience and have more opportunities to monetize. Ads?

We are in the "early uber" and "early airbnb" days ... enjoy it while it's great!

hiAndrewQuinn · 2h ago
SEO and quality content are goals that should slowly align with time, so I consider this convergence very welcome.
nerdjon · 2h ago
Yeah, I really don't believe this is the case, especially when we had a report recently saying clicks are down.

It has become shockingly common to see people sharing a screenshot of an AI response as evidence to back up their argument. I was once busy with something, so I asked my partner if he could look something up for me, and he confidently shared a screenshot of an AI response from Google. It was of course completely wrong, and I had to do my own searching anyway (annoyingly needing to scroll past and ignore the AI response that kept trying to tell me the same wrong information).

We have to remember that Google is incentivized to keep people on Google. Their ability to just summarize stuff, instead of getting people off of Google as quickly as possible, is a gold mine for them; of course they are going to push it as hard as possible.

9rx · 2h ago
> especially when we had a report recently saying clicks are down.

Isn't that expected from "higher quality clicks"?

amarcheschi · 2h ago
If I search "why is rum healthy", ai overview tells this, which is... Laughable: While not a health drink, moderate consumption of rum may offer some potential benefits, primarily due to its antioxidant content and potential to improve circulation and reduce stress. Darker rums, in particular, contain higher levels of antioxidants from the aging process in wooden barrels, which can help neutralize free radicals. Additionally, rum may have a relaxing effect and can be a social lubricant, potentially reducing stress and promoting relaxation when consumed in moderation.
nerdjon · 1h ago
So I was curious. In a normal Google search (with the AI summary), I put in "why is rum healthy" and got this, followed by a list of benefits: "Rum, when consumed in moderation, may offer a few potential health benefits. These include possible positive effects on heart health due to its potential to increase HDL (good) cholesterol, and the presence of antioxidants in darker rums, which may be beneficial."

But if I just simply remove the "why" it clearly states "Rum is an alcoholic beverage that does not have any significant health benefits."

Man I love so much that we are pushing this technology that is clearly just "garbage in, garbage out".

Side Note: totally now going to tell my doctor I have been drinking more rum next time we do blood work if my good cholesterol is still a bit low. I am sure he is going to be thrilled. I wonder if I could buy rum with my HSA if I had a screenshot of this response... (\s if really necessary)

mattmaroon · 1h ago
Well, both of those are arguments humans have repeated quite a bit. The first one is pretty weak (and you can guess who funded the “science” behind it) but it is believed by many.

Asking AI to tell reality from fiction is a bit much when the humans it gets its info from can’t, but this is at least not ridiculous.

nerdjon · 1h ago
> Asking AI to tell reality from fiction is a bit much when the humans it gets its info from can’t, but this is at least not ridiculous.

I agree with that, but the problem is that it is being positioned as a reliable source of information, and is being treated as such. Google's disclaimer, "AI responses may include mistakes. Learn more", only shows up if you click the button to expand the response, is in smaller, light-gray text, and is clearly overshadowed by the button with lights rotating around it inviting a deep dive.

The problem is just how easy it is to "lead on" one of these models. Phrasing a search as "why is rum healthy" implies that I already think it is healthy, so of course it plays into that, and that is why this is so broken. But "is rum healthy" actually produces a more factual answer:

> Rum is an alcoholic beverage that does not have any significant health benefits. While some studies have suggested potential benefits, such as improved blood circulation and reduced risk of heart disease, these findings are often based on limited evidence and have not been widely accepted by the medical community.

extr · 2h ago
Why is that laughable? Rum isn't a health drink, but if you were looking for information to support the case that it has some health benefits (which is literally the search term)...seems like a reasonable answer. What did you expect? A moralistic essay on how alcohol is bad?
apwell23 · 1h ago
there are no antioxidant health benefits from rum. how is making stuff up reasonable?
nsonha · 55m ago
people make stuff up and post it online. You will find made-up shit with or without AI for that kind of query. So yes, it's reasonable that AI exposes you to the real Internet, and it's doing, at worst, as good a job as search engines.
apwell23 · 38m ago
> reasonable that AI exposes you to the real Internet,

Any response will be 'reasonable' by that standard.

pessimizer · 1h ago
A lot of people are desperate for AI to lecture to them from a position of authority, consider it broken when it doesn't, and start praying to it when it does.

edit: AI doesn't even have a corrupting, disgusting physical body, of course it should be recommending clean diets and clean spirits!

pryelluw · 2h ago
“Our data shows people are happier with the experience and are searching more than ever as they discover what Search can do now.”

One can also interpret this as search was such shit that the summaries are allowing users to skip that horrible user experience.

They don’t care about discoverability. It’s all ads as quickly as possible. Coming soon: ad links in summaries. That’s what they’re getting at here.

panarchy · 1h ago
And they're searching more than ever because Google is failing to actually serve useful content that the user was looking for.
akazantsev · 42m ago
True. Googled "how to auto switch dark theme bootstrap". AI says in "versions 5.3 and newer, you can leverage the data-bs-theme attribute and JavaScript." and shows `data-bs-theme="auto"`.

This attribute exists, but this value comes from a bootstrap plugin that you have to install separately. It generated quite a few clicks and high-quality searches from me.
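
For what it's worth, the gap is small: Bootstrap 5.3 really does define the `data-bs-theme` attribute, but as far as I know its CSS only reacts to whatever value you set, and nothing in core watches the OS preference for you. A minimal hand-rolled sketch of that glue (plain browser TypeScript, assuming no plugin at all, and not the bare `"auto"` value the AI suggested) looks roughly like this:

  // Map the OS colour-scheme preference onto Bootstrap 5.3's data-bs-theme attribute.
  // Bootstrap's CSS only reacts to the attribute value; tracking prefers-color-scheme
  // is up to you (or a plugin/snippet).
  const darkQuery = window.matchMedia("(prefers-color-scheme: dark)");

  function applyPreferredTheme(): void {
    // Setting the attribute on <html> switches every Bootstrap component at once.
    document.documentElement.setAttribute(
      "data-bs-theme",
      darkQuery.matches ? "dark" : "light",
    );
  }

  applyPreferredTheme(); // on page load
  darkQuery.addEventListener("change", applyPreferredTheme); // when the OS setting flips

If I remember right, Bootstrap's own docs ship a similar (larger) snippet for their theme toggler; the point is just that the attribute alone doesn't give you "auto" behaviour.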

programmertote · 2h ago
I can't say for others, but this is what I do since Google integrated AI into the search results. About 80% of the time, I just type a question, read the AI summary, and stop there. For the other 20% or so, when I believe deep diving is important, I scroll through results on the first page and click on a few of them to find out the "facts" myself.

The latter is what I used to do before the AI summary was a thing, so I would logically assume it reduces clicks to individual sites?

drudolph914 · 1h ago
I imagine it depends on the kind of search people are making.

if I just need a basic fact or specific detail from an article, and being wrong has no real world consequences, I'll probably just gamble it and take the AI's word for it most of the time. Otherwise I'm going to double check with an article/credible source

if anything, I think AI mode from google has made it easier to find direct sources for what I need. A lot of the time, I am using AI for "tip of the tongue" type searches. I'll list a lot of information related to what I am trying to find, and AI mode does a great job of hunting it down for me

ultimately though, I do think some old aspects of google search are dying - some good, some bad.

Pros: I don't feel the need to sift through blog spam, I don't need to scroll past paid search results, and I can avoid the BS part of an article where someone goes through their entire life story before the actual content (I'm talking about things like cooking websites)

Cons: Google is definitely going to add ads to this tool at some point, some indie creators on the internet will have a harder time getting their name out.

my key takeaway from all this is that people will only stop at your site if they think it has something to offer that the AI can't. and this isn't new. people have been stealing blog content and turning it into videos forever. people will steal paid tutorials and release the content for free on a personal site. people will basically take content from site-X and repost it in a more consumable format on site-Y. and this kind of theft is so obvious, and no one liked seeing the same thing reposted a 1000 times. I think this is a win long term

nblgbg · 39m ago
I always wonder about this: What happens to their ads business? Also, what's the incentive for websites to provide data to Google if they’re not getting the incoming clicks? The generative approach seems to disincentivize both, right?
neilv · 1h ago
As a long-time AI+HCI person, I have mixed feelings about "AI", but just last night I was remarking to colleagues/friends that even I have mostly stopped clicking through from Google searches. The "AI" summary now usually plagiarizes a good enough answer.

I'm sure Google knows this, and also knows that many of these "AI" answers wouldn't pass any prior standard of copyright fair use.

I suspect Google were kinda "forced" into it by the sudden popularity of OpenAI-Microsoft (who have fewer ethical qualms) and the desire to keep feeding their gazillion-dollar machine rather than have it wither and become a has-been.

"If we don't do it, everyone else will anyway, and we'll be less evil with that power than those guys." Usually that's just a convenient selfish rationalization, but this time it might actually be true.

Still, Google is currently ripping off and screwing over the Web, in a way that they still knew was wrong as recently as a few years ago, pre-ChatGPT.

HWR_14 · 1h ago
Google News was definitely doing this level of "summary" before ChatGPT. I don't think OpenAI-MS have fewer ethical qualms, just that Google had more recent memories of the negative consequences.
artninja1988 · 2h ago
Isn't editorializing the titles against the rules?
LeoPanthera · 2h ago
Complaining about the submission is also against the guidelines, but what are you gonna do.
gundmc · 2h ago
Was the post edited? It looks like the submission title is exactly the blog post title except with "Google Says" appended.
internetter · 2h ago
"Google says" is editorializing. When others submit content from my blog they do not say "Evan Boehs says," they just take my title. Sometimes this leads to odd titles which I'm sure you notice from time to time, like product annoucements might be "Filibuster 3" and you're like "well what is Filibuster" but such is policy.
panarchy · 2h ago
Okay, now compare it back to when Google search was actually good, in like 2006, before it would serve you barely tangentially related crap and before it was optimized to prioritize spam garbage that could have been written by a monkey on a typewriter with a finite amount of time.
BeFlatXIII · 2h ago
Has the AI delisted geeks4geeks? That'd be a massive improvement.
andy99 · 2h ago
Counterpoint from yesterday.

https://news.ycombinator.com/item?id=44798215

From that article

  Mandatory AI summaries have come to Google, and they gleefully showcase hallucinations while confidently insisting on their truth. I feel about them the same way I felt about mandatory G+ logins when all I wanted to do was access my damn YouTube account: I hate them. Intensely.
But why listen to a third party when you can hear it from the horse's mouth?
pollinations · 1h ago
Whether you believe the article or not, the point you posted seems orthogonal to what Google is saying.

They're not claiming anything about the quality of AI summaries. They are analyzing how traffic to external sites has been affected.

andy99 · 1h ago
The first paragraph in the article is

  With AI Overviews and more recently AI Mode, people are able to ask questions they could never ask before. And the response has been tremendous: Our data shows people are happier with the experience and are searching more than ever as they discover what Search can do now.
kotaKat · 2h ago
I’m sure it is, Google, but can you at least give me a warning before you pull out and finish on my back this time when you release the next even more invasive portion of your AI assault on unwitting, unconsenting users? Thanks.

I’m sick of having to feel violated every step I take on the Web these days.

yifanl · 2h ago
Keep in mind that turning off typo corrections would also drive more queries and higher-quality clicks.
ms7892 · 2h ago
Any concrete data to support this claim by Google?
goopypoop · 1h ago
seeing how other people use search engines shakes my paradigms
DataDaemon · 2h ago
You will lose 90% of your traffic and you will be happy [meme]
scudsworth · 2h ago
seems like marketing piffle tbh
agentultra · 2h ago
... because they make it unavoidable and default?
bgwalter · 1h ago
Liz Reid staked her career on "AI" working in search. Lo and behold, a blog post by her confirms that "AI" is working.

I've seen many outrageously wrong summaries that were sometimes contradicted by articles on the first page of regular search results. Are people happy with the slop? Maybe, but I could see people getting bored of it very quickly. There is already a healthy comment backlash against ChatGPT-generated voice-over narration in YouTube videos.

bediger4000 · 3h ago
Is there any evidence that Google is telling the truth? Because this sounds like bullshit.
inetknght · 2h ago
You think companies would do that? Just go on the internet and tell lies?!
caconym_ · 2h ago
Based on my own usage patterns I don't think this is too implausible. When I do use an LLM chatbot for a "search", I'm almost always gathering initial information to use in one or more traditional search queries to find authoritative sources.

It does kind of contradict my own assumption that most people just take what the chatbot says as gospel and don't look any deeper, but I also generally think it's a bad idea to assume most people are stupid. So maybe there's a bit of a contradiction there.

nsonha · 50m ago
> almost always gathering initial information to use in one or more traditional search queries to find authoritative sources

For me at least with Perplexity, Grok and ChatGPT, all results come back with citations in every paragraph, so I haven't had to do that.

bediger4000 · 2h ago
Thank you. That's different than my use of AI summaries, which is "ignore them". I know that I want definitive info, so I look for deeper info than a summary immediately.

But I also share your assumption about "most people".