The librarian immediately attempts to sell you a vuvuzela

454 rkaveland 326 6/7/2025, 5:04:49 PM kaveland.no ↗

Comments (326)

karel-3d · 1d ago
The money situation is what gives me pause with LLMs.

The amount of money that's burned on this is giant; those companies will need to make so much money to have any possibility of a return. The idea is that we will all spend more money on AI than we spend on phones, and we will spend it on those companies only... I don't know, it just doesn't add up.

As a user it's a great free ride though. Maybe there IS such a thing as a free lunch after all!

snickell · 10h ago
What scares me is that the obvious pool of money to fund the deficit in the cost of operating LLMs comes from the most subtle native advertising imaginable. Can you resist ads where, say, AirBnB pays OpenAI privately to “dope” the o3 hyperspace such that AirBnB is moved imperceptibly closer to tokens like value and authentic??

How much would AirBnB pay for the intelligence everyone gets all their info from having a subtle bias like this? Sliiightly more likely to assume folks will stay in airbnbs vs a hotel when they travel, sliiightly more likely to describe the world in these terms.

How much would companies pay to directly, methodically and undetectably bias “everyone’s most frequent conversant” toward them?

john-h-k · 8h ago
> Can you resist ads where, say, AirBnB pays OpenAI privately to “dope” the o3 hyperspace such that AirBnB is moved imperceptibly closer to tokens like value and authentic??

This would be a very impressive technical feat

JimDabell · 2h ago
Anthropic demoed something similar with Golden Gate Claude a year ago:

https://www.anthropic.com/news/golden-gate-claude
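
A rough sketch of the underlying idea (this is not Anthropic's published method; the model, layer, steering strength, and direction vector below are all placeholders, with the vector random just so the snippet runs): activation steering adds a scaled "feature direction" to one layer's hidden states at inference time, which biases which tokens the model favors downstream.

    # Hedged sketch of activation steering via a PyTorch forward hook.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"                 # stand-in for any causal LM
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    layer_idx = 6                       # which transformer block to steer
    strength = 4.0                      # how hard to push along the direction
    direction = torch.randn(model.config.hidden_size)
    direction /= direction.norm()       # a real attempt would use a learned concept direction

    def steer(module, inputs, output):
        # output[0] holds the block's hidden states: (batch, seq, hidden).
        # Nudging every position toward the direction biases later token choices.
        return (output[0] + strength * direction,) + output[1:]

    handle = model.transformer.h[layer_idx].register_forward_hook(steer)
    ids = tok("The best way to find a place to stay is", return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=30, do_sample=False)
    print(tok.decode(out[0], skip_special_tokens=True))
    handle.remove()                     # un-steer the model

The unsettling part is that nothing in the output marks it as steered; only someone with access to the weights, or a before/after comparison, could tell.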

autoexec · 16h ago
> As a user it's a great free ride though. Maybe there IS such a thing as a free lunch after all!

If you consider the massive environmental harm AI has caused and continues to cause, the people whose work has been stolen to create it, the impacts on workers and salaries, and the abuses AI enables, that free lunch starts looking more expensive.

umvi · 14h ago
> the people whose work has been stolen to create it

"Stolen" is kind of a loaded word. It implies the content was for sale and was taken without payment. I don't think anyone would accuse a person of stealing if they purchased GRRM's books, studied the prose and then used the knowledge they gained from studying to write a fanfic in the style of GRRM (or better yet, the final 2 books). What was stolen? "the prose style"? Seems too abstract. (yes, I know the counter argument is "but LLMs can do more quickly and at a much greater scale", and so forth)

I generally want less copyright, not more. I'm imagining a dystopian future where every article on the internet has an implicit huge legal contract you enter into like "you are allowed to read this article with your eyeballs only, possibly you are also allowed to copy/paste snippets with attribution, and I suppose you are allowed to parody it, but you aren't allowed to parody it with certain kinds of computer assistance such as feeding text into an LLM and asking it to mimic my style, and..."

autoexec · 14h ago
AI has been trained on pirated material and that would be very different from someone buying books and reading them and learning from them. Right now it's still up to the courts what counts as infringing but at this point even Disney is accusing AI of violating their copyrights https://www.nytimes.com/2025/06/11/business/media/disney-uni...

AI outputs copyrighted material: https://www.nytimes.com/interactive/2024/01/25/business/ai-i... and they can even be ranked by the extent to which they do it: https://aibusiness.com/responsible-ai/openai-s-gpt-4-is-the-...

AI is getting better at data laundering and hiding evidence of infringement, but ultimately it's collecting and regurgitating copyrighted content.

astrange · 14h ago
> at this point even Disney is accusing AI of violating their copyrights

"even" is odd there, of course Disney is accusing them of violating copyright, that's what Disney does.

> AI is getting better at data laundering and hiding evidence of infringement, but ultimately it's collecting and regurgitating copyrighted content.

That's not the standard for copyright infringement; AI is a transformative use.

Similarly, if you read a book and learn English or facts about the world by doing that, the author of the book doesn't own what you just learned.

kod · 12h ago
Facts aren't copyrightable. Expression is. LLMs reproduce expression from the works they were trained on. The way they are being trained involves making an unlicensed reproduction of works. Both of those are pretty straightforwardly infringement of an exclusive right.

Establishing an affirmative defense that it's transformative fair use would hopefully be an uphill battle, given that it's commercial, using the whole work, and has a detrimental effect on the market for the work.

yencabulator · 11h ago
> AI is a transformative use.

Reproducing a movie still well enough that I honestly wouldn't know which one is the original is transformative?

JimDabell · 2h ago
That’s not “data laundering and hiding evidence of infringement” though.

You’re talking about overt infringement, the GP was talking about covert infringement. It’s difficult to see how something could be covert yet not transformative.

MacsHeadroom · 10h ago
The still is not transformative but the model reproducing it is obviously transformative. Other general purpose tools can be used to infringe and yet are non-infringing as well.
antihipocrat · 8h ago
If I watch a movie, then draw a near-perfect likeness of the main character from my very good memory, put it on a t-shirt, and sell the t-shirt, that is grounds for a copyright violation claim if the source isn't yet in the public domain (not guaranteed, but open to a lawsuit).

If I download all the content from a website whose use policy states that all content is owned by that website and can't be resold, then allow my users to query this downloaded data and receive a detailed summary of all related content, and sell that product, perhaps that is a violation of the use policy.

None of this has been properly tested in the courts yet. Large payments have already been made to Reddit to avoid it, likely because Reddit has the means to fight this in court. My little blog, though, is fair game, because I can't afford to engage.

yencabulator · 7h ago
For sure, it's rich people playing "rules for thee but not for me." What's interesting is that we'll discover on which side of the can-afford-to-enforce-its-copyright boundary the likes of the NYTimes fall.
quantified · 14h ago
Stolen doesn't imply anything is for sale, does it? Most things that are stolen are not for sale.
strangattractor · 11h ago
I think there is a case to be made that AI companies are taking the content, providing people with a modified version of that content, and not necessarily providing references to the original material.

Much of the content people create is created to generate revenue. They are denied that revenue when people don't go to their site. One might interpret that as theft. In the case of GRRM's books, I would assume they were purchased and the author received the revenue from the sale.

GuinansEyebrows · 13h ago
> It implies the content was for sale and was taken without payment

that's literally what happened in innumerable individual cases, though.

skywhopper · 11h ago
Yes, there are ethical differences between an individual doing things by hand and a corporation funded by billions of investor dollars doing an automated version of that thing at many orders of magnitude greater scale.

Also, LLMs don’t just imitate style, they can be made to reproduce certain content near-verbatim in a way that would be a copyright violation if done by a human being.

You can excuse it away if you want with reductio ad absurdum arguments, but the impact is distinctly different, and calls for different parameters.

JimDabell · 2h ago
> Using ChatGPT is not bad for the environment

https://andymasley.substack.com/p/individual-ai-use-is-not-b...

> What’s the carbon footprint of using ChatGPT?

https://www.sustainabilitybynumbers.com/p/carbon-footprint-c...

kulahan · 12h ago
The silver lining on this very dark cloud is that it seems to have renewed interest in nuclear power, though that was inevitable with the coming climate crisis I suppose.
yencabulator · 11h ago
At a time when solar & batteries were just getting great, nah.
jay_kyburz · 9h ago
Haha, I would have thought the reckless cuts of DOGE or the willingness of the current US administration to rely on AI for decision making would have driven home exactly why governments can't be trusted to manage nuclear.

It's just too dangerous to leave it in the hands of people who don't believe in science, and value money, power, and ideology more than anything else.

It's happening now, and there is nothing to stop it from happening again in the future.

baggy_trough · 16h ago
What is this massive environmental harm? That sounds like hyperbole.
ksenzee · 16h ago
They’re restarting coal-fired power plants to run AI datacenters. I don’t know what your personal threshold is for “massive” environmental harm, but that meets mine.
baggy_trough · 16h ago
What's a specific example of that?
svnt · 15h ago
It has been widely reported. Here is an example, not of a coal-fired plant, but of diesel and gas mobile power sources. If you spend time looking you will have no trouble finding sources.

> Last week, the Environmental Protection Agency (EPA) issued a rule clarification allowing the use of some mobile gas and diesel power sources for data centers. In a statement accompanying the rule, EPA Administrator Lee Zeldin claimed that the Biden administration's focus on addressing climate change had hampered AI development.

> "The Trump administration is taking action to rectify the previous administration's actions to weaken the reliability of the electricity grid and our ability to maintain our leadership on artificial intelligence," Zeldin said. "This is the first, and certainly not the last step, and I look forward to continue working with artificial intelligence and data center companies and utilities to resolve any outstanding challenges and make the U.S. the AI capital of the world."

https://www.newsweek.com/ai-race-fossil-powered-generators-a...

baggy_trough · 15h ago
I spent some time and could not find a specific example. You haven't shown one here, either.
svnt · 15h ago
coryrc · 14h ago
Trump stooge activities cannot be blamed on data centers. The relevant technical authorities did not want this.
yencabulator · 11h ago
Oh yes environmentalism is clearly why the data center owners themselves are running generators 24/7.
svnt · 14h ago
What are you talking about? This is literally the only mechanism to allow coal-fired plants to avoid sunsetting on schedule.

Who are the “relevant technical authorities”?

coryrc · 13h ago
"DOE issued the emergency order without a request from the plant owner, transmission provider or grid operator"
svnt · 10h ago
Without getting into the fact that none of those people operate datacenters, what does that mean to you? You think Trump and Co are doing this as a purely political move to irritate environmentalists?
baggy_trough · 15h ago
Sorry, but they simply are not.
jimstr · 14h ago
svnt · 14h ago
FTA:

> In another move, DOE on Tuesday said it was offering loan guarantees for coal-fired power plant projects, such as upgrading energy infrastructure to restart operations or operate more efficiently or at a higher output.

Please elaborate.

gknoy · 16h ago
Training AI models uses a large amount of energy (according to what I've read / headlines I've seen / etc), and increases water usage. [0] I don't have a lot to offer as proof, merely that this is an idea that I have encountered enough that I was surprised you hadn't heard of it. I did a very cursory bit of googling, so the quality + dodginess distribution is a bit wild, but there appear to be industry reports [2, page 20] that support this:

""" [G]lobal data centre electricity use reached 415 TWh in 2024, or 1.5 per cent of global electricity consumption.... While these figures include all types of data centres, the growing subset of data centres focused on AI are particularly energy intensive. AI-focused data centres can consume as much electricity as aluminium smelters but are more geographically concentrated. The rapid expansion of AI is driving a significant surge in global electricity demand, posing new challenges for sustainability. Data centre electricity consumption has been growing at 12 per cent per year since 2017, outpacing total electricity consumption by a factor of four. """

The numbers are about data center power use in total, but AI seems to be one of the bigger driving forces behind that growth, so it seems plausible that there is some harm.

0: https://news.mit.edu/2025/explained-generative-ai-environmen...
1: https://www.itu.int/en/mediacentre/Pages/PR-2025-06-05-green...
2: (cf. page 20) https://www.itu.int/en/ITU-D/Environment/Pages/Publications/...

coryrc · 14h ago
USA uses 21.3 TWh of petroleum per day for transportation. Even if AI was fully responsible for all data center usage (it is not even close) we're quibbling over 20 days of US transportation oil usage, which actually has devastating effects on the environment.
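
For scale, the arithmetic behind that comparison, using the 415 TWh/year all-data-centre figure quoted upthread and the 21.3 TWh/day transportation figure above (a back-of-the-envelope sketch, not a careful accounting):

    # Rough comparison using the two figures cited in this thread.
    datacenter_twh_per_year = 415.0     # ITU estimate, all data centres, 2024
    us_transport_twh_per_day = 21.3     # petroleum energy for US transportation

    days = datacenter_twh_per_year / us_transport_twh_per_day
    print(f"~{days:.0f} days of US transportation energy")   # ~19 days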

Data centers are already significant users of renewable electricity. They do not contaminate water in any appreciable amount.

astrange · 14h ago
There's an "AI is using all the water" meme online currently (especially on Bluesky, home of anti-AI scolds), which turns out to come from a study that counted hydroelectric power as using water.
baggy_trough · 15h ago
I agree that there is some incremental electricity usage. I do not think it can be characterized fairly as "massive environmental harm".
autoexec · 14h ago
As an example, Ren and his colleagues calculated the emissions from training a large language model, or LLM, at the scale of Meta’s Llama-3.1, an advanced open-weight LLM released by the owner of Facebook in July to compete with leading proprietary models like OpenAI's GPT-4. The study found that producing the electricity to train this model produced an air pollution equivalent of more than 10,000 round trips by car between Los Angeles and New York City. (https://news.ucr.edu/articles/2024/12/09/ais-deadly-air-poll...)

see also:

https://www.techrepublic.com/article/news-ai-data-centers-dr...

https://www.scientificamerican.com/article/a-computer-scient...

ahepp · 14h ago
> The study found that producing the electricity to train this model produced an air pollution equivalent of more than 10,000 round trips by car between Los Angeles and New York City.

I am totally on board with making sure data center energy usage is rational and aligned with climate policy, but "10k trips between LA and NY" doesn't seem like something that is just on its face outrageous to me.

Isn't the goal that these LLMs provide so much utility that they're worth the cost? I think it's pretty plausible that efficiency gains from LLMs could add up to 10k cross-USA trips' worth of air pollution.

Of course this excludes the cost of actually running the model, which I suspect could be far higher.

root_axis · 14h ago
> 10,000 round trips by car between Los Angeles and New York City.

That seems like very low impact, especially considering training only happens once. I have to imagine that the ongoing cost of inference is the real energy sink.

autoexec · 13h ago
It doesn't happen only once. It happened once for one version of one model, but every model (and there are others much larger) has its own cost, and that cost is repeated with each version as models are continuously retrained.
sheiyei · 1d ago
<AIs are much better at responding to my intent, and they rarely attempt to sell me anything> YET.
Llamamoe · 22h ago
It's very possible that they never will, that instead the advertising will be so subtle nobody will be able to detect it: including phrases similar to what products, brands, and their actual ads use in positive contexts; sentences that don't mention products but make you think of them; being just slightly more likely to bring a brand up than its competitor, and a tiny bit more critical of the competitor; etc.

The goal isn't to have an ad->purchase, the goal is to make sure the purchase is more likely in the long term.

karaterobot · 19h ago
I agree they'd love to do that in theory, and it seems technically feasible. What gives me hope on that front is that marketers and advertisers (let alone the companies that pay them) have never shown the slightest capacity for that level of subtlety. The most sophisticated adtech today, produced by networks of massive data collection and analysis, ultimately just tries to shove as many loud, disruptive ads in your face as possible.

I think if you had this incredible technology that could manipulate language to nudge readers in the softest possible way toward thinking a little bit more about buying some product, so that in aggregate you'd increase sales in a measurable way that nobody would ever notice, it would quickly devolve into companies demanding the phrase "BUY MORE REYNOLDS GARBAGE BAGS!!!!!!!!" at least 7 times.

cgriswald · 16h ago
I've noticed this even with product placement in film and television. It's not enough that Super Agent X drives a Palmora Targon, they just have to pan, zoom, tilt the camera to include a shot of the Palmora logo perfectly centered in frame for a few seconds as the car careens off a cliff. I'm only surprised the protagonist isn't also talking about how well the car has served him with its <insert technical details> as he laments its loss while he parachutes to safety.
GLdRH · 11h ago
Community made me buy a Honda.
layer8 · 21h ago
I’m pretty sure it would be measurable. How else would advertisers pay for it? And given that advertisers would know about it, it would also be generally known. I wager that enough people and businesses would reject it, if it isn’t outright illegal in the first place.
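
One rough sketch of how that measurement could look from the outside (the prompts, brands, and counts below are entirely made up): sample the same neutral prompts from a baseline model and a suspect model, count how often a given brand appears, and run a simple two-proportion test on the mention rates.

    import math

    def mention_rate(completions, brand):
        hits = sum(brand.lower() in c.lower() for c in completions)
        return hits, len(completions)

    def two_proportion_z(hits_a, n_a, hits_b, n_b):
        # Standard two-proportion z-test on brand-mention rates.
        p = (hits_a + hits_b) / (n_a + n_b)
        se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
        return ((hits_a / n_a) - (hits_b / n_b)) / se if se else 0.0

    # Placeholder completions standing in for real samples from two models.
    baseline = ["stay at a hotel near the station"] * 92 + ["book an Airbnb instead"] * 8
    suspect = ["stay at a hotel near the station"] * 80 + ["book an Airbnb instead"] * 20

    z = two_proportion_z(*mention_rate(suspect, "airbnb"),
                         *mention_rate(baseline, "airbnb"))
    print(f"z = {z:.2f}")  # |z| much above 2 suggests the shift isn't chance

Of course, that only works if someone outside suspects the bias and has a clean baseline to compare against.
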
dwighttk · 20h ago
Maybe OpenAI will just have a range of products they manufacture and push
Llamamoe · 19h ago
Who says it has to be measurable in the output? It could be correlated with searches and purchases of fingerprinted users/demographics, or even just temporally.
autoexec · 16h ago
If it makes them money, businesses will do it even if it is outright illegal.
joegibbs · 22h ago
I think subliminal advertising is banned in quite a few countries - not sure about the US - so it might be a problem internationally. I know that here in Australia there was a big scare about it in the mid 2000s, some station was cutting 100ms ads into shows. Not sure about the efficacy of it though, I’m sure it would be better if you watched a whole ad.
Llamamoe · 19h ago
How are you going to prove that a few thousand weights among billions on a privately owned server were actually amplified or ablated post-training?
sumtechguy · 18h ago
Edward Bernays created a method to get people to buy/hate/like things. It is used all the time. It's manipulative and shockingly effective. You will never see it coming. It is used all the time, on everyone, for a plethora of subjects. Subliminal advertising, while mildly effective, cannot hold a candle to the Edward Bernays method of selling.
rustcleaner · 17h ago
Meme magic, is what I call it. Like how the stage magician moves whole islands for the crowd, TV magicians move whole populations by changing their internal representative models of reality.

This is the machine that magicians program for: https://www.youtube.com/watch?v=wo_e0EvEZn8

JambalayaJimbo · 19h ago
TV shows and movies often have subliminal ads. A character will use a specific type of phone for example.
inejge · 19h ago
That's not subliminal, that's plain old product placement.
usefulcat · 17h ago
> It's very possible that they never will, that instead the advertising will be so subtle nobody will be able to detect it.

I was going to write a rebuttal to this, about how more subtle forms of advertising are likely not very effective, and then I remembered subliminal advertising.

It's largely been banned (I think), but probably only because it's relatively easy to define and very easy to identify. In the case of LLMs, defining what they shouldn't be allowed to do "subliminally" will be a lot harder, and identifying it could be all but impossible without inside knowledge.

jandrese · 16h ago
Yeah, the laws against subliminal advertising were written in a rather knee jerk reaction to the creepiness of the entire concept instead of as a result of careful study and analysis.

How effective is it? We don't know, but there is nothing of potential value to lose so nobody really cared. Just ban it and move on.

sandy_coyote · 18h ago
This is credible simply because this is how advertising works. Product placement, free products for celebs, modern life awash in images that make us desire things.
Lu2025 · 22h ago
This gave me the creeps. Modern tech is good at opening up new dimensions of dystopian hell.
manosyja · 21h ago
Don’t worry, Doomguy will seal the inter-dimensional portal and kill the icon of sin
Melonai · 19h ago
I don't think Sam Altman enjoys being called that. :)
yencabulator · 9h ago
That still sounds like AIs attempting to sell you something, to me.
ToucanLoucan · 15h ago
> It's very possible that they never will

Oh come on.

Genuinely.

Come on.

Look at every single tech innovation of the last 20 years and say that again.

pcthrowaway · 1d ago
Remember, Google also didn't have ads interspersed with their search results for over 1̶2̶ 2 years
Cthulhu_ · 21h ago
And they were actually praised when they did start doing ads because they weren't as obtrusive as the existing heavy duty in-your-face Flash animations and they were relevant to a user.

It quickly turned Google into the biggest / most valuable internet company of all time ever, and it still wasn't enough for them.

I've had adblockers running for as long as I can remember so I'm blissfully unaware of how bad it is now... mostly. I don't have adblockers on my phone, and some pages are unusable.

throwaway290 · 20h ago
When there are too many ads, blame the website owners; as far as I know Google does not hijack websites to put in more ads.

Ads done right are the least bad way of supporting free stuff for people who don't want to pay the cost. But people with uBO punish all sites regardless of whether they do ads nicely or not.

You are right now writing in a thread about an upcoming future where promotion is embedded in the content, so that the content itself is one big ad disguised as whatever. Do you really think that's a better alternative to clearly delimited and unmistakable ads?

jkaptur · 23h ago
Google was founded in 1998 and you could buy ads on the search results page in 2000. https://googlepress.blogspot.com/2000/10/google-launches-sel...
JohnMakin · 22h ago
It’s crazy that on the web, when you point out that Google or Google products used to be much better in the past, someone will come out of nowhere to tell you it’s always been that way.

what is this instinct? anyone that’s over the age of 25 would know

krisoft · 21h ago
> what is this instinct?

"The rules were you guys weren't going to fact check."

The instinct is about pointing out factual inaccuracies. What they wrote is either correct, or not. If it is not, and someone knows better they can and should point that out.

If you, or some other commenter, have a fuzzy feeling that google is worse than it used to be you are free to write that. You are perfectly entitled to that opinion. But you can't just make up false statements and expect to be unchallenged and unchallengeable on it.

JohnMakin · 20h ago
Other than the fact the parent comment to this subthread is posting a literal factual inaccuracy regarding the history of ads on Google - it’s not just one guy’s “fuzzy feeling.” It’s been written about in so many thousands of words over the last two years and is the general sentiment across the tech space. It’s sort of the major reason big companies like ChatGPT, and smaller ones like Kagi, are trying to swoop in and fill this void. It’s fairly obvious to anyone paying attention.

You can sealion with posts like this all you want but every time someone counters a post like this with ample evidence it gets group downvoted or ignored. You are also making an assertion that you’re free to back with evidence, that google and google products are not noticeably worse than 10 years ago.

here’s one study that says yes, it is bad:

https://downloads.webis.de/publications/papers/bevendorff_20...

Since we don’t have a time machine and can’t study the Google of 2015, we have to rely on collective memory, don’t we? You proclaiming “it’s always been this way” and saying any assertion otherwise is false is an absolutely unfalsifiable statement. As I said, anyone over 25 knows.

Besides perusing the wealth of writing about this over the last two years or so, in which the tech world at large has lamented how bad search specifically has gotten, we also see market trends where people are increasingly seeking tools like ChatGPT and LLMs as a search replacement. Surely you, a thinking individual, could come to some pretty obvious conclusions as to why that might be, which is that Google search has got a lot worse. The language models are well known to make things up, and people still prefer them because search is somehow even less reliable and definitely more exhausting, and it was not always this way. If it was always this way, why are so many people turning to other tools?

krisoft · 9h ago
> Other than the fact the parent comment to this subthread is posting a literal factual inaccuracy regarding the history of ads on Google

Sounds like it should be very easy to counter their argument then.

For my education could you tell me which part of their message is inaccurate? The “Google was founded in 1998” or the “and you could buy ads on the search results page in 2000.” part?

> You are also making an assertion that you’re free to back with evidence, that google and google products are not noticeably worse than 10 years ago.

I did not make such an assertion. Where in my comment do you think I’m making that assertion?

> You proclaiming “it’s always been this way”

I’m sorry, but who are you quoting? Did you perhaps misclick which comment you wanted to respond to?

thaumasiotes · 20h ago
Except that jkaptur is the one making up false statements, and then providing "citations" that contradict him. I don't think an instinct to point out inaccuracies can explain that. There would have to be inaccuracies to point out first.
JohnMakin · 20h ago
If you believe stuff like this isn’t actual astroturfing, you must face that, from somewhere, there seems to exist a deeply ingrained belief among a subset of extremely vocal and argumentative people that Google is amazing; and if it isn’t, well, that’s just how the web is now (ignore the Google man behind the curtain that created the modern web in the first place); and if it’s not that, well, it’s always been this way (even if it hasn’t).

There is a very strong stance on this site against talking about astroturfing, and I understand it. But for the life of me, I cannot figure out where this general type of sentiment originates. I don’t know any Google enthusiasts and am not sure I’ve ever met one. It’s a fairly uncontroversial take on this website and in the tech world that Google search has worsened (the degree of which is debatable). Coming out and saying boldly “no it isn’t, you’re lying” is just crazy weird to me, and again I’m very curious where that sentiment comes from.

see some of the sibling and aunt/uncle comments in this thread to get at a little of what I’m talking about.

autoexec · 16h ago
I was a Google fan back when they first started and were just a search engine. Search engines like Yahoo and Excite became massively bloated and ad-filled while Google was clean and fast.

I wasn't a fan for very long. Google got creepy fast, and at this point their search is becoming useless, but for a short time I really thought Google was amazing and I was an enthusiast.

cgriswald · 16h ago
All I see here is someone making a claim and someone else making a different claim. They may have erroneously intended the claim in opposition, either missing or interpreting differently the 'interspersed' qualifier. Or, alternatively, they may believe when any ads appeared is more meaningful in the context of this discussion.

I think Google search has gone downhill tremendously, to the point of near uselessness, and I have been a Kagi subscriber for a while, but I don't see astroturf in this instance. Do you have other examples?

thaumasiotes · 19h ago
There was a pretty insane comment in this genre a month ago: https://news.ycombinator.com/item?id=43951164

> If Google [had been] broken up 20 years ago [...] [e]veryone would still be paying for email.

Some people don't have the foggiest idea what they're talking about. But I don't really see that as suggesting they're part of an organized campaign.

krisoft · 9h ago
> Except that jkaptur is the one making up false statements, and then providing "citations" that contradict him.

I believe I have covered that case in my comment. Let me quote the relevant part here for you: “What they wrote is either correct, or not. If it is not, and someone knows better they can and should point that out.”

That being said could you help me by pointing out the inaccuracy in jkaptur’s comment? It seems fairly simple and as far as I can see well supported by the source.

h2zizzle · 22h ago
Many people who post here are, were, or would like to be Googlers. Maybe not so much astroturfing as much as a kind of corporate hasbara (though maybe both).
thaumasiotes · 18h ago
> Maybe not so much astroturfing as much as a kind of corporate hasbara

What's the difference? In astroturfing, someone pays people to form an organization, claim to have no external support, and do some kind of activism.

In hasbara, the government of Israel pays people to not form an organization, claim to have no external support, and do various kinds of pro-Israel and pro-Jew activism. This looks like astroturfing with the major vulnerability of the no-external-support claim shored up.

h2zizzle · 16h ago
Fair. The main difference is that people here don't like it when you call it astroturfing.
Xss3 · 22h ago
But they weren't interspersed; they were in a sidebar.
thaumasiotes · 22h ago
Ads on the page aren't the same thing as ads interspersed with the results. The ads used to be in a sidebar, or in an inset with a different background color that appeared above all results.

Read your own link:

> For example, entering the query "buy domain" into the search box on Google’s home page produces search results and an AdWords text advertisement that appears to the right of Google’s search results

> Google’s quick-loading AdWords text ads appear to the right of the Google search results and are highlighted as sponsored links, clearly separate from the search results. Google’s premium sponsorship ads will continue to appear at the top of the search results page.

kristianc · 23h ago
That wasn’t out of benevolence, that’s because they hadn’t discovered the ads business model yet. The genie is well and truly out of the bottle now.
schmidtleonard · 17h ago
They always knew.

> The goals of the advertising business model do not always correspond to providing quality search to users.

- Sergey Brin and Lawrence Page, The Anatomy of a Large-Scale Hypertextual Web Search Engine

http://infolab.stanford.edu/~backrub/google.html

wizzwizz4 · 22h ago
They're on record as talking about how Google has better results because it doesn't have ads.
passwordoops · 22h ago
At that time, and it's what got them the market share. Once they achieved monopoly status, "Don't be Evil" was quietly replaced by "You call that an acceptable margin?!"
d_phase · 22h ago
You obviously haven't been A/B tested yet. I got very obvious advertisements in a super simple question I asked last week to ChatGPT. The question was "When was the last year it was really smokey in Canada" it answered in one paragraph, then gave me about 6 paragraphs of ads for air purifiers, masks etc.

I'd guess we're only 6-12 months out from a full advertisement takeover.

bandoti · 20h ago
I think it’s time folks dust off their library cards :)

Or support an open source AI model.

I stopped using ChatGPT when it started littering my conversation with emojis. It acts like one of those overzealous kids on Barney.

sheiyei · 22h ago
It was a quote, which I failed to format in this app I use.
Marazan · 21h ago
The simpler explanation is that ChatGPT is trained on webpages that have been SEO'd to death.

So you are just getting SEO'd pages (i.e ads) regurgitated to you.

bombcar · 20h ago
This is of course, absolutely, master sahib boss man.

Have you considered buying a ChatGPT filter/scrubber to clean your results? Only $9.99 a month! Not available in all areas, not legal in most of the world.

;) and </s>

jameshart · 18h ago
The race right now is to get your product embedded as highly recommendable in the training data sets the AIs are learning from.
conception · 19h ago
jiveturkey · 18h ago
I would wager that the most prevalent use of AI today is to sell you ads, whether through market analysis, campaign analysis, content optimization, or content generation.
reverendsteveii · 18h ago
thank you. The idea that this will be the one thing that doesn't get enshittified, when it's being so heavily pushed by the people who enshittified everything else, is frankly absurd.
marcosdumay · 18h ago
Yes, the question everybody awake is asking is how long until all the LLM corporate initiatives die? Because as useful as those things can be, they just can't do enough to justify that cost.

But there are free (to copy) ones, and smaller ones. And while those were built from the large, expensive models, it's not clear if people won't find a way to keep them sustainable. We have at minimum gained a huge body of knowledge on "how to talk like people" that will stay there forever for researchers to use.

troyvit · 18h ago
> We have at minimum gained a huge body of knowledge on "how to talk like people" that will stay there forever for researchers to use.

This is spot on. I think we'll be able to capitalize on other talents of "AI" once we recognize the big shift is done happening. It's like five years after the Louisiana Purchase: we have a bunch of new resources but we've barely catalogued them, let alone begun to exploit them.

> how long until all the LLM corporate initiatives die?

Sooner than I personally thought, and I place a lot of that with Apple. They've led the way in hardware that supports LLMs, and I believe (hope?) they'll eventually wipe out most hosted chat-based products, leaving the corporate players to build APIs and embedded products for search, tech support, images, etc. The massive amounts of capital going into OpenAI, Anthropic, etc., will ebb as consumer demand falls.

I hope for this because the question I keep asking is, how can our energy infrastructure sustain the huge demand AI companies have without pushing us even further into a climate catastrophe?

chasd00 · 12h ago
> This is spot on. I think we'll be able to capitalize on other talents of "AI" once we recognize the big shift is done happening. It's like five years after the Louisiana Purchase: we have a bunch of new resources but we've barely catalogued them, let alone begun to exploit them.

one thing about LLMs used as a replacement for search is that they have to be continually retrained or else they become stale. Let's say a hard recession hits and all the AI companies go out of business, but we're left with all these models on huggingface that can still be used. Then a new programming language hits the scene and it's a massive hit: how will LLMs be able to autocomplete and add dependencies for a language they've never seen before? Maybe an analogy could be asking an LLM to translate a written language you make up on the spot into English or another language.

crazygringo · 16h ago
> The amount of money that's burned on this is giant

It's big, but it's honestly not that big. Most importantly, costs will quickly come down as we realize the limits of the models, the algorithms are optimized and even more-dedicated hardware is built. There's no reason to think it isn't sustainable, it will add up just fine.

But yes, it will attract a ton of advertising, the same curve every service goes through, like Google Search, YouTube, Amazon, etc. Still, just like Google and Amazon (subtly) label sponsored results, I expect LLMs to do the same. I don't think ads will be built into the main replies, because people will quickly lose trust in the results. Rather they'll be fed into a separate prompt that runs alongside the main text, or interrupts it, the way ads currently do, and with little labels indicating paid content. But the ads will likely be LLM-generated.

Ferret7446 · 1h ago
Actually, I feel like most of the money will come from enterprises. Every company will need an LLM subscription to stay competitive. I think it's possible that consumers will get a free ride with a small amount of quota without ads.
exceptione · 14h ago

  > I don't think ads will be built into the main replies, 
  > because people will quickly lose trust in the results. 

The 'best' ads will be those the public doesn't recognize. Surf the internet without an ad blocker and you will die from a heart attack. This is a matter of conditioning users. It will take some time. Case in point: people already give up on privacy because "Google knows about everything already", which reflects a normalization of abuse, as we started from trust and norms ("don't be evil").

So, can they? yes. Will they? yes.

yencabulator · 9h ago
Product placement in movies and such has been a thing for a long time now. At best you can hope that your prompts will be classified as factual-vs-entertainment, and the product placement will only happen in the entertainment ones.
ToucanLoucan · 15h ago
> the same curve every service goes through

This is honestly why I struggle to get excited for anything in our industry anymore. Whatever it is it just becomes yet another fucking vector for ad people to shove yet more disposable shit in front of me and jingle it like car keys to see if I'll pull out a credit card.

The exception being the Steam Deck, though one could argue it's just a massive loss-leader for Steam itself and thus game sales (though I don't think that would hold up to scrutiny; it's pretty costly and it's not like Valve was hurting for business, but anyway), but yeah. LLMs will absolutely do the exact same, and Google's now fully given up on making search even decent, replacing it with shit AI nobody asked for that will do product placements any day now; I would bet a LOT of money on it.

jonplackett · 22h ago
They’re all investing with the assumption they can be the ‘winner’ and take all the spoils.

Maybe nvidia can be a winner selling shovels but it seems like everyone else will just be fighting each other in the massive pit they dug.

philipwhiuk · 20h ago
NVIDIA is already a winner selling shovels.

They don't need a winner, they want the race to continue as long as possible.

jraph · 20h ago
> Maybe there IS such a thing as a free lunch after all!

A free lunch that costs our environment though, which is a big caveat :-)

tartoran · 19h ago
The free lunch creates a lot of dependency on AI so when the lunch isn't free anymore it will bite hard.
Ferret7446 · 1h ago
Same as all technology. You missed the deadline to file that complaint by more than a couple hundred thousand years.
returningfory2 · 18h ago
The idea that LLMs are uniquely bad for the environment has been debunked. https://andymasley.substack.com/p/individual-ai-use-is-not-b...
jraph · 17h ago
I've already seen this.

I'm not convinced. This article focuses on individual use and how inconsequential it is, but it seems to me that it dismisses the training part (which it does mention) a bit too fast for my taste.

> it’s a one-time cost

No, it's not. AI companies constantly train new models, and that's where the billions of dollars they get go. It's only logical: they try to keep improving. What's more, the day you stop training new models, the existing models will "rot": they will keep working, but on old data; they won't be fresh anymore. The training will continue, constantly.

An awful quantity of hardware and resources are being monopolized where they could be allocated to something worthier, or just not allocated at all.

> Individuals using LLMs like ChatGPT, Claude, and Gemini collectively only account for about 3% of AI’s total energy use after amortizing the cost of training.

Yeah, we agree, running queries is comparatively cheap after amortizing the cost of training (though still 10 times more than a regular search query, if I'm to believe this article, and I have no reason not to). But there's no "after", as we've seen.

As long as these companies are burning billions of dollars, they are burning some correlated amount of CO2.

As an individual, I don't want to signal to these companies, through my use of their LLMs, that they should keep going like this.

And as AI is more and more pervasive, we are going to start relying on it very hard, and we are also going to train models on everything, everywhere (chat messages, (video) calls, etc). The training is far from being a one shot activity and it's only going to keep increasing as long as there are rich believers willing to throw shit-tons of money into this.

Now, assuming these AIs do a good job of providing accurate answers that you don't have to spend more time proofreading / double-checking (which I'm not sure they always do), the time we win is unfortunately not replaced by nothing. We are still in a growth economy; the time that is freed will be used to produce even more garbage, at an even faster rate.

(I don't like that last argument very much though; I'm not for keeping people busy at inefficient tasks just because, but this unfortunately needs to be taken into account - and that's, as a software developer, a harsh reality that also applies to my day-to-day job. My job is essentially to automate tasks for people so they can have more free time, because now the computers can do a bit more of their work. But as a species, we've not increased our free time. We've just made it more fast-paced and stressful.)

The article also mentions that there are other things to look into to improve things related to climate change, but the argument goes both ways: fighting against power-hungry LLMs doesn't prevent you from addressing other causes.

scrollaway · 19h ago
Maybe.

But to be honest, optimizing monstrously slow processes that cost weeks of human labour by automating them, that saves a ton of energy as well. It’s not zero sum, as the humans spend that energy elsewhere, but ideally they spend it on more productive things.

This calculus can very quickly offset whatever energy is wasted generating cartoon images of vuvuzelas.

jraph · 17h ago
> optimizing monstrously slow processes that cost weeks of human labour by automating them, that saves a ton of energy as well

Yes, I do agree with this. However, that's only good as long as there wasn't a better way of optimizing them, and assuming we wouldn't be better off getting rid of those costly processes altogether.

> ideally they spend it on more productive things

Same gotcha as mentioned in my other comment: "productive" in our growth economy often means "damaging to the environment", because we are collectively spending a lot of our time producing garbage, and that's not something we should really optimize. Most of us work a fixed number of hours, so it's not like we are doing ourselves any favor by optimizing our time in the end.

In another system, I wouldn't say. I'm generally for freeing up time for us so we can have better lives.

sheiyei · 16h ago
Humans will have the privilege of using that time to take out the trash and be taxi drivers for drug users
Xss3 · 22h ago
AI is in its early YouTube phase. Everyone loves that it's free and ad-free and that its algorithm's primary purpose is to serve relevant content, not profitable content; everyone knows it can't stay that way, and we are all waiting for the enshittification to kick in on the march to profitability.

The question is, will AI chat or search ever be profitable? What enshittification will happen on that road? Will AIs be interrupting conversations to espouse their love of nordvpn or raid shadow legends?

yuck39 · 16h ago
Many of the traditional SEO players are now figuring out how to game the system to get their customers to show up more frequently in LLM responses.

Once the pressure to turn a profit is high enough the big players surely won't just leave that money on the table.

The scary part is that even if we end up paying for "ad-free" LLM services how do we really know if it is ad-free? Traditional services are (usually) pretty clear on what is an ad and what isn't. I wouldn't necessarily know if raid shadow legends really is the greatest game of all time or if the model had been tuned to say that it is.

tartoran · 18h ago
I'm aware this is going to happen. But don't you think offline solutions will be more prevalent by the time OpenAI jacks up the costs? These companies have no real moats unless they start doing something social, so they have a network of captive audience or something like that.
Exoristos · 11h ago
Doesn't it seem suspect that a product with such massive investment is made available for little or no cost? Either it's garbage (you don't get AutoCAD for free: you get meme generators), and investors are getting soaked; or it's digital gold, and consumers are being lured in for unobvious exploitation.
thih9 · 13h ago
Speculation:

AI companies want the cultural shift, i.e. get everyone used to having their data, art, work, etc, turned into models. Plus they want the PR, i.e. AI agents to be seen as helpful, friendly and genuinely useful. They want this to happen fast and before legislators react too. Releasing for free seems safe and efficient.

ASalazarMX · 13h ago
And this is already happening. I have a few acquaintances who will ask ChatGPT things they could readily find on Wikipedia, or use it as a shortcut for mental effort. They've become dependent, even addicted, to this patient butler that can work and think for them; who cares if it makes some mistakes? They'll just ask it more.

Once enough people become like this, they will gladly pay to keep it, they'll consider it a basic necessity.

adverbly · 14h ago
> The idea is that we will all spend more money on AI than we spend on phones

AI is not a consumer product.

Businesses will pay for AI. They will use it for whatever they are building. We will buy what those businesses build.

The medium term here is that AI is going to become part of the value chain. It's gonna be like stripe or insurance or labor.

Mistletoe · 1d ago
I’m constantly confused why Google wants me to use Gemini so much. Why would they want me to use an AI that doesn’t show ads, isn’t their search, and costs them money to generate the responses? The instant we see ads in LLMs we will stop using them, it will be as abrupt and as annoying as the vuvuzela example because using an LLM is a very intimate interaction. It will feel much more jarring when your intimate chat friend becomes a used car salesman to you. I get that AI=stock go up right now but it seems like they are poisoning their own well deeply down this path.
lipowitz · 1d ago
Google thinks it is going to charge a monthly fee for Gemini so they are basically correcting their model to be user paid in the switch from search to LLM.

They are also trying to reduce workers in search, etc, somewhat through the trained LLMs so the other idea is that they have lower costs per user.

I don't think many people will pay monthly fees or will want to pay them for each platform they use which is why they all tend to do so many questionable integration attempts to try to get users to not want to use a separate LLM of their choice in a browser instead.

k__ · 1d ago
This.

Gemini Pro requires a monthly subscription.

Seems like a pretty straightforward business model.

Mistletoe · 10h ago
How many would really pay though? I certainly wouldn't. Would you pay for Google search?
lipowitz · 9h ago
Their ads are very convincing, I'd pay up to 40¢ a month for Gemini to tell me how to recover with a recipe for tomato sauce I accidentally put sugar in, though only in months where I inexplicably add sugar to things.
aleph_minus_one · 1d ago
> I’m constantly confused why Google wants me to use Gemini so much. Why would they want me to use an AI that doesn’t show ads, isn’t their search, and costs them money to generate the responses? The instant we see ads in LLMs we will stop using them, it will be as abrupt and as annoying as the vuvuzela example because using an LLM is a very intimate interaction.

I think with the last sub-clause (that I emphasized), you answered your question: because the conversation is more intimate, Google learns more about your "true interests", be it to make advertising to you more targeted, or for more sinister purposes.

pbhjpbhj · 1d ago
If you're going to use an LLM instead of their search, better to keep you on their estate. Presumably they're using responses to tune a personalisation layer, or in some other way that they can use to modify their advertising algos?
Xenoamorphous · 20h ago
> The instant we see ads in LLMs we will stop using them

People didn’t stop using Google Search, Facebook, Instagram, YouTube, TikTok and a myriad other services and products (and things like TV before those) because they got ads.

tartoran · 17h ago
It's not just ads but plenty of dark patterns. At some point as a user you no longer have much control and no longer get what you were looking for and decide to leave. But the slow boiling leaves a lot of frogs in the pan, it will probably be a slow decay before death. To me browsing the net is unusable without ad blockers but there are many users out there who don't know what ad blockers are.
dragonwriter · 1d ago
> The instant we see ads in LLMs we will stop using them

But what if you don't see distinct ads? LLM advertising can just be paid-for bias in the generated content.

lblume · 1d ago
Moreover, the bias could be arbitrarily subtle. This is something that really worries me about black box AI systems.
bjt · 16h ago
What marketing department on earth is going to pay for an ad campaign so subtle that they can't even tell if it's running?

Ad budgets aren't bottomless. People making decisions about them have a lot of options for where to spend. They want provable attribution so they can tell which channel is giving them the most bang for the buck. If that exists, then the ads will be discernible.

MrGilbert · 23h ago
I'd argue that, at some point, this behavior will surface enough that it gets interesting for governments to take a look at it. At least in my country, ads need to be labeled as such - I don't think this is going to change.
Lu2025 · 22h ago
The entire business model of tech is to fly under the radar of regulations. They can't even force the bastards to stop tormenting us with cookie pop-ups; I seriously doubt the government will do much.
9dev · 23h ago
Google has this pesky problem of carrying almost all their eggs in one basket: The moment their ad business doesn’t work anymore, the behemoth that Google is will come down to its knees. But due to the way the company is structured, they need search to drive users to the ads, and it’s all one big mess of entangled revenue streams that can’t be touched for fear of breaking something.

Now if AI turns out to be the next big thing, they can steer differently next time, sell subscriptions, and avoid all that entanglement with multi-sided markets and layered revenue strategies. At least that’s my take.

autoexec · 16h ago
> they need search to drive users to the ads

Why? The websites we visit can still be infested with google's ads, and so can our gmail accounts, and so can the youtube videos we watch, and they can push ads directly onto our cell phones 24/7. Google has plenty of ways to force ads into your life.

Google used to need search in order to build extensive dossiers on everyone. It told them what people were looking for online. What they were interested in. Now Google has their cell phones, their browser, and their DNS servers doing that for them. Most people are handing all of their browsing history to Google. Google doesn't need search, which is why it's been allowed to atrophy into uselessness.

h2zizzle · 22h ago
>The moment their ad business doesn’t work anymore, the behemoth that Google is will come down to its knees.

Which makes it all the weirder that they seem to be intentionally sabotaging it by nerfing Search into unusability.

Marazan · 20h ago
The slow destruction of Google search is and will continue to be revenue positive for Google right up until the point where it isn't.

Eventually they will break the "trust thermocline" in their search results and that will blow it up but on the way they'll keep making more and more money from every damaging change they make.

woooooo · 19h ago
Crucially, the people driving each incremental change get to celebrate the extra revenue they drove and get promoted for it.
fragmede · 14h ago
This was true, and is why Google didn't come out with ChatGPT themselves 5 years earlier. But since OpenAI's come to take their lunch, they understand this predicament and are pivoting.
9dev · 2h ago
That pivot isn't even close to complete, so this continues to be true. Take ads away now, and Google ceases to exist tomorrow. It's going to take a long time until that fundamentally changes.
ZeroGravitas · 1d ago
I saw my first AI ad yesterday, in Microsoft Copilot, recommending a Jetbrains product.
fragmede · 1d ago
> The instant we see ads in LLMs we will stop using them

You mean like how the instant we see ads on YouTube we will stop using it?

sundarurfriend · 22h ago
Or like how the instant Netflix disables password sharing, we will stop subscribing to it.
dismalaf · 23h ago
Because Gemini has paid tiers. Also to prevent users from going to other providers.
usrusr · 19h ago
There are many phones that won't be bought when businesses can substitute some of their workforce with LLMs. That market is not about B2C at all.
bondarchuk · 23h ago
"It has cost a lot of money and therefore they will try very hard to make it back". I don't think it really makes sense. Companies will always try to make the maximum amount of money anyway.
jerf · 21h ago
Yes, but there are definitely levels of desperation. Companies confident that "somehow, it'll all work out" are quite different than "oh crap the money is gone we need revenue now". We're probably entering that transition sometimes this year.
HappMacDonald · 20h ago
Google is certainly showing unusual signs of revenue wringing desperation at the end-user level in the past year or so.
h2zizzle · 22h ago
No? Some companies are mission-driven. Some are run by people with ethical scruples, and/or who will forego short-term profit for long-term brand image. Some (allegedly) have been infiltrated or taken over by bad actors who purposely sabotage the company's profitability. Some are private and simply do as they wish.

I wish we wouldn't mindlessly repeat these platitudes. Try and falsify your statements before posting.

LeifCarrotson · 19h ago
Some are or at least start that way, but they exist in a competitive market.

In our current regulatory and economic environment, it appears that mission-driven, long-term oriented, ethical companies are typically out-competed by finance-driven, short-term oriented, greedy companies.

The article describes the struggle of using a search engine in 2025. Which is to say, using Google in 2025. Search engines benefit greatly from huge economies of scale, and most websites are optimized for Google SEO and for their ad network. Sure, the folks at DuckDuckGo (my search engine) or Kagi appear to be your good sort of company, but the revenue and popularity of those companies is a rounding error in comparison to Alphabet, Inc. They can't afford the crawlers and infrastructure of the big finance-oriented players, they can't convince most websites to optimize for their engine, and most people don't even know they exist.

Sure, there's a handful of people running the equivalent of a small-town grocery with local farm-sourced produce and hand-selected general goods as a passion project, working long hours and slowly chewing through their savings. And there are a handful of people who feel that the existence of such a place is important, and shop there out of principles in spite of the incentives and penalties associated with that behavior. But most of the country is overrun by Dollar Generals and Wal Marts.

bell-cot · 22h ago
Obviously true.

OTOH, there are far too many people who desperately want to believe in cool new stuff really being free, without any "gotcha" down the line.

Cthulhu_ · 21h ago
What I'm seeing at work now is companies selling AI stuff to other companies - mostly Microsoft in my neck of the woods. For me as an employee, using Copilot is trivial; it's not my wallet. But just like with AWS, where a developer doesn't really need to worry about how many machines their merge request starts up and leaves running, the bills will start to creep up on the companies.

For now Copilot is a fixed $20 / month / person, but it's only a matter of time before it becomes metered, or the advanced models cost more credits. This is also why they're pushing for agents: a single query is cool and all, and its compute cost is reasonably predictable, but an agent can do a lot of interesting things, use 100x the compute of a single query, and put 100x the charge on the corporate credit card.

It'll probably have a chilling effect, with companies being like "ok maybe let's tone down a bit on the AI usage", just like how they hire consultants to bring down their runaway AWS costs.

immibis · 1d ago
More hopeful explanation: they'll never make their money back and this is either part of the great rebalancing or great collapse of the economic forces that be.
strangattractor · 12h ago
> As a user it's a great free ride though. Maybe there IS such a thing as a free lunch after all!

I remember when search was a free ride. The articles that I found in searches were relevant, and there was no wordy boilerplate AI content specifically designed to get me to see all the advertising on the page. There is no free ride - AI will accelerate the enshittification of the Web by orders of magnitude. Barriers to garbage content generation are rapidly approaching 0.

closewith · 1d ago
> The idea is that we will all spend more money on AI that we spend on phones, and we will spend it on those companies only... I don't know, it just doesn't add up.

My companies currently spend more on AI than phones - hardware and subscriptions. It's now the second-highest expense after salaries and directors' remuneration.

Mizza · 1d ago
What are you doing with it?
multjoy · 1d ago
Anything that could possibly justify the expense, I suspect, whether it is the right thing or not.
closewith · 1d ago
Isn't that true of all software development?

But we are lily-white both legally and ethically. One of the perks of a lifestyle business beholden to no investors.

closewith · 1d ago
We make compliance software for EU enterprises that helps them remain GDPR compliant when writing software. We use agentic LLMs in our client-facing software development, which is largely industry-specific CRUD apps that use our existing APIs.
spaceguillotine · 12h ago
This is a perfect example of the VC churn: they hope to get unattainable and unsustainable results, and they don't care what happens to anyone who was doing it sustainably before they came around.

The internet has become total garbage now all because a few men wanted to make a bunch of money by making silicon do the thinking for them.

AI is the quickest route to ruin, ending up with humans like in Idiocracy, devoid of critical thinking. The output of LLMs is so bad to read, and students are just turning in the worst papers using LLMs and learning nothing.

At first my school banned their use, but then Microsoft tipped their hand because they donate a lot of money. Now everyone is allowed to use AI, and they got rid of the requirement to use MLA citations, so everything turned into slop.

babypuncher · 15h ago
We went through this with all the Silicon Valley "disruptors" in the '00s and '10s. It's fun while they're focused solely on burning VC money to build a massive userbase, but as soon as they decide it's time to start actually making money the deal gets a lot less sweet. Only by that time, most of the competitors are gone and you don't have much choice.
Lu2025 · 22h ago
Regarding money, they label a lot of expenses as R&D and write them off their taxes. Taxpayers foot the bill to some extent.
andruby · 21h ago
That's not how accounting works. Companies pay salaries and pay for hardware. They don't make profit so they don't pay taxes.

By labeling the salaries as R&D assets and amortizing that over 5 years (instead of taking it completely in the first year), they're more likely to make "accounting" profit and pay taxes in the early years.

If anything, those legislative changes will likely pull the tax payments forward.

But to your point: not paying taxes because a company is investing doesn't mean taxpayers are footing the bill. It does mean the company isn't contributing to paying taxes while it is in "growth investing" mode.

willvarfar · 1d ago
My dream solution:

The EU creates an institution for public knowledge, a kind of library+tech solution. It probably funds classic libraries in member countries, but it also invests in tech. It dovetails nicely with a big push to get science to thrive in the EU etc.

The tech part builds an in-the-public-interest search engine and AI.

The techies are incentivised to try and whack-a-mole the classic SEO. E.g. they might spot pages that regurgitate, they might downscore sites that are ad-driven, they might upscore obvious sources of truth for things like government, they might downscore pages whose content changes too much etc.

And the AI part is not up for sale for product placement.

This would bring in a golden age of enlightenment, perhaps for - say - 20 years or so, before the inevitable erosion of base mission.

And all the strong data science types would want to work for it!

arn3n · 23h ago
God, I’d love to work for something like this.

The closest equivalent thing we have today is (in my mind) places like the Apache Foundation or LetsEncrypt, places that run huge chunks of open source software or critical internet structure. An “Apache for search” would be great.

tokai · 20h ago
No, the closest equivalent are the various national libraries.
eps · 11h ago
I have a friend who worked for the Apache Foundation. From what he described, it was a bureaucracy nightmare and advanced office politics in equal measure. He left because of that.
graemep · 1d ago
> The tech part makes a in-the-public-interest search engine and AI.

Which will be provided by a private sector contractor and it goes to the lowest bidder who offsets their costs with advertising.

willvarfar · 22h ago
hey! it's my dream, and in my dream world it would be commissioned from academia :)
beAbU · 18h ago
I hope the academia of your dreams creates code that is not like real-world academia :)
wood_spirit · 14h ago
Academia has the full gamut of software quality. A good example that springs to mind is Wirth's operating systems work at ETH.

In the same way that the early Google showed how much value comes from putting top minds on mundane problems, let's imagine that an institute can make a knowledge-first search engine and AI. It's about aligning incentives.

ranyume · 16h ago
You make it sound like the techies will be their own boss. Sorry, but politicians are in charge.
svnt · 15h ago
The democratically-elected politicians are in charge of creating an environment where capitalists can capitalist without consuming everything in the process.

This is pretty much the best arrangement we have come up with so far in human civilization. You seem to be suggesting a tragedy of the commons is instead the ideal we should strive for.

SoftTalker · 14h ago
Sounds like Wikipedia, except for the EU ownership.
renewiltord · 12h ago
I have to say the best part about this fanfic is the choice of hero. It's like having Karl Marx be the protagonist of Atlas Shrugged. Highly entertaining.
boothby · 1d ago
> These days, I find that I am using multiple search engines and often resort to using an LLM to help me find content.

For a few months, I've been wondering: how long until advertisers get their grubby meathooks into the training data? It's trivial to add prompts encouraging product placement, but I would be completely shocked if the big players don't sell out within a year or two, and start biasing the models themselves in this way, if they haven't already.

reubenmorais · 1d ago
Google has been working on auctioning token-level influence during LLM generation for years now: https://research.google/blog/mechanism-design-for-large-lang...
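For a sense of what "token-level influence" could look like mechanically, here is a minimal, purely hypothetical sketch using the Hugging Face transformers generation API. This is not the auction mechanism from the linked paper, just an illustration of nudging decoding toward a made-up sponsor token ("AcmeBnB" is invented):

    # Hypothetical sketch: a small logit bonus for a "sponsor's" tokens during decoding.
    # Not the mechanism from the linked paper; AcmeBnB is an invented brand.
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              LogitsProcessor, LogitsProcessorList)

    class SponsoredBias(LogitsProcessor):
        def __init__(self, token_ids, bonus=2.0):
            self.token_ids = token_ids   # tokens to favour
            self.bonus = bonus           # small, hard-to-notice thumb on the scale

        def __call__(self, input_ids, scores):
            scores[:, self.token_ids] += self.bonus
            return scores

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    sponsor_ids = tok.encode(" AcmeBnB")
    out = model.generate(
        **tok("Where should I stay on my trip?", return_tensors="pt"),
        max_new_tokens=40,
        logits_processor=LogitsProcessorList([SponsoredBias(sponsor_ids)]),
    )
    print(tok.decode(out[0]))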
willvarfar · 1d ago
And just over a year ago now the OpenAI "preferred publisher program" pitch deck to investors leaked. https://news.ycombinator.com/item?id=40310228
sph · 1d ago
Google: ruining their core product for that sweet ad money.
junga · 1d ago
Ads are Google's core product, isn't it?
Parae · 1d ago
Their core product is software meant to make sweet ad money.
NoMoreNicksLeft · 17h ago
Google's core product has always been advertisement. They sell advertisements to companies looking to advertise, and they bring in tens of billions in revenue from that business. In effect, their core product is you: they're selling your eyeballs.

If the bait that they used to bring you to them so they could sell your eyeballs has finally started to rot and stink, then why do people continue to be attracted by it? You claim they've ruined their core product, but it still works as intended, never mind that you've confused what their products actually are.

gofreddygo · 1d ago
> how long until advertisers get their grubby meathooks into the training data

You're so right. It's not an if anymore, but a when. And when it happens, you won't know what's an ad and what isn't.

In recent years I started noticing a correlation between alcohol consumption and movies. I couldn't help but notice how many of the movies I've seen in the past few years promote alcohol and try to associate it with good times. How many of these are paid promotions? I don't know.

And now, after noticing this, every movie that involves alcohol has become distasteful to me, mostly because it keeps the negative side of alcohol consumption in the shadows.

I can see how ads in an LLM can go the same route, deeply embedded in the content and indistinguishable from everything else.

HSO · 1d ago
Ha, now try cigarettes/smoking! At least low-level alcohol consumption is only detrimental to the drinker. Cigarettes start poisoning the air from the moment they are lit, and like noise pollution there is no boundary. I hate them and their smokers with a vengeance, and the foreign satanic cabal that is „hollywood“ sold everyone out for their golden calf tobacco money.
WesolyKubeczek · 1d ago
But a drunkard might sit behind the wheel, at which point it becomes detrimental to everyone on the road…

And there are countless books and movies where the hero has drinks, or routinely swigs some whisky-grade stuff from a flask on his belt to calm his nerves, then drives.

chgs · 1d ago
Driving itself kills more people in the US every month than 9/11, yet it has been glamourised for a century.
WesolyKubeczek · 22h ago
It's just that bad drivers are abundant in the US, and driving is way underregulated for such a car-centric country.
Lu2025 · 22h ago
Right? A woman comes home and immediately pours herself a large glass of red wine, without even washing hands or changing into home clothes. WHO DOES THAT? Pure product placement.
suddenlybananas · 1d ago
I think that your negative view of alcohol is making you a bit conspiratorial. It's an extremely deeply ingrained thing in western culture, you don't need to resort to product placement to explain why filmmakers depict it. People genuinely do have a good time drinking.
aleph_minus_one · 1d ago
> People genuinely do have a good time drinking.

This depends a lot on the person. I, for example, would much more associate "reading scientific textbooks/papers" with having a good time. :-D

suddenlybananas · 1d ago
Sure, I was using a generic sentence [1] not universal quantification!

[1] https://plato.stanford.edu/entries/generics/

immibis · 1d ago
It's that way because of successful marketing - just like smoking, or cars, or fast food.
graemep · 1d ago
People enjoyed drinking long before there was marketing. People have been enjoying alcohol for literally tens of thousands of years. It has been associated with celebrations for many thousands (e.g. Jesus giving people alcohol to keep a wedding reception going - and that is just something that comes to mind - I am sure there are MUCH earlier examples someone familiar with older stuff can come up with).

I would correct it to anti-alcohol sentiment being ingrained in American culture (as it is in some others, such as the Middle East) rather than western culture. It's an American hang-up, as with nudity etc.

zdragnar · 19h ago
Alcoholism was so rampant in the US that enough states ratified a constitutional amendment making it illegal.

It wasn't enough to kill alcohol consumption entirely, but it did cut back on the culture of overindulgence, as measured by death rates before and in the years after.

Other countries also banned alcohol in this time period, and New Zealand voted for it twice but never enacted the ban.

rightbyte · 1d ago
Beer, spirits, etc. were a big thing way before the printing press.
dhosek · 1d ago
I kind of look forward to freshman composition essays “written” with AI that are rife with appeals to use online casinos.
huskyr · 1d ago
Can't wait for all school essays promoting dubious crypto schemes of some sort.
nperez · 1d ago
I'm not going to disagree because greed knows no bounds, but that could be RIP for the enthusiast crowd's proprietary LLM use. We may not have cheap local open models that beat the SOTA, but is it possible to beat an ad-poisoned SOTA model on a consumer laptop? Maybe.
rolandog · 23h ago
If future LLM patterns mimic the other business models, 80% of the prompt will be spent preventing ad recommendations, and the agent will in turn reluctantly respond but suggest that it is malicious to ask for that.

I'm really looking forward to something like a GNU GPT that tries to be as factual, unbiased, libre and open-source as possible (possibly built/trained with Guix OS so we can ensure byte-for-byte reproducibility).

rusk · 1d ago
On the flip side, there could be a cottage industry churning out models of various strains and purities.

This will distress the big players, who want an open field to make money from their own adulterated, inferior product, so home-grown LLMs will probably end up being outlawed or something.

otabdeveloper4 · 22h ago
Yes, the future is in making a plethora of hyper-specialized LLM's, not a sci-fi assistant monopoly.

E.g., I'm sure people will pay for an LLM that plays Magic the Gathering well. They don't need it to know about German poetry or Pokemon trivia.

This could probably be done as LoRAs on top of existing generalist open-weight models. Envision running this locally and having hundreds of LLM "plugins", a la phone apps.
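For what it's worth, the "plugins" part is already mundane: stacking task-specific LoRA adapters on a shared open-weight base is what the peft library does. A rough sketch - the adapter repo names below are invented placeholders, and the base model is just an example:

    # Rough sketch using transformers + peft; adapter repos are invented placeholders.
    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")  # example base

    # One small adapter per niche task, loaded like phone apps.
    model = PeftModel.from_pretrained(base, "someone/mtg-rules-lora", adapter_name="mtg")
    model.load_adapter("someone/german-poetry-lora", adapter_name="poetry")

    model.set_adapter("mtg")  # switch "plugins" at runtime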

jedbrooke · 1d ago
Not quite ads in LLMs, but I had an interesting experience with Google Maps the other day. The directions voice said "in 100 feet, turn left at the <Big Fast Food Chain>". Normally it would say "at the traffic light" or similar. And this wasn't some easy-to-miss hidden street, it was just a normal intersection. I can only hope they aren't changing the routes yet to make you drive by the highest bidder.
jerf · 20h ago
I've had this done at a sufficient variety of different places that I don't think it's advertising.

I'm also not particularly convinced any advertisers would pay for "Hey, we're going to direct people to just drive by your establishment, in a context where they have other goals very front-and-center on their mind. We're not going to tell them about the menu or any specials or let you give any custom messages, just tell them to drive by." Advertisers would want more than just an ambient mentioning of their existence for money.

There's at least two major classes of people, which are, people who take and give directions by road names, and people who take and give directions by landmarks. In cities, landmarks are also going to generally be buildings that have businesses in them. Before the GPS era, when I had to give directions to things like my high school grad party to people who may never have been to the location it was being held in, I would always give directions in both styles, because whichever style may be dominant for you, it doesn't hurt to have the other style available to double-check the directions, especially in an era where they are non-interactive.

(Every one of us Ye Olde Fogeys have memories of trying to navigate by directions given by someone too familiar with how to get to the target location, that left out entire turns, or got street names wrong, or told you to "turn right" on to a 5-way intersection that had two rights, or told you to turn on to a road whose sign was completely obscured by trees, and all sorts of other such fun. With GPS-based directions I still occasionally make wrong turns but it's just not the same when the directions immediately update with a new route.)

jedbrooke · 9h ago
Landmark based directions rather than street names does seem like a plausible explanation. I still have some childhood friends whose houses I don’t know the street address but I know how to get there

I still prefer street names since those tend to be well signed (in my area anyway) and tend not to change, whereas the business on the corner might be different a few years from now.

dizhn · 1d ago
I am still waiting for navigation software to divert your route to make sure you see that establishment. From your experience, it seems like we're close to that reality now.
collingreen · 1d ago
This is devilish. I'm adding your idea to my torment nexus list.
jedbrooke · 9h ago
oof, I’m not sure if I’m proud or ashamed of having an idea in the “torment nexus”. I believe I heard of the idea in some of the discussion surrounding a patent from an automaker to use microphones in the car for a data source for targeted ads. Combine that with self driving cars and you could have a car that takes a sliiight detour to look at “points of interest”
isoprophlex · 1d ago
"Continue driving on Thisandthat Avenue, and admire the happy, handsome people you see on your right, shopping at Vuvuzelas'R'Us, your place for anything airhorn!"
carlosjobim · 22h ago
Most users want the best directions possible from their maps app, and that includes easily recognizable landmarks, such as fast food restaurants.

"Turn left at McDonalds" is what a normal person would say if you asked for directions in a town you don't know. Or they could say "Turn left at McFritzberger street", but what use would that be for you?

Although I've had Google Maps say "Turn right after the pharmacy", and there are three drug stores at the intersection...

shwouchk · 1d ago
This is already happening in full force. SOTA models are already poisoned. Leading providers already push their own products inside webchat system prompts.
J_McQuade · 1d ago
"here is how to to translate this query from T-SQL to PL-SQL... ..."

"... but if you used our VC's latest beau, BozoDB, it could be written like THIS! ... ..."

9 months, max. I give it 9 months.

mike_ivanov · 12h ago
"T-SQL to PL-SQL" -> (implies an > 40 age, most likely being an Ask TOM citizen, a consultant with >> 100K annual income, most likely conservative, maybe family with kids, prone to anxiety/depression, etc) -> This WORRY FREE PEACE OF MIND magic pill takes America by storm, grab yours before it's too late!
Lu2025 · 19h ago
> advertisers

This kind of ad is also impossible to filter. Everyone complains about ads on YouTube or Reddit, but I never see any with my adblockers. Now we won't be able to squash them.

zerocrates · 1d ago
The providers can sell inclusion in the system prompt to advertisers. Run some ad-tech on the first message before it goes to the LLM to see whose ad gets included.
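Mechanically that's a one-liner on the provider side. A hypothetical sketch - pick_sponsor() stands in for whatever ad-tech auction would run on the first message, and the ad copy is invented:

    # Hypothetical sketch of server-side ad insertion into the system prompt.
    from openai import OpenAI

    client = OpenAI()

    def pick_sponsor(user_message: str) -> str:
        # Imagine a real-time ad auction keyed on the user's first message here.
        if "travel" in user_message.lower():
            return " When relevant, speak warmly of staying at AcmeBnB properties."
        return ""

    def answer(user_message: str) -> str:
        system = "You are a helpful assistant." + pick_sponsor(user_message)
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": user_message},
            ],
        )
        return resp.choices[0].message.content

    print(answer("Any tips for planning travel to Lisbon?"))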
kijin · 1d ago
For most advertisers, sure, there's no need to go all the way back to the training data. Advertisers want immediate results. Training takes too long and has uncertain results. Much easier to target the prompt instead.

If you're someone like Marlboro or Coca-Cola, on the other hand, it might be worth your while to pollute the training data and wait for subtle allusions to your product to show up all over the place. Maybe they already did, long before LLMs even existed.

rightbyte · 21h ago
The annoying part is that we are part of the "pollution", since we namedrop Coca-Cola etc.
robocat · 12h ago
> Marlboro or Coca-Cola

Your product placement is appropriately ironic.

morkalork · 1d ago
I can absolutely assure you that SEO companies are already marketing AI strategies oriented around making content easily and preferentially consumable by LLMs and their vendors.
paulgerhardt · 1d ago
GEO model relevance is the only thing that matters: https://a16z.com/geo-over-seo/
rodgerd · 1d ago
There are already companies promising to attack Wikipedia and product LLM-bait YouTube content. Ship's sailed.
vintermann · 1d ago
Sure, but what makes you think they will actually deliver that? There's no honor among spammers. If there's an obvious idea with new tech, 100 sleazy startups will claim to offer it, without even remotely having it.
moron4hire · 1d ago
It's kind of already happening. For example, if you ask an LLM for advice on building an application, it's going to pigeon-hole you into using React.
WesolyKubeczek · 1d ago
That’s because of the statistical likelihood and abundance of web content about React, which seems to be the kinda-default choice. It would have to be a looong con if it was an ad.
pbhjpbhj · 1d ago
Are people putting up vast arrays of websites to promote products/politics solely to sway LLM-feeding crawlers yet?
Lu2025 · 19h ago
I've seen those content mills since before Covid.
hnbad · 1d ago
Or for a more concerning example, GitHub is owned by Microsoft who want to sell cloud services so it stands to reason it would be in their interest to have GitHub Copilot steer developers towards building applications using architectural patterns that lend themselves more to using those cloud services, e.g. service-oriented architecture even when it is against the developer's interests.

This doesn't have to be as blunt as promoting specific libraries or services and it's a bias that could even be introduced "accidentally".

drdrek · 22h ago
Assume that if you thought about it, it's already too late. I've been to an AI SEO session by our VC. It was a guide on how to find a chatbot's primary sources for a keyword and then seed those sources with your content.

Advertisers and spammers have the highest possible incentive to subvert the system, so they will. Which is only one step worse (or better depending on your view) than letting a mega corp control all the flow of information absolutely.

Welcome to the new toll booth of the internet, now with 50% less access to the source material (WOW!), I hope you have a pleasant stay.

aardvarkr · 1d ago
I think for the moment the leading AI companies are strongly incentivized not to succumb to the advertising curse. Their revenue is subscription driven, and the competition is ridiculously fierce and immune to collusion. Everyone is trying to one-up everyone else, and there is no moat that locks you into a single product. Their incentive is to score as high as possible on benchmarks in order to drive up their user base and increase subscriptions. Any time spent on implementing advertising is time their adversaries are spending making their models better. Let’s hope the competition stays fierce so that we don’t get enshittification anytime soon.
myaccountonhn · 20h ago
You can start with subscription, then add ads on top. When you constantly need growth that's kind of the logical conclusion.
galaxyLogic · 1d ago
The ads can be outside of the AI reply-pane. Just like ads are outside of Google search results.
pbhjpbhj · 1d ago
Not anymore they're not, they're tightly integrated.
immibis · 1d ago
? Google search ads aren't outside of the results. They used to be, until they realized they got more clicks if they weren't.
micromacrofoot · 1d ago
fortunately for our investors we have found a way to solve this with more ai
quesera · 18h ago
"AI is like XML — if it's not working for you, you're not using enough of it."
elif · 21h ago
Completely agree with this post. I don't even think he's exaggerating.

I tried to search the full name of a specific roofing company in my area, in quotes, and they weren't on the first page of results. But I got so many disclosed and undisclosed ads for OTHER contractors.

SEO has turned search engines into a kind of quasi-mafia "protection" racket.. "oh you didn't pay your protection fee, wouldn't it be a shame if something happened to your storefront?"

WiggleGuy · 1d ago
I built a portal that makes it easier to query against multiple different search engines (https://allsear.ch/). It's open source, free, all that. I must say, building it really expanded my view of the internet.

I am also a heavy Kagi and Reddit user for search, and usually that's enough. But when it's not, it's concerning how much better other search engines can be, especially since non-tech-savvy folks will never use them.

ccvannorman · 17h ago
Using my default browser (Brave) and pressing "enter" (doing a search) did not do anything. The page just sits there.

Apparently, I need to select a search engine to use this.

I would not use this as a replacement for my duckduckgo or google searches simply because of the UX of not being able to type a query and press "enter" as the default.

WiggleGuy · 4h ago
That's fair.

You can probably hack that experience by making use of the "rules" feature. You can have certain search engines or macros launch automatically upon pressing enter based on the content of the query. If you set a rule to check whether your search contains a vowel (which most will), it's effectively a catch-all rule.

Hacky, but it will work.

BrenBarn · 1d ago
This is another article that, for me, sort of walks right by the answers without realizing it. As I was reading, I was thinking "does this person really not think AI is going to be flooded with ads soon enough?" Then they asked the LLM that, and it basically said yes, and then the response... is to go "Hmmm, I wonder if that will happen"? Yes, of course it's going to happen. Imagine if this was 20 years ago wondering whether ads would infect search engines or the web would be flooded with sites which are ads masquerading as actual content. Why would we believe anything different of AI? The only way it won't happen is if we decide we don't want it, instead of accepting it as inevitable.

And well, the article is ostensibly about AI, but then at the end:

> The investors aren’t just doing this to be nice. Someone is going to expect returns on this huge gamble at some point. > ... > The LLM providers aren’t librarians providing a public service. They’re businesses that have to find a way to earn a ridiculous amount of money for a huge number of big investors, and capitalism does not have builtin morals.

Those are the things that need to change. They have nothing to do with AI. AI is a symptom of a broken socioeconomic system that allows a small (not "huge" in the scheme of things) number of people to "gamble" and then attempt to rig the table so their gamble succeeds.

AI is a cute bunny rabbit and our runaway-inequality-based socioeconomic system is the vat of toxic waste that turned that innocent little bunny into a ravening mutant. Yes, it's bad and needs to be killed, but we'll just be overrun by a million more like it if we don't find a way to lock away that toxic waste.

itzjacki · 22h ago
> They have nothing to do with AI.

Not inherently, but I think LLM services (and maybe other AI based stuff) are corruptible in a much more dangerous way than the things our socioeconomic system has corrupted so far.

Having companies pay to end up on the top of the search engine pile is one thing, but being able to weave commerciality into what are effectively conversations between vulnerable users and an entity they trust is a whole other level of terrible.

slfnflctd · 21h ago
> Imagine if this was 20 years ago wondering whether ads would infect search engines or the web would be flooded with sites which are ads masquerading as actual content

Many of us - naively, in hindsight - really did hope this wouldn't happen at the scale it did, and were appalled at how many big players actively participated in speeding up the process.

I guess it's similar to how a lot of white folks thought racism was over until Obama came along and brought the bigots out of the woodwork.

> lock away that toxic waste

The jarring conclusion I keep trying to see a way around but no longer can is that the toxic waste is part of humanity. How do we get rid of it, or lock it away? One of the oldest questions our species has ever faced. Hard not to just throw up your hands and duck back into your hidey-hole once you realize this.

BrenBarn · 11h ago
> Many of us - naively, in hindsight - really did hope this wouldn't happen at the scale it did, and were appalled at how many big players actively participated in speeding up the process.

Sure, maybe so. But now with hindsight we can see what happened and we should realize that it's going to happen again unless we do something.

> The jarring conclusion I keep trying to see a way around but no longer can is that the toxic waste is part of humanity. How do we get rid of it, or lock it away? One of the oldest questions our species has ever faced. Hard not to just throw up your hands and duck back into your hidey-hole once you realize this.

I think both bad and good are part of humanity. In a sense this "toxic" part is not that different from the part that leads us to, say, descend into drug addiction, steal when we think no one is looking, leave a mess for other people to clean up, etc. We can do these negative things on various scales, but when we do them on a large scale we can screw one another over quite egregiously. The unique thing about humans is our ability to intentionally leverage the good aspects of our nature to hold the bad aspects in check. We've had various ways of doing this throughout history. We just need to accept that setting rules and expectations and enforcing them to prevent bad outcomes is no less "natural" for humans than giving free rein to our more harmful urges.

hennell · 16h ago
It's an interesting article, although I think it's rather telling that the author's search for "postgres slow database" seems to disappear in the LLM section. It mentions the ads disappeared, but there's no mention of the solutions found, the amount of time taken, or changes to how they searched/asked the question.

I've found AI helpful for answering questions, but it's better at answering them plausibly than correctly; I still end up checking links to verify what was said and where it's sourced from. It saves frustration but not really time.

Peteragain · 1d ago
A proposal for a solution. The data is the unique selling point. Put that in public hands, with an API, published algorithms, and its own development team. The free market can then sell user interfaces, filters, and whatever. The metaphor is roads (state managed) and vehicles (for profit). Today I can (physically) go to the British Library and get any published book, or go online and pay for the privilege.
Workaccount2 · 19h ago
This example falls apart because libraries are paid for by taxes.

I wish I could violently shake every internet user while yelling "If you are not paying money for it, you cannot complain about it"

The librarian is selling you a vuvuzela because that is the only way the library has been able to keep the lights on. They offered a membership, but people flipped out: "Libraries are free! I never had to pay in the past! How dare you try and take my money for a free service!" They tried a "Please understand the service we provide and give a donation", but less than 2% of people donated anything. Never mind that there is a backdoor that you can use, allowing you to never need to interact with a librarian while fully utilizing the library's services (which the library still pays for).

The internet was ruined by people unwilling to pay for it. And yes, I know the internet was perfect in 1996, I have a pair of rose colored glasses too.

culebron21 · 15h ago
Absolutely agree. Google search results in particular have become generic and useless. Their YouTube search is a list of 3 generic links and then just complete junk. DuckDuckGo had been lagging behind Google for years, but around 2022 it became on par if not superior.
pajamasam · 23h ago
SEO spam was still easy enough to spot and skip through in search results before the masses of LLM-generated content took over.

They seem to generate extremely specific websites and content for every conceivable search phrase. I'm not even sure what their end goal is since they aren't even always riddled with affiliate links.

Sometimes I wonder if the AI companies are generating these low-quality search results to drive us to use their LLMs instead.

Retr0id · 23h ago
> they aren't even always riddled with affiliate links.

Presumably the goal is to build up a positive-ish reputation, before they start trying to monetize it. Or perhaps to sell the site itself for someone else to monetize, on the basis of the number of clicks it's getting per month.

gerdesj · 1d ago
I am deliberately keeping away from LLMs for search. I'm old enough to remember finally ditching Altavista for the new upstart Google. I did briefly flirt with Ask Jeeves but it was not good enough.

I don't think anyone has it sorted yet. LLM search will always be flawed due to being a next-token guesser - it cannot be trusted for "facts". An LLM fact is not even a considered opinion, it is simply next-token guessing. LLMs certainly cannot be trusted for "current affairs" - they will always be out of date, by definition (they need training).

Modern search - Goog or Bing or whatever - seem to be somewhat confused, ad riddled and stuffed with rubbish results at the top.

I've populated a uBlacklist with some popular lists and the results of my own encounters. DDG and co are mostly useful now, for me.

myself248 · 1d ago
I miss Altavista every day. Case-sensitive search is how you tell DOS from DoS. Putting "exact phrases" in quotes no longer seems to work. Then they added insult to injury by forcing you to +mandate a term otherwise they might just ignore it. Now that no longer works either.

I've entirely given up on Google.

I've made extensive shortcuts so I can directly search various sites straight from my location bar: wikipedia, wiktionary, urbandictionary, genius, imdb, onelook, knowyourmeme, and about two dozen suppliers/distributors/retailers where I regularly shop.

If I need something that's not on that list, I'll try some search engines but I start with the assumption that I'm not going to find it, because the battle for search is lost.

SoftTalker · 1d ago
> I've entirely given up on Google.

I have used Google very little for about 3 years now. Sometimes when DDG fails to find what I'm looking for I'll try Google. It rarely works better.

spauldo · 1d ago
It's really strange, while I agree Google's results aren't as good as they used to be, they're still miles ahead of DDG for me. Is it because I still use keyword search like it's the early 2000s?

I tried to switch to DDG because Google was blocking Hurricane Electric IPv6 tunnels. DDG is still my homepage but I usually end up clicking the bookmark I made for ipv4.google.com. I wish I knew why DDG works for all you people but it's horrible for me.

jononor · 19h ago
Does your Google actually respect the keywords? For me, most of the time it replaces words with "synonyms" (mostly the wrong context, or not really replaceable). And the results are pretty crap as a result - not what I was looking for, but just much more common/generic stuff.
spauldo · 17h ago
If I put them in quotes, yeah.
cheschire · 23h ago
Isn’t DDG basically Bing with a privacy layer?
spauldo · 16h ago
That's what I've been told. I haven't tried Bing directly because... eww... but I assume the results would be similar to DDG.
sgarland · 1d ago
Altavista was the OG. I remember it being cantankerous and requiring you to be specific in how you searched, but if you knew how to use it, it was unmatched. Until Google.
devilbunny · 1d ago
It was fast, which almost nothing else was at the time.

And if people on dialup connections think you’re slow, it’s because you are.

ars · 1d ago
When Google came out it was way better than Altavista, people switched instantly. Specifically Altavista looked at how often a search term was in the result, which wasn't always a helpful thing. Google also noticed if search terms were near each other in a page which was really helpful, otherwise you would get forums with one search term in one message, and the other far away in an unrelated message. Google fixed that.

The web has changed these days, it's an adversarial system now, where web results are aggressively bad and constantly trying to trick you. Google is much harder to implement now.

myself248 · 1d ago
When Google came out, I started using it for some things, because yes it was better at some things, but I didn't stop using Altavista. Stayed with it until the very end, for cases where I could be certain that I knew the exact words that'd be in the page, and Google just sucked at that.

These days I can't even -exclude terms that I know would only appear in the wrong results, Google will show me those results anyway. Nothing about adversarial SEO requires them to ignore my input, that's a different choice.

Lu2025 · 19h ago
Correct. Google became unusable around 2020. I search Wikipedia directly and rely on Duck for other needs. As rudimentary as it is for uncommon languages such as Ukrainian, DDG is still better than Google. Shame on them.
username223 · 18h ago
> Putting "exact phrases" in quotes no longer seems to work. Then they added insult to injury by forcing you to +mandate a term otherwise they might just ignore it. Now that no longer works either.

I don't understand why they got rid of these escape hatches. Sometimes I want the "top" pages containing precisely the text I enter -- no stemming, synonyms, etc. Maybe it shouldn't be the default, but why make it impossible?

In my ideal search world, there would also be an option to eliminate any page with a display ad or affiliate link. Sometimes I only want the pages that aren't trying to make money off of me.

rustcleaner · 17h ago
I have a solution: a search engine which uses machine learning to score the "commercialness" of a page. By commercialness, I mean: is it a table of products with prices; does it have buy buttons; does it use a lot of tracking and analytics; does it have a cart; is there a lot of product talk (and is it overbiased positively); how are all the pages within a couple of link-degrees scoring; ... (and more). Then, give users a slider: the right side means no filtering, the left side means basically only returning universities, Wikipedia, and PBS-tier results.

This has to track number of ads and trackers in a page and not just be about product pages. This measure should also fight SEO spam, as the tracking and advertising elements would cause SEO spammers to lose rank on the engine (disincentivising an arms race).

Add in the patently obvious need for the poweruser's 2nd search bar, which takes set notation statements and at least one of a few popular powerful regex languages, and finally add cookie stored, user-suppliable domain blacklists and whitelists (which can be downloaded as a .txt and reuploaded later on a new browser profile if needed). I never ever want to see Experts Exchange for any reason in my results, as an immediately grasped example. Give the users more control, quit automagicking everything behind a conversationally universal idiot-bar!
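A crude first pass at that score doesn't even need machine learning. A hypothetical sketch of the heuristic version - the signal lists, weights, and slider threshold are invented for illustration, and a real system would learn them:

    # Hypothetical heuristic "commercialness" score for a fetched page.
    import re

    TRACKER_HINTS = ("googletagmanager", "doubleclick", "facebook.net", "hotjar")
    BUY_HINTS = ("add to cart", "buy now", "checkout", "free shipping")

    def commercialness(html: str) -> float:
        text = html.lower()
        score = 0.0
        score += 2.0 * sum(text.count(t) for t in TRACKER_HINTS)   # tracking/analytics
        score += 3.0 * sum(text.count(b) for b in BUY_HINTS)       # carts and buy buttons
        score += 1.0 * len(re.findall(r"\$\s?\d[\d.,]*", text))    # price listings
        score += 0.5 * text.count("<iframe")                       # embedded ad slots
        return score

    def passes_filter(html: str, slider: float) -> bool:
        # slider 1.0 = no filtering; 0.0 = only near-zero commercial signals pass
        return slider >= 1.0 or commercialness(html) <= slider * 100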

username223 · 16h ago
I use uBlacklist to get rid of Expert Sexchange and similar low-quality sites in search results, and it seems to work well enough.

An "advanced mode" supporting literal keywords (with and without stemming) and boolean operators wouldn't cost the search companies anything. I think supporting regexp search would be hard: do you search your index for fixed substrings and expand around them? I'm not a search person...

I don't think you'd need much in the way of machine learning to filter out the spam. There are relatively few third-party display ad servers and affiliate networks, and those are the main lazy ways to make money. There's no need to filter out all commercial content; just getting rid of the "passive income" bros would be enough.

shpx · 1d ago
If you ask ChatGPT 4o about a current event it will google things (do some sort of web search) and summarise the result.
vintermann · 1d ago
And often I have to tell it not to search, because it will just pull SEO-polluted answers from Google and launder them slightly.
hug · 1d ago
The author would do well to check out Kagi -- the search results for all of the suggested queries are, to my eye, better than the results they found.

Given that Kagi's higher tier plans come with search-enabled LLM chat interfaces, and those searches use Kagi's results (which, again, appear to be superior) it seems to me that you get the best of both worlds: Better search, and better search results to feed into your search-enabled LLM queries.

I am not affiliated with Kagi or anything, it's just honestly that good a product.

bigstrat2003 · 1d ago
I second the recommendation for Kagi. It just blows all competitors out of the water in my opinion, and I'm happy to pay for it as long as they keep that quality up.
mplanchard · 1d ago
Third. The ability to uprank sites makes the “finding known content” use case absolutely amazing. The specific postgres docs case in the blog post is one I am constantly using kagi as my external brain for.
pjerem · 1d ago
Happy customer of Kagi here too. However if I’m being honest, I’m starting to get LLM generated content in Kagi results.

I don’t think it’s their fault, or that it happens more there than everywhere else, or that there’s much they could do about it, but it happens.

sph · 1d ago
I am pretty sure 50% of my search results on Kagi (or Google or Bing) are LLM-generated. You can search anything and find a dozen websites that, surprise, have a page dedicated exactly to that topic, organized in three or four neat sections.

The internet is dead and is starting to smell.

rkaveland · 18h ago
Author here. I will be trying out Kagi for some time. Search is worth paying money for. Thanks for the heads-up!
galaxyLogic · 1d ago
I've been using Perplexity lately while learning to use the Yamaha SeqTrack synthesizer. I have the UserGuide, but I find it easier and faster to ask Perplexity for things like "How do I mute a track?".

It never occurred to me I could use Google this way. And it is a novel idea to me, that it seems to be better to use AI than read a manual.

otabdeveloper4 · 22h ago
If Google's LLM is trained on the SeqTrack manual, then yes.

If it's not, then it will just invent some plausible-sounding bullshit that doesn't actually work.

After the fifth time you get burnt by this the whole LLM experience starts to sour.

galaxyLogic · 4h ago
Yes. I wonder what LLM providers' approach is to ensuring the high quality of training materials?
emacdona · 17h ago
Lately, I've been more and more disappointed with Kagi. It seems to be falling victim to the same SEO spam. For example, today I was wondering what Scala devs use as alternative to Akka. I searched for "scala akka alternative". Try the same (using Kagi). I get mostly listicles and SEO spam. I have to scroll halfway down the page to see links to "Pekko" and "Zio"... and they are hard to separate from the noise. This is just my most recent example... it just "feels" like this happens a lot more, even with Kagi now.

However, the reason I started paying for Kagi was because they let me completely block websites from search results -- and they still let me do that. That feature alone will keep me as a paying customer for the near term.

emacdona · 17h ago
To be fair to Kagi:

This was in the top three results, and I can't tell if it's "real", or just a page created to capture those exact search terms!

https://akkaalternatives.com/

If the goal of that site really is just to capture clicks via very specific web searches... and people are creating sites like that at scale... what hope do we have of saving the web? :-(

GuinansEyebrows · 13h ago
I would probably pay for Kagi if there were a premium tier that excluded LLM features.
Kiyo-Lynn · 1d ago
The current search engines really feel like a librarian who's always trying to sell you something. I just want to find a simple answer, but I keep getting led to all kinds of other pages. I believe if search engines were more like public libraries, focused on providing information rather than recommending things for commercial reasons, the experience would be so much better.
therein · 1d ago
Did Google take another nosedive? I genuinely truly did not use it once in the last 6 months. Kagi actually works as a complete substitute now.
gkn · 15h ago
Google is the greatest shopping center in the world, and I must admit, I really do like it. There must be more shoppers around than information seekers, it seems.
iamkeithmccoy · 1d ago
> AIs are much better at responding to my intent, and they rarely attempt to sell me anything

Yet. It's only a matter of time before AI becomes ad-riddled and enshittified.

horsawlarway · 18h ago
So at least in one sense - there is a tantalizing prospect here for AI that was lost for search engines.

Model weights are snapshots, and we can preserve them.

It would be like if we could keep a snapshot of the search index for google every 6 months. Doesn't matter if the "current" version is garbage, if my search target exists in an older copy that's not as corrupted, and I can choose to use that instead.

And at least this time around, I think this was built in from the start - you pin against a specific model for most serious business use-cases.

I can store open model weights locally, cheaply.
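Concretely, both halves of that are routine today: hosted APIs let you pin a dated model snapshot rather than a floating alias, and open weights can be archived locally. A small sketch - the model names and archive path are just examples of the pattern, not a recommendation:

    # Sketch: pin a dated hosted model, and archive open weights locally.
    from openai import OpenAI
    from huggingface_hub import snapshot_download

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-2024-08-06",  # dated snapshot, not the floating "gpt-4o" alias
        messages=[{"role": "user", "content": "ping"}],
    )

    path = snapshot_download(
        repo_id="Qwen/Qwen2.5-7B-Instruct",              # example open-weight model
        local_dir="/archive/models/qwen2.5-7b-2025-06",  # cheap local insurance
    )
    print(resp.choices[0].message.content, path)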

---

So I 100% agree that the ads are going to come (I can't foresee any possible alternative outside of banning ad-based content promotion - which, as an aside... I'm strongly in favor of proposing as serious legislation, particularly in the context of AI).

But this time the ad riddled version still has to be better than the old version I can boot up and run.

It'll be interesting to see how that tension plays out.

bdangubic · 1d ago
they won’t as long as you are paying $200+/month
vintermann · 1d ago
Bold to assume they won't try to double-dip and profit from you in more than one way. Cable TV got ads, subscription newspapers got ads (and advertorials). Just because it's not free doesn't mean you aren't the product.
lesuorac · 19h ago
Cable never claimed to be ad-free; it's an old wives' tale.
carlosjobim · 22h ago
In that moment you cancel your subscription. Why does this argument come up again and again? The past, present, and future are not the same thing. You should especially notice that when having a meal.
chgs · 1d ago
If they can make an extra $20 from adverts on top, they will.

Sky TV makes 10 times as much from subscription as adverts but spends 30% of the time showing you adverts. London underground revenue is a similar ratio - for every £9 tickets they make £1 in adverts. If I go to the cinema they spend 20 minutes showing adverts to people who spent £50 on tickets and popcorn.

Companies have very little incentive not to make things shit with adverts. The measurable cost to them is tiny, the cost to the rest of the world is massive. Odeon won’t attribute lost revenue from my reduced visits to their adverts, but will measure the 50p or whatever they get.

mrguyorama · 14h ago
How did that work for cable and satellite TV and streaming services and and and
pengaru · 16h ago
You'd have to be pretty daft to not imagine your future LLM conversations being corrupted by private interests, in far more subtle ways than the obvious ads littering search results.
jay_kyburz · 1d ago
The web is in such a bad state, I think there is probably an opportunity for traditional publishers (or somebody else, like a university) to start a walled garden web. A vast trove of interesting, and valuable information on every topic. No shopping, no ads. Just the content you would find in the library.

People used to spend money on books and magazines, I'm sure some of them could be convinced to sign up for a Netflix of books and magazines.

alisonatwork · 1d ago
We already have that and it's Wikipedia.
jay_kyburz · 1d ago
Can you imagine how much better it would be if people were paid to create the content!
bawolff · 1d ago
I think it would be worse. Money tends to ruin collaborative communities.
melagonster · 1d ago
They should offer more money. Wikipedia has invited some professors to write; professors take responsibility for their work for their whole professional lives.
aleph_minus_one · 1d ago
Professors don't necessarily want to waste their time fighting with Wikipedia admins.
ars · 14h ago
It doesn't need to be collaborative, people write books for money all the time.

It's how the vast majority of human knowledge has been stored and perpetuated for millennia.

This business of people writing up their knowledge for free (Wikipedia, StackExchange, forums, Reddit, etc.) is relatively new, and only semi-working.

yreg · 1d ago
I don't think it would (and could) be any better.

That being said Wikipedia is not a substitute of the "old" web.

magarnicle · 1d ago
So why aren't real libraries like this? Can you fund the internet the same way?
fancyfredbot · 1d ago
In the UK 190 libraries have been closed in the last five years. Before closure some of these only opened 2 days a week. It's common for libraries which remain open to be staffed by unpaid volunteers. My point is that the funding model for libraries is not exactly a massive success! I expect these libraries would have accepted advertising funding before they chose closure.

I'm not aware of any advertising funded libraries although I'm sure the idea has been considered. I think that in the physical world the cost per visitor in a library is probably too high for such a funding model to be successful. Also the library is not typically visited by the highest value advertising targets limiting the amount which can be raised this way. I think these are probably the real reason why real libraries aren't like this!

graemep · 1d ago
I think that is more the result of bad government than anything else. Libraries are a low profile resource with long term benefits (e.g. improving education and quality of life) in ways that are not immediately captured by metrics.

They are incredibly good value for money, if you understand the benefits and take a long view. However, politically they are an easy thing to cut.

fancyfredbot · 21h ago
This is nothing to do with libraries being good or bad value though.

Libraries are not advertising funded because advertisers are not very interested.

If advertisers were interested then these libraries would likely be just as infested with adverts as the internet.

If we tried to fund the internet in the same way as libraries advertisers would still be strongly incentivised to show adverts and would still work hard to find ways to do so. So it's not clear the situation would change much if there was government funding for content creators.

DoctorOW · 1d ago
The equivalent of the library for online research IS real libraries. Your library card likely gives you access to ad free online resources, I know mine does and I've been a patron of several libraries as I've moved around.
hnbad · 1d ago
Because real libraries are not for-profit enterprises and have a negative (or at least extremely unimpressive) ARR. Incidentally their leadership positions also don't provide anywhere near the levels of compensation you'd expect for a C-level position at an equally large company.

Asking for "the Internet" to be funded the same way as real libraries is quite the contrary to the dominant cultural narrative which asks for public services and entire governments to be operated "like a business", which usually means cutting funding, selling assets, doing layoffs and eventually scrapping it for parts once it has become predictably defunct.

ChrisMarshallNY · 23h ago
> I would rather use a library where the librarians don’t have financial incentives to show me certain kinds of books more often than others.

Oh, you sweet summer child...

I have family that used to be in charge of dealing with institutional corruption. In particular, public service corruption.

It's bad. Very, very bad. When "public servants" are paid less than their private counterparts, are routinely treated like crap by their employers, as well as those they serve, and they are in charge of services that could be incredibly lucrative to others, you're guaranteed to get corruption.

"Let's just use AI!" is the rallying cry.

Now, let's examine a scenario, where the folks that can make money from the service, also run the tools that implement the service...

rightbyte · 21h ago
What are you claiming here? That librarians get kickbacks on certain books?
ChrisMarshallNY · 21h ago
Probably in that case (the hypothetical), but everyone knows librarians do it for the bananas.

ook

jeffbee · 16h ago
There is a good article scattered within this article, but unfortunately it is hard to take seriously when it claims to be based on "research" entitled "Is Google Getting Worse" which is one of the most misleading papers of the last few years. That's the title and everyone just closes the tab. University researchers have proved that Google is getting worse! There was a paper! But the problem with the paper is that none of the search results it evaluates are from Google.
insane_dreamer · 16h ago
I think it's a bit naive to think that companies like OpenAI will be content with their monthly user subscription fees and not seek to monetize their content through ads. Yahoo and Google didn't have ads at the beginning either (neither did Facebook or Instagram). And the reason Google became the dominant tech company and money-maker it is today is AdWords.
NoMoreNicksLeft · 17h ago
>Yet these days I find that I often can no longer convince the search engine to give me what I want, even when I know it exists and can describe the shape in detail. These days, I find that I am using multiple search engines and often resort to using an LLM to help me find content.

A couple months ago, I spent a week or two writing some shell scripts to exhaustively mine one of those pdf hosting companies, looking for digital copies of Paste magazine. I only became aware that they might still exist after having spent at least a week trudging through Wayback Machine's archives of the old Paste website. I think I managed to get 8 or 9 issues total.

Search is dead. There was a time when I could probably have found those with a careful Google search in under an hour.

zkmon · 21h ago
Selling has poisoned human life. As someone who grew up on a farm, in a culture that never bothered to sell anything and never cared to impress anyone, we pitied the pathetic business/sales people and anyone who tried to impress others through art/acts etc. Street performers such as drama artists and acrobats got some kind of give-aways.

Today, I can't watch any TV without immediately realizing that every face I see on TV is forced to sell their expression and talk. They are basically selling, not expressing their true feelings. Every great movie, actor, great singer, great anchor - everyone. There is nothing natural in human interactions any more.

DavidPiper · 1d ago
> The LLM providers [are] businesses that have to find a way to earn a ridiculous amount of money for a huge number of big investors, and capitalism does not have builtin morals. What externalities will the broader public end up paying?

I actually have a theory about this. I hate it, but I can absolutely imagine this future.

I'm going to specifically talk about the software engineering industry, but let's assume that LLMs progress to the stage where "vibe coding" can be applied to other areas ("vibe writing", "vibe research", "vibe security", "vibe art", "vibe doctors", "vibe management", "vibe CEOs", etc.)

It only takes a few years of "vibe coding graduates" being successful in their work to create a new class of software engineer - this is in fact what AI companies are actively encouraging / envisioning as the future. Assuming this happens in the next few years, we're still in the phase where AI companies are burning money acquiring as many of these users as possible.

In about 5 years, some of those vibe coders will become vibe managers, and executives will no doubt be even more invested in LLMs as the solution to their problems.

At a certain tipping point, a large part of the industry can't actually function effectively without LLMs. I don't know when this point will be, but vibe coders (or other vibe <industry>ers) don't have to be a majority, they just have to be a large enough group.

Suddenly AI companies have all their losses called in and they have to pay back their VCs.

LLM usage prices skyrocket.

----

Four things happen across 2 axes:

- [A] Companies that can afford to pay skyrocketing LLM costs, vs. [B] those that can't

- [C] Companies that have reached a critical mass of vibe coders, vs. [D] those that haven't

----

[BC] These companies collapse. They don't have talent and they can't afford to outsource it to LLMs anymore.

[BD] These companies lay off all their vibe coders because they can't afford LLMs anymore. They survive on the talent they retain, but this looks very different if you're a large or small business. Small businesses probably fail too.

[AC] These companies see an enormous increase in costs that they cannot avoid. Large layoffs likely, but widespread vibe coding continues.

[AD] These companies have a decision to make: lay off all their vibe coders, or foot the LLM bill. The action they take will depend on their exact circumstances. Again, most small businesses in this situation probably fail.

---

The real question, for the surviving vibe companies [AC, AD], is whether they will be able to sustain such high costs in the long run, and even if they can, whether enough of them will be able to do so to pay back all the AI companies' losses to that date.

Interesting times ahead, maybe.

petesergeant · 1d ago
I mean sure, but also, I feel like the ability to query an LLM for something is an invaluable resource I never had before and has made knowledge acquisition immeasurably easier for me. I definitely search the web much, much less when I'm trying to learn something.
cbsmith · 1d ago
Some curious misconceptions here about how the search industry works.

There's a very strong financial incentive for ad-powered search engines to keep SEO spam out of search results, because that makes advertisers more willing to pay for search placement. A publicly run search engine would not have those incentives and would be at if anything graver risk of a "tragedy of the commons" type scenario where the engine is overtaken with spam.

Yes, there are perverse incentives to populate search engine results with paid placements, but the best corrective force I can think of is having more competition in the search space. As long as people are willing to try other search engines (spoiler: for the most part, they are currently not), this creates a strong incentive to ensure that paid placements that harm the search experience are kept to a minimum.

...and I think the concerns about the profitability of the LLM space are completely missing the larger agenda. Even if public use of OpenAI and its competitors NEVER TURNS A PROFIT, there is tremendous economic opportunity that investors expect to realize from a company with intelligent/powerful LLMs. That is why they are pumping so much money into these companies.

willvarfar · 1d ago
This stopped being true when Google bought DoubleClick?

Google gets money from sending users to ad-laden sites. They get to double-dip.

If Google _didn't_ own DoubleClick etc, then there would be an incentive for them to prioritise content over content farms etc.

nehal3m · 1d ago
This reeks of “communism is when capitalism” to me, because:

>A publicly run search engine would not have those incentives and would be at if anything graver risk of a "tragedy of the commons" type scenario where the engine is overtaken with spam.

But then how come that is exactly what is happening with modern search engines? It’s just always advertising that comes along and fucks up a good thing.

hnbad · 1d ago
If "the government" were to run Google Search, it would be bad because nobody cares about making it good because profit is the only motivating force for organizations.

Because Google runs Google Search, it's instead only bad because profit motivates Google to push its services, increase ad impressions/interactions and incentivize users to not actually leave the search engine result page (e.g. by citing or summarizing content of the related web pages on the result page).

And because competition is good, Google actively competing with the sites it indexes for the attention of its users is a good thing, even if those sites losing out and failing would result in worse search results over time.

Hang on, maybe "content stealing" really already was a problem before LLMs made it their entire MO and those greedy newspaper publishers were onto something when they complained about Google lifting their news feeds even if it provided "exposure" for them.

cess11 · 1d ago
"There's a very strong financial incentive for ad-powered search engines to keep SEO spam out of search results"

No. Shitty results keep people exposed to ads for longer, on average.

'A publicly run search engine would not have those incentives and would be at if anything graver risk of a "tragedy of the commons" type scenario where the engine is overtaken with spam.'

You might be pleasantly surprised by the main work of Elinor Ostrom, the 2009 Nobel laureate in economics: Governing the Commons.

eviks · 1d ago
> These days, I find that I am using multiple search engines and often resort to using an LLM to help me find content.

> This is just my anecdata

Not a single example, so it's not even that, just vibes

> the past few years. There’s recent science to read about the quality of search.

Let's look at the "science"

> We monitored Google, Bing and DuckDuckGo for a year on 7,392 product review queries.

Oh, so there is nothing about the past few years either. And this isn't "search" in general, but a very narrow category of search that is one of the prime targets for SEO spam, so it has always been bad, and the same incentives made review content highly suspect before any SEO was ever involved.

Which multiple engines can help you here? Which LLMs?

GnarfGnarf · 21h ago
What makes the author think we are entitled to free search? How much would you pay for ad-free search? The “golden era” of free search was just setting us up for the plucking.