Web search on the Anthropic API

255 cmogni1 57 5/7/2025, 8:18:47 PM anthropic.com ↗


cmogni1 · 18h ago
I think the most interesting thing to me is they have multi-hop search & query refinement built in based on prior context/searches. I'm curious how well this works.

I've built a lot of LLM applications with web browsing built in. Allow/block lists are easy to implement with most web search APIs, but multi-hop gets really hairy (and expensive) to do well because it usually requires context from the URLs themselves.

The thing I'm still not seeing here that makes LLM web browsing particularly difficult is the mismatch between search result relevance vs LLM relevance. Getting a diverse list of links is great when searching Google because there is less context per query, but what I really need from an out-of-the-box LLM web browsing API is reranking based on the richer context provided by a message thread/prompt.

For example, writing an article about the side effects of Accutane should err on the side of pulling in research articles first for higher quality information and not blog posts.

It's possible to do this reranking decently well with LLMs (I do it in my "agents" that I've written), but I haven't seen this highlighted from anyone thus far, including in this announcement.
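The reranking described above can be sketched roughly like this. This is a minimal illustration, not anyone's actual implementation: the `score_with_llm` function is a stand-in using keyword overlap, where a real version would prompt a model for a relevance score against the full message thread.

```python
# Sketch of context-aware reranking of search results.
# score_with_llm is a stand-in heuristic; in practice you'd ask an LLM
# to rate each result's relevance to the whole conversation context.

def score_with_llm(result: dict, conversation_context: str) -> float:
    # Stand-in: fraction of context keywords appearing in the snippet.
    keywords = set(conversation_context.lower().split())
    snippet_words = set(result["snippet"].lower().split())
    return len(keywords & snippet_words) / max(len(keywords), 1)

def rerank(results: list[dict], conversation_context: str) -> list[dict]:
    # Sort search results by context-aware score, highest first.
    return sorted(
        results,
        key=lambda r: score_with_llm(r, conversation_context),
        reverse=True,
    )

results = [
    {"url": "https://example.com/blog", "snippet": "my accutane journey and diet tips"},
    {"url": "https://example.com/study", "snippet": "randomized trial of accutane side effects"},
]
ranked = rerank(results, "side effects of accutane research")
```

With richer context ("research" in the prompt), the study outranks the blog post, which is the behavior the out-of-the-box APIs don't expose knobs for.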

simple10 · 18h ago
That's been my experience as well. Web search built into the API is great for convenience, but it would be ideal to be able to provide detailed search and reranking params.

Would be interesting to see comparisons for custom web search RAG vs API. I'm assuming that many of the search "params" of the API could be controlled via prompting?

peterldowns · 14h ago
> For example, writing an article about the side effects of Accutane should err on the side of pulling in research articles first for higher quality information and not blog posts.

Interesting, I'm taking isotretinoin right now and I've found it's more interesting and useful to me to read "real" experiences (from reddit and blogs) than research papers.

TuringTourist · 14h ago
Can you elaborate? What information are you gleaning from anecdotes that is both reliable and efficacious enough to outweigh research?

I'm not trying to challenge your point, I am genuinely curious.

peterldowns · 13h ago
I just want to hear about how other people have felt while taking the medicine. I don't care about aggregate statistics very much. Honestly what research do you read and for what purpose? All social science is basically junk and most medical research is about people whose bodies and lifestyles are very different than mine.
TechDebtDevin · 14h ago
Wear lots of (mineral) sunscreen, and drink lots and lots of water. La Roche-Posay lotions are what I used, and continue to use with tretinoin. Sunscreen is the most important.
peterldowns · 14h ago
Great advice, already quite on top of it. I'd recommend checking out stylevana and importing some of the japanese/korean sunscreens if you haven't tried them out yet!
minimaxir · 19h ago
The web search functionality is also available in the backend Workbench (click the wrench Tools icon) https://console.anthropic.com/workbench/

The API request notably includes the exact text it cites from its sources (https://docs.anthropic.com/en/docs/build-with-claude/tool-us...), which is nifty.

Cost-wise it's interesting. $10/1000 queries is much cheaper for heavy use than Google's Gemini (1500 free per day then $35/1000) when you'd expect Google to be the cheaper option. https://ai.google.dev/gemini-api/docs/grounding
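The quoted prices can be sanity-checked with a quick back-of-the-envelope calculation (search fees only, excluding token costs, using the $10/1k Anthropic price and Gemini's 1,500 free per day then $35/1k):

```python
def monthly_search_cost_usd(queries_per_day: int, days: int = 30) -> dict:
    """Compare monthly search-fee cost (excluding token costs)."""
    total = queries_per_day * days
    # Anthropic: flat $10 per 1,000 searches, no free tier.
    anthropic = total * 10 / 1000
    # Gemini grounding: 1,500 free per day, then $35 per 1,000.
    billable = max(queries_per_day - 1500, 0) * days
    gemini = billable * 35 / 1000
    return {"anthropic": anthropic, "gemini": gemini}

print(monthly_search_cost_usd(10_000))  # heavy use: Anthropic far cheaper
print(monthly_search_cost_usd(1_000))   # light use: Gemini free tier wins
```

At 10,000 queries/day the month comes to $3,000 vs $8,925; under 1,500/day Gemini is free, so the crossover depends entirely on volume.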

istjohn · 17h ago
Well also Google has put onerous conditions on their service:

- If you show users text generated by Gemini using Google Search (grounded Gemini), you must display a provided widget with suggested search terms that links directly to Google Search results on google.com.

- You may not modify the text generated by grounded Gemini before displaying it to your users.

- You may not store grounded responses more than 30 days, except for user histories, which can retain responses for up to 6 months.

https://ai.google.dev/gemini-api/terms#grounding-with-google...

https://ai.google.dev/gemini-api/docs/grounding/search-sugge...

miohtama · 16h ago
Google obviously does not want to cannibalise their golden goose. However it's inevitable that Google search will start to suffer because people need it less and less with LLMs.
handfuloflight · 18h ago
So the price is just the $0.01 per query? Are they not charging for the tokens loaded into context from the various sources?
minimaxir · 18h ago
The query cost is in addition to tokens used. It is unclear if the tokens ingested from the search query count as additional input tokens.

> Web search is available on the Anthropic API for $10 per 1,000 searches, plus standard token costs for search-generated content.

> Each web search counts as one use, regardless of the number of results returned. If an error occurs during web search, the web search will not be billed.

stephpang · 13h ago
Hi, Stephanie from Anthropic here. Thanks for the feedback! We've updated the docs to hopefully make it a little clearer, but yes, search results do count towards input tokens.

https://docs.anthropic.com/en/docs/build-with-claude/tool-us...

minimaxir · 7h ago
Thanks for the update!

> Web search results in the conversation are counted as input tokens on subsequent completion requests during the current turn or on subsequent conversation turns.

Yes, that's clear.

jarbus · 17h ago
Is search really that costly to run? $10/1000 searches seems really pricey. I'm wondering if these costs will come down in a few years.
jsnell · 16h ago
Yes.

The Bing Search API is priced at $15/1k queries in the cheapest tier, Brave API is $9 at the non-toy tier, Google's pricing for a general search API is unknown but their Search grounding in Gemini costs $35/1k queries.

Search API prices have been going up, not down, over time. The opposite of LLMs, which have gotten 1000x cheaper over the last two years.

jwr · 14h ago
> Google's pricing for a general search API

As I discovered recently, and much to my surprise, Google does not offer a "general search API", at least not officially.

There is a "custom search" API that sounds like web search, but isn't: it offers a subset of the index, which is not immediately apparent. Confusing and misleading labeling there.

Bing offers something a bit better, but I recently ended up trying the Kagi API, and it is the best thing I found so far. Expensive ($25/1000), but works well.

jsnell · 12h ago
There are multiple search engines known to be based on Google's API (Startpage, Leta, Kagi), so that product definitely exists. But that's about all we know. They indeed do not publish anything about it. We don't know the price, the terms, or even the name.
formercoder · 14h ago
I work at Google but not on this. We do offer Gemini with Google Search grounding which is similar to a search API.
teeklp · 1h ago
How much do you pay people to use this?
QuadmasterXLII · 12h ago
??????
camkego · 8h ago
Do you have any references to the point that the Google Custom Search API is for a subset of the regular Google search index?
ricw · 3h ago
No reference here, but I found this out the hard way too. The Google search API is utterly useless, in fact: it returns entirely different search results vs using the web. Bing is better. Haven't tried Kagi yet.
ColinHayhurst · 9h ago
Excuse the self-promotion but Mojeek is £3/1,000: https://www.mojeek.com/services/search/web-search-api/
firtoz · 8h ago
> Can I store data obtained through the API?

> You can store results on Business plan and optionally on the Enterprise plan. For other plans, you may store the results for 1 hour to enable caching.

Curious... I can understand that this may be a defensive measure, but it feels unenforceable. And in some cases it's impractical for the user. After seeing this I may keep looking for alternatives, for example, because it's not clear to me: if I have a chat history with search results embedded in one of the messages, do I need some kind of mechanism to clean those out?
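For the "store for 1 hour to enable caching" allowance, one simple approach is a TTL cache keyed by query that evicts anything older than an hour. A minimal sketch (class and field names are mine, not from any provider's SDK):

```python
import time

class SearchResultCache:
    """Cache search results, evicting entries older than ttl_seconds."""

    def __init__(self, ttl_seconds: float = 3600):  # 1 hour
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, list]] = {}

    def put(self, query: str, results: list) -> None:
        self._store[query] = (time.monotonic(), results)

    def get(self, query: str):
        entry = self._store.get(query)
        if entry is None:
            return None
        stored_at, results = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[query]  # expired: drop to honor the storage limit
            return None
        return results

cache = SearchResultCache(ttl_seconds=3600)
cache.put("anthropic web search", [{"url": "https://anthropic.com"}])
```

This doesn't solve the chat-history problem, though: results pasted into a stored conversation would still need a separate expiry sweep.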

OxfordOutlander · 15h ago
OpenAI search mode is $30-50 per 1000 depending on low-high context

Gemini is $30/1000

So Anthropic is actually the cheapest.

For context, exa is $5 / 1000.

AznHisoka · 16h ago
If you want an unofficial API, most data providers usually charge $4/1000 queries or so. By unofficial, I mean they just scrape what's in Google and return that to you. So that's the benchmark I use, which means the cost here is around 2x that.

As far as I know, the pricing really hasn't gone down over the years. If anything it has gone up, because Google is increasingly making it harder for these providers.

Manouchehri · 15h ago
That seems expensive.

For 100 results per query, serper.dev is $2/1000 queries and Bright Data is $1.5/1000 queries.

jbellis · 14h ago
I'm not sure that's correct -- the first party APIs are priced per query but BD is per 1k results. Not immediately obvious what they count as a "result" tho.
Manouchehri · 2h ago
It's really poor wording. Bright Data does indeed consider 100 results in a single request to be a single billed "result" event, at $1.5/1000 requests.

I always set 100 results per request from Bright Data, and I can see my bill indeed says `SERP Requests: x reqs @ 1.5 $/CPM` (where `x` is the number of requests I've made, not x * 100).

https://docs.brightdata.com/scraping-automation/serp-api/faq...

For serper.dev, they consider 10 results to be 1 "credit", and 20 to 100 results to be 2 "credits". They bill at $50/50,000 credits, so it becomes $1/1000 requests if you are okay with just 10 results per request, or $2/1000 requests if you want 100 results per request.

(Both providers here scale pricing with larger volumes, just trying to compare the easiest price point for those getting started.)
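The credit math above, as stated in this comment, works out like this (entry-level price points only; both providers discount at volume):

```python
def serper_cost_usd(requests: int, results_per_request: int) -> float:
    # serper.dev: 10 results = 1 credit; 20-100 results = 2 credits.
    # Billed at $50 per 50,000 credits.
    credits_per_request = 1 if results_per_request <= 10 else 2
    return requests * credits_per_request * 50 / 50_000

def brightdata_cost_usd(requests: int) -> float:
    # Bright Data SERP: $1.5 per 1,000 requests, up to 100 results each.
    return requests * 1.5 / 1000

print(serper_cost_usd(1000, 10))    # $1 per 1,000 requests at 10 results
print(serper_cost_usd(1000, 100))   # $2 per 1,000 requests at 100 results
print(brightdata_cost_usd(1000))    # $1.50 per 1,000 requests
```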

AznHisoka · 14h ago
Sorry, got this off by a multiple. Yes, pricing is around that. So these “official” APIs are much more expensive.
tuyguntn · 17h ago
They will come down. Up until recently, consumers were not paying directly for searches; then, with LLMs having a knowledge cutoff in the past and hallucinating, search became a popular paid API.

Popularity will grow even more, hence competition will increase and prices will change eventually

AznHisoka · 16h ago
I don't think that will be true. What competition? Google, Bing, and... Kagi? (And only one of those has a far superior index/algo than the others)
benjamoon · 19h ago
Good that it has an "allowed domain" list, which makes it really usable. The OpenAI Responses API web search doesn't let you limit domains currently, so I can't make good use of it for client stuff.
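For reference, restricting domains looks roughly like this in a Messages API request. The tool type string (`web_search_20250305`) and the `allowed_domains` / `max_uses` fields follow the Anthropic docs linked elsewhere in this thread at the time of writing; the model name is illustrative, so check the current docs before relying on any of it:

```python
# Sketch of a Messages API request body enabling web search
# restricted to an allow-list of domains.

web_search_tool = {
    "type": "web_search_20250305",          # tool version string from the docs
    "name": "web_search",
    "max_uses": 3,                          # cap billed searches per request
    "allowed_domains": ["nih.gov", "nature.com"],  # only search these sites
}

request_body = {
    "model": "claude-3-7-sonnet-latest",    # illustrative model name
    "max_tokens": 1024,
    "tools": [web_search_tool],
    "messages": [
        {
            "role": "user",
            "content": "Summarize recent research on isotretinoin side effects.",
        }
    ],
}
```

The docs also describe a `blocked_domains` field; as I understand them, you supply an allow list or a block list, not both.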
omneity · 17h ago
Related: For those who want to build their own AI search for free and connect it to any model they want, I created a browser MCP that interfaces with major public search engines [0], a SERP MCP if you want, with support for multiple pages of results.

The rate limits of the upstream engines are fine for personal use, and the benefit is it uses the same browser you do, so results are customized to your search habits out-of-the-box (or you could use a blank browser profile).

0: https://herd.garden/trails/@omneity/serp

elisson22 · 5h ago
Regarding the costs, do we have a clear indication of how much it costs the company to perform these tasks from a power-consumption perspective? Or is it negligible?
simianwords · 10h ago
Can anyone answer this question: are they using a custom home-made web index? Or are they using the Bing/Google API?

Also, I'm quite sure that they don't use vector embeddings for web search; it's purely in text space. I think the same holds for all LLM web search tools. They all seem to work well -- maybe we don't need embeddings for RAG and grepping works well enough?
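The "purely text space" retrieval being described is essentially lexical scoring: count query-term occurrences, no embeddings involved. A toy sketch of that idea (my own illustration, not how any provider actually does it):

```python
import re
from collections import Counter

def lexical_score(query: str, doc: str) -> float:
    # Pure text-space relevance: query-term occurrences in the document,
    # normalized by document length. No vectors or embeddings involved.
    terms = re.findall(r"\w+", query.lower())
    words = re.findall(r"\w+", doc.lower())
    counts = Counter(words)
    return sum(counts[t] for t in terms) / max(len(words), 1)

docs = {
    "a": "anthropic launches web search api for claude",
    "b": "recipe for sourdough bread with step by step photos",
}
best = max(docs, key=lambda k: lexical_score("web search api", docs[k]))
```

Production systems layer far more on top (BM25 weighting, link signals, freshness), but the base signal really can be this grep-like.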

potlee · 17h ago
If you use your own search tool, you have to pay for input tokens again every time the model decides to search. This would be a big discount if they're only charging once for all output as output tokens, but it seems unclear from the blog post.
stephpang · 13h ago
Thanks for the feedback, just updated our docs to hopefully make this a little clearer. Search results count towards input tokens on every subsequent iteration

https://docs.anthropic.com/en/docs/build-with-claude/tool-us...

potlee · 12h ago
Thanks for addressing it. Still sounds like a significant discount if only the search results, and not all messages, count as input tokens on subsequent iterations!
simonw · 18h ago
I couldn't see anything in the documentation about whether or not it's allowed to permanently store the results coming back from search.

Presumably this is using Brave under the hood, same as Claude's search feature via the Anthropic apps?

minimaxir · 18h ago
Given the context/use of encrypted_index and encrypted_context, I suspect search results are temporarily cached.
simonw · 18h ago
Right, but are there any restrictions on what I can do with them?

Google Gemini has some: https://ai.google.dev/gemini-api/docs/grounding/search-sugge...

OpenAI has some rules too: https://platform.openai.com/docs/guides/tools-web-search#out...

> "When displaying web results or information contained in web results to end users, inline citations must be made clearly visible and clickable in your user interface."

I'm used to search APIs coming with BIG sets of rules on how you can use the results. I'd be surprised but happy if Anthropic didn't have any.

The Brave Search API is a great example of this: https://brave.com/search/api/

They have a special, much more expensive tier called "Data w/ storage rights" which is $45 CPM, compared to $5 CPM for the tier that doesn't include those storage rights.

istjohn · 16h ago
Google's restrictions are outlandish: "[You] will not modify, or intersperse any other content with, the Grounded Results or Search Suggestions..."
minimaxir · 16h ago
The API response actually contains the full HTML to include.
istjohn · 14h ago
It just goes counter to the way I think about LLMs. It assumes end-products will merely be thin wrappers around an API, perhaps with some custom prompts. It's like thinking of the internet as a faster telegraph, instead of understanding that it's an entirely new paradigm. The most interesting applications of AI will use search as just one ingredient, one input, that will be sliced, diced, and pureed as it is combined with half a dozen other sources of information.

When your intelligent email client uses Gemini to identify the sender of an email as someone in the industry your B2B company serves, deciding to flag the email as important, where is that HTML supposed to go? Where does it go in a product that generates slide show lesson plans? What if I'm using it to generate audio or video? What if a digital assistant uses Gemini as a tool a few dozen times early in a complex 10,000 step workflow that was kicked off by me asking it to create three proposals for family vacations, complete with three 5-minute video presentations on each option? What if my product is helping candidates write tailored cover letters?

It's bad optics for a company just ruled to have acted illegally to maintain a monopoly in "general search services and general text advertising," but worse, it lacks imagination.

simonw · 16h ago
I'm not quite sure how I should handle that in my CLI tool!
aaronscott · 18h ago
It would be nice if the search provider could be configured. I would like to use this with Kagi.
lemming · 18h ago
I would really love this too. However I think that the only solution for that is to give it a Kagi search tool, in combination with a web scraping tool, and a loop while it figures out whether it's got the information it needs to answer the question.
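The loop being described can be sketched as a skeleton like this. Everything here is hypothetical scaffolding: `call_model` stands in for an LLM API call that either requests a tool or returns a final answer, and the tool functions (e.g. a Kagi search wrapper, a page fetcher) are placeholders you'd supply:

```python
# Skeleton agent loop: hand the model a search tool and a scraping tool,
# run whichever it asks for, feed the result back, and repeat until it
# decides it has enough information to answer.

def run_agent(question: str, call_model, tools: dict, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        # call_model returns {"tool": name, "args": {...}} to request a tool,
        # or {"tool": None, "content": answer} when it's done.
        reply = call_model(messages)
        if reply.get("tool") is None:
            return reply["content"]  # model has its answer
        result = tools[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "name": reply["tool"], "content": result})
    return "gave up after max_steps"
```

The `tools` dict would map names like `"kagi_search"` and `"fetch_page"` to real implementations; the loop itself is provider-agnostic.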
lemming · 18h ago
I'm also interested to know if there are other limitations with this. Gemini, for example, has a built-in web search tool, but it can't be used in combination with other tools, which is a little annoying. o3/o4-mini can't use the search tool at all over the API, which is even more annoying.
metalrain · 11h ago
It's a good reminder that AI chats won't make web searches obsolete, just embed them deeper in the stack.

Maybe Google's search revenue moves from ads more towards B2B deals for search API use.

zhyder · 17h ago
Now all the big 3 LLM providers provide web search grounding in their APIs, but how do they compare in ranking quality of the retrieved web search results? Anyone run benchmarks here?

Clearly web search ranking is hard after decades of content spam that's been SEO optimized (and we get to look forward to increasing AI spam dominating the web in the future). The best LLM provider in the future could be the one with just the best web search ranking, just like what allowed Google to initially win in search.

RainbowcityKun · 12h ago
Right now, most LLMs with web search grounding are still in Stage 1: they can retrieve content, but their ability to assess quality, trustworthiness, and semantic ranking is still very limited.

The LLMs can access the web, but they can't yet understand it in a structured, evaluative way.

What’s missing is a layer of engineered relevance modeling, capable of filtering not just based on keywords or citations, but on deeper truth alignment and human utility.

And yes, as you mentioned, we may even see the rise of LLM-targeted SEO—content optimized not for human readers, but to game LLM attention and summarization heuristics. That's a whole new arms race.

The next leap won’t be about just accessing more data, but about curating and interpreting it meaningfully.

simianwords · 10h ago
>Right now, most LLMs with web search grounding are still in Stage 1: they can retrieve content, but their ability to assess quality, trustworthiness, and semantic ranking is still very limited.

Why do you think it is limited? Imagine you show a link with details to an LLM and ask it if it is trustworthy or high quality w.r.t the query, why can't it answer it?

lgiordano_notte · 4h ago
Don't think the limit is in what LLMs can evaluate - given the right context, they’re good at assessing quality. The problem is what actually gets retrieved and surfaced in the first place. If the upstream search doesn’t rank high-quality or relevant material well, LLM never sees it. It's not a judgment problem, more of a selection problem.
RainbowcityKun · 8h ago
What I mean is that more powerful engineering capabilities are needed to process search results before handing them to the LLM.
simianwords · 8h ago
Not sure I understand -- LLMs are pretty good at assessing the quality of search results. If an LLM can bulk-assess a bunch of results it can get pretty far, probably more efficiently than a human hand-checking all the results.