Chrome's New Embedding Model: Smaller, Faster, Same Quality

29 points | kaycebasques | 4 comments | 5/13/2025, 2:39:58 PM | dejan.ai ↗

Comments (4)

jbellis · 5h ago
TIL that Chrome ships an internal embedding model, interesting!

It's a shame that it's not open source; it's unlikely that there's anything super proprietary in an embedding model that's optimized to run on CPU.

(I'd use it if it were released; in the meantime, MiniLM-L6-v2 works reasonably well. https://brokk.ai/blog/brokk-under-the-hood)

vessenes · 5h ago
Agreed! On open source, though: can't you just pull the model and use the weights? I confess I have no idea what the licensing would be for an open-source-backed browser deploying weights, but it seems like it would be unproblematic unless you made a huge amount of money off it, and even then it could be just fine.
darepublic · 4h ago
> Yes – Chromium now ships a tiny on‑device sentence‑embedding model, but it’s strictly an internal feature.

> What it’s for: “History Embeddings.” Since ~M‑128 the browser can turn every page‑visit title/snippet and your search queries into dense vectors so it can do semantic history search and surface “answer” chips. The whole thing is gated behind two experiments:

^ response from chatgpt
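To make the mechanism the quote describes concrete, here is a minimal sketch of semantic history search: each page title is represented as a dense vector, and a query vector is matched against them by cosine similarity. The titles and vectors below are toy stand-ins invented for illustration; in Chrome the vectors would come from the on-device embedding model, not be hand-written.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy (title, embedding) pairs standing in for real model output.
history = [
    ("Best pizza recipes", [0.9, 0.1, 0.0]),
    ("Python asyncio tutorial", [0.1, 0.9, 0.2]),
    ("Flight deals to Rome", [0.7, 0.0, 0.5]),
]

def search(query_vec, k=2):
    """Return the k history titles whose vectors best match the query."""
    ranked = sorted(history, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [title for title, _ in ranked[:k]]

# A query vector near the "cooking" region of the toy space.
print(search([0.8, 0.2, 0.1]))  # → ['Best pizza recipes', 'Flight deals to Rome']
```

A real implementation would store the vectors in an index rather than re-ranking the whole history per query, but the scoring step is the same idea.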

pants2 · 4h ago
What does Chrome use embeddings for?