> Yes – Chromium now ships a tiny on‑device sentence‑embedding model, but it’s strictly an internal feature.
> What it's for
> "History Embeddings." Since ~M-128 the browser can turn every page-visit title/snippet and your search queries into dense vectors so it can do semantic history search and surface "answer" chips. The whole thing is gated behind two experiments:
^ response from chatgpt
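Chrome's actual model and pipeline aren't public, but the mechanism the quote describes is easy to sketch. Below is a minimal toy version of semantic history search, using the open all-MiniLM-L6-v2 model as a stand-in for Chrome's internal embedder; the titles and query are made up for illustration:

```python
# Toy sketch of "history embeddings": embed page titles once,
# embed each search query, rank visits by cosine similarity.
# Uses all-MiniLM-L6-v2 as a stand-in for Chrome's internal model.
# pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2", device="cpu")

# Hypothetical page-visit titles standing in for browser history.
history_titles = [
    "How to roast a chicken in a cast-iron pan",
    "Rust ownership and borrowing explained",
    "Cheap flights to Lisbon in October",
]

# normalize_embeddings=True makes a dot product equal cosine similarity.
history_vecs = model.encode(history_titles, normalize_embeddings=True)

def search_history(query: str, top_k: int = 2) -> list[tuple[str, float]]:
    """Rank stored titles by cosine similarity to the query embedding."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = history_vecs @ q
    order = np.argsort(-scores)[:top_k]
    return [(history_titles[i], float(scores[i])) for i in order]

# "cooking poultry" shares no keywords with the chicken title,
# yet ranks it first, which is the point of semantic search.
print(search_history("cooking poultry"))
```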
jbellis · 3h ago
TIL that Chrome ships an internal embedding model, interesting!
It's a shame that it's not open source; there's unlikely to be anything super proprietary in an embedding model that's optimized to run on CPU.
Agreed! On open source, though - can't you just pull the model and use the weights? I confess I have no idea what the licensing would be for an open-source-backed browser deploying the weights, but it seems like unless you made a huge amount of money off it, it would be unproblematic, and even then it could be fine.
> It's a shame that it's not open source; there's unlikely to be anything super proprietary in an embedding model that's optimized to run on CPU.
(I'd use it if it were released; in the meantime, MiniLM-L6-v2 works reasonably well. https://brokk.ai/blog/brokk-under-the-hood)
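For anyone reaching for the same fallback, here's a minimal sketch of running MiniLM-L6-v2 on CPU with the sentence-transformers library; the two sentences are illustrative only:

```python
from sentence_transformers import SentenceTransformer, util

# all-MiniLM-L6-v2 is a 6-layer, 384-dimensional model, small enough
# for comfortable CPU-only inference.
model = SentenceTransformer("all-MiniLM-L6-v2", device="cpu")

emb = model.encode(
    ["semantic search over browser history",   # illustrative sentences
     "find pages I visited about a topic"],
    normalize_embeddings=True,
)
print(float(util.cos_sim(emb[0], emb[1])))  # cosine similarity in [-1, 1]
```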