LLM Embeddings Explained: A Visual and Intuitive Guide
196 points by eric-burel | 28 comments | 7/28/2025, 7:02:14 AM | huggingface.co
(However, there seems to be some serious back-button / browser-history hijacking on this page. Just scrolling down the page appends a ton of entries to my browser history, which is lame.)
So someone, at some point, thought this was a feature
Encoders like BERT produce better results for embeddings because they look at the whole sentence, while GPTs look from left to right:
Imagine you're trying to understand the meaning of a word in a sentence, and you can read the entire sentence before deciding what that word means. For example, in "The bank was steep and muddy," you can see "steep and muddy" at the end, which tells you "bank" means the side of a river (aka riverbank), not a financial institution. BERT works this way - it looks at all the words around a target word (both before and after) to understand its meaning.
Now imagine you have to understand each word as you read from left to right, but you're not allowed to peek ahead. So when you encounter "The bank was..." you have to decide what "bank" means based only on "The" - you can't see the helpful clues that come later. GPT models work this way because they're designed to generate text one word at a time, predicting what comes next based only on what they've seen so far.
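To make the "bank" example concrete, here is a minimal Python sketch (my own, assuming the Hugging Face transformers library and the bert-base-uncased checkpoint, neither of which the article prescribes) that pulls the contextual vector for "bank" out of two sentences and compares them:

  # Contextual embeddings for "bank" in two different sentences.
  import torch
  from transformers import AutoTokenizer, AutoModel

  tok = AutoTokenizer.from_pretrained("bert-base-uncased")
  model = AutoModel.from_pretrained("bert-base-uncased")

  def bank_vector(sentence):
      inputs = tok(sentence, return_tensors="pt")
      with torch.no_grad():
          hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
      # locate the token "bank" and return its contextual vector
      idx = inputs["input_ids"][0].tolist().index(tok.convert_tokens_to_ids("bank"))
      return hidden[idx]

  river = bank_vector("The bank was steep and muddy.")
  money = bank_vector("The bank raised its interest rates.")
  print(torch.cosine_similarity(river, money, dim=0))  # noticeably below 1.0

The two vectors differ because BERT mixes the surrounding words into each token's representation; a left-to-right model deciding at "The bank ..." has not seen that context yet.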
Here is another link, also from Hugging Face, about ModernBERT, which has more info: https://huggingface.co/blog/modernbert
Also worth a look: neoBERT https://huggingface.co/papers/2502.19587
Encoder-decoders are not in vogue.
Encoders are favored for classification, extraction (eg, NER and extractive QA) and information retrieval.
Decoders are favored for text generation, summarization and translation.
Recent research (see, eg, the Ettin paper: https://arxiv.org/html/2507.11412v1 ) seems to confirm the previous understanding that encoders are indeed better for “encoder tasks” and vice versa.
Fundamentally, both are transformers and so an encoder could be turned into a decoder or a decoder could be turned into an encoder.
The design difference comes down to bidirectional (ie, all tokens can attend to all other tokens) versus autoregressive attention (ie, the current token can only attend to the previous tokens).
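As a rough sketch of that difference (plain PyTorch, not any particular model's code), the decoder case is just the encoder case with a lower-triangular mask applied before the softmax:

  import torch

  seq_len = 5
  scores = torch.randn(seq_len, seq_len)  # raw query-key attention scores

  # Encoder (bidirectional): every token attends to every other token.
  encoder_attn = torch.softmax(scores, dim=-1)

  # Decoder (autoregressive): token i may only attend to tokens 0..i,
  # enforced by masking out future positions before the softmax.
  causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
  decoder_attn = torch.softmax(scores.masked_fill(~causal, float("-inf")), dim=-1)

  print(encoder_attn[0])  # weights spread across the whole sequence
  print(decoder_attn[0])  # for the first token, only position 0 is non-zero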
https://app.vidyaarthi.ai/ai-tutor?session_id=C2Wr46JFIqslX7...
Our goal is to make abstract concepts more intuitive and interactive — kind of like a "learning-by-doing" approach. Would love feedback from folks here.
(Not trying to self-promote — just sharing a related learning tool we’ve put a lot of thought into.)
Lots of console errors with the likes of "Content-Security-Policy: The page’s settings blocked an inline style (style-src-elem) from being applied because it violates the following directive: “style-src 'self'”." etc...
https://news.ycombinator.com/newsguidelines.html
>If your ears are more important than your eyes, you can listen to the podcast version of this article generated by NotebookLM.
It looks like an LLM would read it to you; I wonder if one could have made it mobile-friendly.
If I understand this correctly, there are three major problems with LLMs right now.
1. LLMs reduce a very high-dimensional vector space into a very low-dimensional vector space. Since we don't know what the dimensions in the low-dimensional vector space mean, we can only check that the outputs are correct most of the time. (See the sketch after this list.)
What research is happening to resolve this?
2. LLMs use written texts to facilitate this reduction. So they don't learn from reality, but from what humans have written down about reality.
It seems like Keen Technologies tries to avoid this issue by using (simple) robots with sensors for training, instead of human text. That seems a much slower process, but it could yield more accurate models in the long run.
3. LLMs hold internal state as a vector that reflects the meaning and context of the "conversation". That explains why the quality of responses deteriorates in longer conversations: if one vector is "stamped over" again and again, the meaning of the first "stamps" gets blurred.
Are there alternative ways of holding state, or is the only way around this to back up that state vector at every point and revert if things go awry?
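On point 1, a tiny sketch of the sizes involved (PyTorch, with illustrative numbers I picked, not from the article): each token is one of roughly 50k vocabulary entries, and the model maps it to a dense vector with a few thousand unlabelled dimensions, which is where the interpretability question comes from.

  import torch

  vocab_size, hidden_dim = 50_000, 4_096   # illustrative sizes
  embedding = torch.nn.Embedding(vocab_size, hidden_dim)

  token_id = torch.tensor([1234])          # one token out of ~50k
  vec = embedding(token_id)                # its dense representation
  print(vec.shape)                         # torch.Size([1, 4096])
  # None of the 4096 axes carries a human-readable label; interpretability
  # research tries to recover meaning from them after the fact.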
(1) While studying the properties of the mathematical objects produced is important, I don't think we should understand the situation you describe as a problem to be solved. In old supervised machine learning methods, human beings were tasked with defining the rather crude 'features' of relevance in a data/object domain, so each dimension had some intuitive significance (often binary 'is tall', 'is blue' etc). The question now is really about learning the objective geometry of meaning, so the dimensions of the resultant vector don't exactly have to be 'meaningful' in the same way -- and, counter-intuitive as it may seem, this is progress. Now the question is of the necessary dimensionality of the mathematical space in which semantic relations can be preserved -- and meaning /is/ in some fundamental sense the resultant geometry.
(2) This is where the 'Platonic hypothesis' research [1] is so fascinating: empirically we have found that the learned structures from text and image converge. This isn't saying we don't need images and sensor robots, but it appears we get the best results when training across modalities (language and image, for example). This is really fascinating for how we understand language. While any particular text might get things wrong, the language that human beings have developed over however many thousands of years really does seem to do a good job of breaking out the relevant possible 'features' of experience. The convergence of models trained from language and image suggests a certain convergence between what is learnable from sensory experience of the world and the relations that human beings have slowly come to know through the relations between words.
[1] https://phillipi.github.io/prh/ and https://arxiv.org/pdf/2405.07987
Tokens are a form of compression, and working on an uncompressed representation would require more memory and more processing power.
First, the embedding typically uses thousands of dimensions.
Then, the value along each dimension is represented with a floating-point number, which typically takes 16 bits (it can be smaller with more aggressive quantization).
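Back-of-the-envelope, under those assumptions (the exact numbers are illustrative):

  dims = 4_096                   # "thousands of dimensions"
  bytes_per_value = 2            # 16-bit floats
  print(dims * bytes_per_value)  # 8192 bytes, ~8 KB for a single token's vector

  # The token id itself is just a small integer index into the vocabulary,
  # which is the sense in which tokens are the "compressed" form.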
My point was that you compared how the LLM represents a token internally versus how “English” transmits a word. That’s a category error.