From Multi-Head to Latent Attention: The Evolution of Attention Mechanisms

131 points by mgninad | 8/30/2025, 5:45:24 AM | vinithavn.medium.com

Comments (35)

attogram · 11h ago
"Attention Is All You Need" - I've always wondered if the authors of that paper used such a casual and catchy title because they knew it would be groundbreaking and massively cited in the future....
cgearhart · 4h ago
The title is a succinct snippet that spoke directly to researchers at the time. The transformer architecture is somewhat obvious (especially in retrospect), but it was still very surprising because no one was really going this direction. They were going many other directions… That’s the point of the title: you don’t need all kinds of complicated systems for NLP to work—“attention is all you need”.

After the success of transfer learning for computer vision in the mid-2010s, it was obvious that NLP needed its own transfer learning approach and AlexNet moment.

Lots of research focus around that time was on recurrent models—because that was the conventional wisdom about how you model sequences. Markov chains had led to vanilla RNNs, LSTMs, GRU, etc., which all seemed tantalizingly promising. (MAMBA fans take note.) Attention mechanisms were even used in recurrent models…but so was everything else.

Then came transformers—mixing all the then-best-practice bits with the heretical idea of just not giving a shit about O(n^2) complexity. The vanilla transformer used an encoder-decoder structure like the best translation models had been doing; it used a stack of identical blocks to nudge the output along through the pipe like ResNet; it was trained end-to-end on a large parallel corpus. But then it jettisoned all the other complexity and just let it all rest on the attention mechanism to capture long range dependencies.
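(The mechanism being leaned on here is scaled dot-product attention. A minimal NumPy sketch, with illustrative names and toy shapes—the O(n^2) cost is the `Q @ K.T` score matrix:)

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q: (n_q, d), K: (n_k, d), V: (n_k, d_v)
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # O(n^2) pairwise similarities
    weights = softmax(scores, axis=-1)  # each query attends over all keys
    return weights @ V                  # weighted mix of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions
K = rng.normal(size=(6, 8))   # 6 key/value positions
V = rng.normal(size=(6, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```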

It was immediately thrilling, but it was also completely impractical. (I think the largest model had a 500ish token context limit and required bigger-than-hobbyist GPUs.) So it mostly sat on a shelf while people used other “good enough” models for a few years until the hardware got better and a couple of folks proved that it could actually work to run these things at massive scale.

And now here we are.

I think they knew what they were saying at the time, but I don’t think they knew that it would remain true for years.

danieldk · 4h ago
> Lots of research focus around that time was on recurrent models—because that was the conventional wisdom about how you model sequences. Markov chains had led to vanilla RNNs, LSTMs, GRU, etc., which all seemed tantalizingly promising. (MAMBA fans take note.) Attention mechanisms were even used in recurrent models…but so was everything else.

I feel like there is a step missing here...

People were using RNN encoders/decoders for machine translation - the encoder was used to make a representation (fixed-size vector) of the source language sentence, the decoder generated the target language sentence from the source representation.

The issue that people were bumping into is that the fixed-sized vector bottlenecked the encoder/decoder architecture. Representing a variable-length source sentence as a fixed-size vector leads to a loss of information that increases with the source sentence length.

People started adding attention to the decoder as a way to work around this issue. Each decoder step could attend to every token (well, RNN hidden representation) of the source sentence. So, this led to the RNN + attention architecture.
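(A minimal sketch of that decoder-side attention, in the additive/Bahdanau style. All parameter names and shapes here are illustrative: at each decoder step, score every encoder hidden state against the current decoder state, softmax, and take a weighted sum as the "context"—no fixed-size bottleneck for the whole sentence.)

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def additive_attention(decoder_state, encoder_states, Wa, Ua, va):
    # Score each source hidden state against the current decoder state,
    # then take a weighted sum of the source states (the "context").
    scores = np.array([
        va @ np.tanh(Wa @ decoder_state + Ua @ h) for h in encoder_states
    ])
    weights = softmax(scores)             # one weight per source token
    context = weights @ encoder_states    # recomputed fresh at every decoder step
    return context, weights

d = 8
rng = np.random.default_rng(1)
enc = rng.normal(size=(5, d))  # hidden states for a 5-token source sentence
dec = rng.normal(size=d)       # current decoder hidden state
Wa, Ua = rng.normal(size=(d, d)), rng.normal(size=(d, d))
va = rng.normal(size=d)
context, weights = additive_attention(dec, enc, Wa, Ua, va)
print(context.shape)  # (8,)
```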

The title 'Attention is all you need' comes from the realization that in this architecture the RNN is not needed, neither for the encoder nor the decoder. It's a message to the field, which was using RNNs + attention (to avoid the bottleneck). Of course, the rest was born from that: encoder-only transformer models like BERT and decoder-only models like current LLMs.

cgearhart · 3h ago
This is a fair point and clarification. :-)
sivm · 8h ago
Attention is all you need for what we have. But attention is a local heuristic. We have brittle coherence and no global state. I believe we need a paradigm shift in architecture to move forward.
ACCount37 · 6h ago
Plenty of "we need a paradigm shift in architecture" going around - and no actual architecture that would beat transformers at their strengths as far as the eye can see.

I remain highly skeptical. I doubt that transformers are the best architecture possible, but they set a high bar. And it sure seems like people who keep making the suggestion that "transformers aren't the future" aren't good enough to actually clear that bar.

airstrike · 5h ago
That logic does not hold.

Being able to provide an immediate replacement is not a requirement to point out limitations in current technology.

ACCount37 · 4h ago
What's the value of "pointing out limitations" if this completely fails to drive any improvements?

If any midwit can say "X is deeply flawed" but no one can put together a Y that would beat X, then clearly, pointing out the flaws was never the bottleneck at all.

airstrike · 1h ago
I think you don't understand how primary research works. Pointing out flaws helps others think about those flaws.

It's not a linear process so I'm not sure the "bottleneck" analogy holds here.

We're not limited to only talking about "the bottleneck". I think the argument is more that we're very close to optimal results for the current approach/architecture, so getting superior outcomes from AI will actually require meaningfully different approaches.

scragz · 2h ago
whatever happened to Google's Titans?
radarsat1 · 5h ago
To be fair it would be a lot easier to iterate on ideas if a single experiment didn't cost thousands of dollars and require such massive data. Things have really gotten to the point that it's just not easy for outsiders to contribute if you're not part of a big company or university, and even then you have to justify the expenditure (risk). Paradigm shifts are hard to come by when there is so much momentum in one direction and trying something different carries significant barriers.
treyd · 7h ago
Has there been research into some hierarchical attention model that has local attention at the scale of sentences and paragraphs that feeds embeddings up to longer range attention across documents?
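(One way to picture what's being asked—a purely illustrative two-level sketch, not any particular published architecture: local self-attention within each sentence, pool each sentence to one embedding, then attend across the sentence embeddings for longer range. The cost becomes O(sum of squared sentence lengths + num_sentences^2) instead of O(total_tokens^2).)

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attend(X):
    # Plain (unparameterized) scaled dot-product self-attention.
    d = X.shape[-1]
    return softmax(X @ X.T / np.sqrt(d)) @ X

def hierarchical_attention(sentences):
    # Level 1: local attention within each sentence, mean-pooled
    # down to a single embedding per sentence.
    sent_embs = np.stack([self_attend(s).mean(axis=0) for s in sentences])
    # Level 2: long-range attention across sentence embeddings.
    return self_attend(sent_embs)

rng = np.random.default_rng(2)
doc = [rng.normal(size=(n, 16)) for n in (7, 4, 9)]  # 3 sentences of token vectors
out = hierarchical_attention(doc)
print(out.shape)  # (3, 16)
```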
mxkopy · 6h ago
There’s the hierarchical reasoning model https://arxiv.org/abs/2506.21734 but it’s very new and largely untested

Though honestly I don’t think new neural network architectures are going to get us over this local maximum; I think the next steps forward involve something that’s

1. Non lossy

2. Readily interpretable

miven · 6h ago
The ARC Prize Foundation ran extensive ablations on HRM for their slew of reasoning tasks and noted that the "hierarchical" part of their architecture is not much more impactful than a vanilla transformer of the same size with no extra hyperparameter tuning:

https://arcprize.org/blog/hrm-analysis#analyzing-hrms-contri...

ACCount37 · 5h ago
By now, I seriously doubt any "readily interpretable" claims.

Nothing about the human brain is "readily interpretable", and artificial neural networks - which, unlike brains, can be instrumented and experimented on easily - tend to resist interpretation nonetheless.

If there were an easy way to reduce ML to "readily interpretable" representations, someone would have done so already. If there were architectures that performed similarly but were orders of magnitude more interpretable, they would be used, because interpretability is desirable. Instead, we get what we get.

Mallowram · 1h ago
Yes, it's discarding attention in favor of removing the already false state of attention embedded in every arbitrary unit (word, image, sentence). Attention itself is a false reduction. Attention on attention is an invitation to total illusions of events.
ruuda · 5h ago
What about the converse: the paper became so massively influential because of the catchy title? Of course the contents are groundbreaking, but that alone is not enough. A groundbreaking paper that nobody knows about cannot have any impact. Even in research, there is a marketing side to it.
soulofmischief · 4h ago
The paper became massively influential because of its contents, not its catchy title. Scientists do not generally read a paper because of its title; they check the abstract and go from there.
eldenring · 4h ago
Huh? Of course it's enough. Transformers immediately started destroying every single baseline out there. The authors definitely knew it was a very significant discovery beforehand.
Mallowram · 2h ago
Attention is a false reduction, LLMs are using arbitrary units like words, images and sentences that come embedded with intentional states that are post hoc retrofits to real events. Attention has already been falsely coded by the arbitrary representation in a word or image. It's done the false coding, it comes embedded with the mistaken bias. How come coders never deciphered this very obvious illusory artifact in each word?

This is like saying we're going to weave carpets from already woven carpets. Attention wasn't the core or key to the problem, it was finding out a way to disentangle the embedded intents and discover what's really happening in the arbitrary.

The results might work as a magic trick for a bit, but they unravel eventually. The whole shebang is a magic act posing as inference, learning, intelligence, reasoning. None of these things are really happening, because they don't go to the source or address the actual behavior; it's a short-cut hack.

adastra22 · 10h ago
Definitely. I always assumed that, having been involved in writing similarly groundbreaking papers… or so we thought at the time. All my coauthors spent significant time thinking about what the best title would be, and strategies like that were common. (It ended up not mattering for us.)
lucidrains · 5h ago
It is a reference to the Beatles song, mainly because Noam Shazeer is a music lover.
iLoveOncall · 8h ago
I recommend reading this article which explains how you can get your papers accepted, and explains that a catchy title is the #1 most important thing: https://maxwellforbes.com/posts/how-to-get-a-paper-accepted/ (not a plug, I just saved it because it was interesting)
hyperbovine · 8h ago
It sounds like a typical NeurIPS paper to me. And no, they did not know what a big deal it would be, else Google never would have given the idea away.
JSR_FDED · 11h ago
Any way to read this without making an account?
qcnguy · 11h ago
Just click the x at the top right of the interstitial?
iLoveOncall · 8h ago
That only works for a few articles per month. But usually opening in incognito does the trick.
djoldman · 7h ago
just turn off JS.
Mallowram · 2h ago
Attention is not fundamental to survival or cognition. It's developing action-routines in neural-syntax from noticing subtle shifts, differences. Imagination is in the unconscious wordless uploading of sense, memory, emotion into a blackboard where time is excluded and spatial is played. These post-hoc computations of attention are inherently biased to enforced noticing. Notice how passive the data-stimulus is, as if it's just there to be paid attention to.

https://pubmed.ncbi.nlm.nih.gov/31489566/

mrtesthah · 12h ago
Do we know if any of these techniques are actually used in the so-called "frontier" models?
zackangelo · 1h ago
Not quite a frontier model but definitely built by a frontier lab: Grok 2 was recently open sourced and I believe it uses a fairly standard MHA architecture with MoE.
gchadwick · 9h ago
Who knows what the closed-source models use, but certainly going by what's happening in open models, all the big changes and corresponding gains in capability are in training techniques, not model architecture. Things like GQA and MLA, as discussed in this article, are important techniques for getting better scaling but are relatively minor tweaks vs the evolution in training techniques.

I suspect closed models aren't doing anything too radically different from what's presented here.
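(For reference, the KV-cache saving behind GQA is easy to see in a toy sketch: several query heads share each key/value head, so the cache shrinks by the group size versus standard multi-head attention. Shapes and names below are illustrative, not any particular model's:)

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def grouped_query_attention(Q, K, V, group_size):
    # Q: (n_q_heads, seq, d); K, V: (n_kv_heads, seq, d)
    # with n_q_heads == n_kv_heads * group_size. Only the n_kv_heads
    # K/V tensors need to be cached during generation.
    outs = []
    for h in range(Q.shape[0]):
        kv = h // group_size  # consecutive query heads share a KV head
        scores = Q[h] @ K[kv].T / np.sqrt(Q.shape[-1])
        outs.append(softmax(scores) @ V[kv])
    return np.stack(outs)

rng = np.random.default_rng(3)
Q = rng.normal(size=(8, 5, 16))  # 8 query heads over 5 positions
K = rng.normal(size=(2, 5, 16))  # only 2 KV heads to cache
V = rng.normal(size=(2, 5, 16))
out = grouped_query_attention(Q, K, V, group_size=4)
print(out.shape)  # (8, 5, 16)
```

With `group_size=1` this reduces to standard MHA; with a single KV head it reduces to multi-query attention.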

vinithavn01 · 12h ago
The model names are mentioned under each type of attention mechanism