Native Sparse Attention

86 points by CalmStorm | 10 comments | 8/1/2025, 7:48:06 PM | aclanthology.org
Was submitted as "DeepSeek won the best paper award at ACL 2025"

Here is the awards page: https://cspaper.org/topic/116/record-breaking-acl-2025-crown...

Comments (10)

noosphr · 1h ago
Deep seek papers are a must to read for anyone who wants to understand how to make LLMs operate at hyper scale. All western labs hide their best results, or at most release summaries that are about as meaningful as the answers Cleo used to give on stack exchange: https://math.stackexchange.com/questions/562694/integral-int...

I have a suspicion, given how quiet all the major players went in the two weeks after DeepSeek R1 was released, that they were reading and implementing everything in the accompanying papers as fast as humanly possible.

Art9681 · 8m ago
None of the major players have ever been quiet. DeepSeek enjoyed about a week or two's worth of press before its spotlight was stolen by the next great model. It never held the top spot, ever, mind you. So I don't understand why you think the major players had to say anything about it, when the model was neither first, second, nor third in real-world capability, and why they would have to say anything about it when the DeepSeek service processes maybe 1/8 of what OpenAI, Google, or Claude handles in any given span of time.

I applaud their open efforts. But being "altruistic" and being best are two different things.

sabakhoj · 3h ago
> Despite being sparse, NSA surpasses Full Attention baseline on average across general benchmarks, long-context tasks, and reasoning evaluation.

Isn't it very notable that the latency improvement didn't come with a performance loss? I'm not super familiar with all the technical aspects, but that seems like it should be one of the main focuses of the paper.

pyuser583 · 2h ago
I'd say award for best title is a tie between: "Dehumanizing Machines: Mitigating Anthropomorphic Behaviors in Text Generation Systems"; "Finding Needles in Images: Can Multi-modal LLMs Locate Fine Details?"; and "Steering off Course: Reliability Challenges in Steering Language Models."
CalmStorm · 7h ago
For the first time, it introduced native sparse attention into the full training process, achieving up to 11× inference speedup while maintaining model performance.
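
For intuition, here is a minimal sketch of the block-selection idea behind sparse attention; the block size, mean-pooled block scores, top-k choice, and function name are illustrative assumptions, not the paper's actual kernel or training setup:

```python
import torch
import torch.nn.functional as F

def blockwise_topk_attention(q, k, v, block_size=64, top_k=4):
    """Toy block-sparse attention: each query attends only to its
    top_k key/value blocks, ranked by a coarse block-level score.
    q, k, v: (seq_len, d) tensors. Illustrative sketch only."""
    seq_len, d = k.shape
    n_blocks = (seq_len + block_size - 1) // block_size

    # Coarse summary of each key block via mean pooling (after padding
    # the sequence up to a whole number of blocks).
    pad = n_blocks * block_size - seq_len
    k_blocks = F.pad(k, (0, 0, 0, pad)).view(n_blocks, block_size, d).mean(dim=1)

    # Score blocks per query and keep only the top_k of them.
    block_scores = q @ k_blocks.T                                  # (seq_len, n_blocks)
    chosen = block_scores.topk(min(top_k, n_blocks), dim=-1).indices

    out = torch.zeros_like(q)
    for i in range(seq_len):
        # Token indices covered by the selected blocks for query i.
        idx = torch.cat([
            torch.arange(b * block_size, min((b + 1) * block_size, seq_len))
            for b in chosen[i].tolist()
        ])
        attn = F.softmax((q[i] @ k[idx].T) / d ** 0.5, dim=-1)
        out[i] = attn @ v[idx]
    return out

# Example: 1,024 tokens, 64-dim heads; each query sees at most 4 * 64 = 256 keys.
q = torch.randn(1024, 64)
k = torch.randn(1024, 64)
v = torch.randn(1024, 64)
print(blockwise_topk_attention(q, k, v).shape)  # torch.Size([1024, 64])
```

Because each query only ever touches top_k * block_size keys instead of the full sequence, the attention cost stops growing with context length; the paper's contribution (per its title and the comment above) is making a selection scheme of this kind natively trainable end to end and aligned with real hardware.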
israrkhan · 1h ago
Well deserved
gnabgib · 3h ago
Title: Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention

The awards page for ACL seems to disagree with this editorialized title: https://2025.aclweb.org/program/awards/

fourdnet · 2h ago
The ACL webpage has not been updated yet. Here are the announcement slides: https://cspaper.org/topic/116/record-breaking-acl-2025-crown...
aspenmayer · 1h ago
The page that the person you're replying to linked does have this, so it may not be updated, or they were looking in the wrong place originally, or both:

> Industry Track Awards

> Best Paper

> Speed Without Sacrifice: Fine-Tuning Language Models with Medusa and Knowledge Distillation in Travel Applications

> Daniel Zagyva, Emmanouil Stergiadis, Laurens van der Maas, Aleksandra Dokic, Eran Fainman, Ilya Gusev, Moran Beladev

Per TFA, the paper we’re looking for is this one:

> Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention

> Jingyang Yuan, Huazuo Gao, Damai Dai, Junyu Luo, Liang Zhao, Zhengyan Zhang, Zhenda Xie, Y. X. Wei, Lean Wang, Zhiping Xiao, Yuqing Wang, Chong Ruan, Ming Zhang, Wenfeng Liang, Wangding Zeng

I’m not finding it by author on the page you linked, but I think it’s this reference by title:

> DeepSeek × PKU × UW — Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention

I did find it on this page:

https://2025.aclweb.org/program/main_papers/

ninjin · 1h ago
Link to the published paper rather than the preprint (update link?):

https://aclanthology.org/2025.acl-long.1126