Context Rot: How increasing input tokens impacts LLM performance
86 kellyhongsn 14 7/14/2025, 7:25:15 PM research.trychroma.com
I work on research at Chroma, and I just published our latest technical report on context rot.
TLDR: Model performance is non-uniform across context lengths, even for state-of-the-art models like GPT-4.1, Claude 4, Gemini 2.5, and Qwen3.
This highlights the need for context engineering. Whether relevant information is present in a model’s context is not all that matters; what matters more is how that information is presented.
Here is the complete open-source codebase to replicate our results: https://github.com/chroma-core/context-rot
Especially with Gemini Pro and long-form textual references, putting many documents into a single context window gives worse answers than having it summarize the documents first, asking the question against the summaries only, and then providing the full text of the relevant sub-documents on request (RAG style, or just a simple agent loop).
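A rough sketch of that summarize-first, fetch-on-demand flow, assuming a generic call_llm helper (the function, prompts, and document names are placeholders, not any particular SDK):

    # Sketch of the summarize-first, fetch-on-demand flow described above.
    # `call_llm` is a stand-in for whatever chat client you use (Gemini, Claude, ...).

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your model client here")

    def answer_with_summaries(question: str, docs: dict[str, str]) -> str:
        # 1. Summarize each document in its own call, so no single call sees everything.
        summaries = {name: call_llm(f"Summarize this document:\n\n{text}")
                     for name, text in docs.items()}

        # 2. Ask which documents are needed, given only the question and the summaries.
        catalog = "\n".join(f"- {name}: {s}" for name, s in summaries.items())
        needed = call_llm(
            f"Question: {question}\n\nDocument summaries:\n{catalog}\n\n"
            "Reply with the comma-separated names of the documents you need."
        )

        # 3. Provide the full text of only the requested sub-documents.
        selected = [docs[n.strip()] for n in needed.split(",") if n.strip() in docs]
        return call_llm(
            f"Question: {question}\n\nRelevant documents:\n\n" + "\n\n---\n\n".join(selected)
        )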
Similarly, I've personally noticed that Claude Code with Opus or Sonnet gets worse the more compactions happen. It's unclear to me whether the summary itself gets worse, or whether the context window ends up with a higher proportion of less relevant data, but even clearing the context and asking it to re-read the relevant files (even if they were mentioned and summarized in the compaction) gives better results.
Long story short: Context engineering is still king, RAG is not dead
LLMs will need RAG one way or another; you can hide it from the user, but it still has to be there.
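For what it's worth, the retrieve-then-answer loop can be tiny. A minimal sketch with the chromadb client (API names as in recent chromadb releases, so check your installed version; the documents, collection name, and call_llm stub are made up for illustration):

    # Minimal retrieve-then-answer sketch with Chroma as the vector store.
    import chromadb

    client = chromadb.Client()  # in-memory client; PersistentClient(path=...) to persist
    collection = client.create_collection(name="docs")

    collection.add(
        ids=["doc1", "doc2", "doc3"],
        documents=[
            "The deploy key is rotated every 30 days.",
            "Our CI runs on every push to main.",
            "Retros happen on the first Monday of the month.",
        ],
    )

    question = "How often is the deploy key rotated?"
    hits = collection.query(query_texts=[question], n_results=2)
    context = "\n".join(hits["documents"][0])  # top-k chunks only, not the whole corpus

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your model client here")

    answer = call_llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}")

The point is that the model only ever sees the top-k retrieved chunks plus the question, not the entire corpus, which is exactly the kind of context discipline the report argues for.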
That's 99% of coders. No need to gatekeep.
It's actually even more significant than it's possible to benchmark easily (though I'm glad this paper has done so).
Truly useful LLM applications live at the boundaries of what the model can do: attending to some aspect of the context that might be several logical "hops" away from the actual question or task.
I suspect the context rot problem gets much worse for these more complex tasks... in fact, exponentially worse with each logical "hop" required to answer successfully, since each hop compounds the "attention difficulty" that long, distracting contexts add.
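Back-of-envelope illustration of that compounding (numbers invented): if each hop succeeds independently with probability p, a k-hop task succeeds with probability p^k, so dropping per-hop reliability from 0.95 to 0.80 takes a three-hop task from roughly 0.86 to 0.51.

    # If each logical hop succeeds independently with probability p,
    # a k-hop chain succeeds with probability p**k (illustrative numbers only).
    for p in (0.95, 0.80):      # per-hop reliability: clean vs long/distracting context
        for k in (1, 2, 3):     # number of logical hops the task requires
            print(f"p={p:.2f}, hops={k}: success ~ {p**k:.2f}")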
I've noticed this issue as well with smaller local models that have relatively long contexts, say an 8B model with 128k context.
I'd imagined they did special recall training for these long-context models, but the results seem... not so great.
Media literacy disclaimer: Chroma is a vectorDB company.