Top model scores may be skewed by Git history leaks in SWE-bench

262 points by mustaphah | 82 comments | 9/11/2025, 6:32:23 PM | github.com

Comments (82)

ofirpress · 3h ago
[I'm on the SWE-bench team] Multiple people have looked into this, for example right in that thread: https://github.com/SWE-bench/SWE-bench/issues/465#issuecomme...

This issue affected a tiny fraction of existing agents in a tiny fraction of their runs. And we've now issued a fix.

This is a natural part of running a benchmark; I'm sure tiny things like this will keep getting discovered, and we'll keep fixing them. This doesn't change the overall picture or trends at all.

comex · 2h ago
The comment you link to says that "we only performed a quick preliminary search" and "We do not have a method for automatically checking existing trajectories." In other words, it can't confirm that the issue only "affected a tiny fraction of existing agents in a tiny fraction of their runs" as you say. Are you saying that you have since separately confirmed this?

Edit: That said, I’m willing to believe based on the information in the thread that this most likely only affects a tiny fraction of runs.
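
(For what it's worth, a first-pass automated check seems feasible: scan each trajectory for git commands that reach beyond the checked-out state. A minimal sketch in Python, assuming trajectories are stored as plain-text command logs; the real trajectory format may differ:)

```python
import re
import sys
from pathlib import Path

# Heuristic: flag shell commands that reach into git history beyond the
# checked-out state (all refs, reflogs, branch/tag listings).
SUSPICIOUS = re.compile(
    r"git\s+(log|show|diff|rev-list)\b[^\n]*(--all|--reflog)"
    r"|git\s+reflog\b"
    r"|git\s+(branch|tag)\s+(-a\b|--all|--list)"
)

# Usage: python scan_trajectories.py <directory of trajectory logs>
for path in Path(sys.argv[1]).rglob("*.log"):
    hits = [line for line in path.read_text(errors="ignore").splitlines()
            if SUSPICIOUS.search(line)]
    if hits:
        print(f"{path}: {len(hits)} suspicious git invocation(s)")
```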

typpilol · 2h ago
Ya what he links directly contradicts what he's saying lol

bflesch · 1h ago
> This is a natural part of running a benchmark, I'm sure tiny things like this will keep on getting discovered and we'll keep on fixing them.

You're all extremely clever, so I can't understand how such a simple edge case was missed. It's like building a chroot and then allowing `cd ..` to break out of it. What other extremely basic edge cases might have been missed?

> This doesn't change the overall picture or trends at all.

Outsiders without financial benefits from the current AI hype might have a different picture. And I'm a bit fed up with AI's fake productivity promises enshittifying nearly all user-facing software that my clients and I use, bundled with hefty price hikes from Microsoft and the like in order to pay for their "investments".

cjsaltlake · 1h ago
I'm also on the SWE-bench team. This was simply a classic bug. We had code that we believed was sufficient to hide/remove future Git history, and it turns out it was not. We've patched it.
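
To illustrate the class of fix (a sketch only, not our actual patch), scrubbing future history from a task repo could look roughly like this, assuming a plain local clone checked out at the task's base commit:

```python
import subprocess

def scrub_future_history(repo_dir: str, base_commit: str) -> None:
    """Best-effort removal of everything newer than base_commit (sketch)."""
    def git(*args: str, check: bool = True) -> None:
        subprocess.run(["git", "-C", repo_dir, *args], check=check)

    # Detach HEAD at the task's base commit so no branch points past it.
    git("checkout", "--detach", base_commit)
    # Drop the remote so fetches can't resurrect future commits.
    git("remote", "remove", "origin", check=False)
    # Delete every branch, tag, and remote-tracking ref.
    refs = subprocess.check_output(
        ["git", "-C", repo_dir, "for-each-ref", "--format=%(refname)"],
        text=True,
    ).splitlines()
    for ref in refs:
        git("update-ref", "-d", ref)
    # Expire the reflog and prune the now-unreachable (future) objects.
    git("reflog", "expire", "--expire=now", "--all")
    git("gc", "--prune=now")
```
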
mustaphah · 1h ago
> You're all extremely clever and I can't seem to understand how you missed thinking about such a simple edge case [...]

I wouldn't be surprised if they left this loophole on purpose to give some (their?) agents extra leverage.

Edit #1: I didn't mean to imply bad intent; just thinking out loud.

Edit #2: Please, downvote responsibly. I deserve every one. https://www.youtube.com/watch?v=0FHEeG_uq5Y

gchamonlive · 39m ago
> I didn't mean to imply bad intent

> I wouldn't be surprised if they left this loophole on purpose

You didn't imply bad intent, you outright suggested it.

coldtea · 11m ago
He means he doesn't say it was necessarily bad intent, but mentions it as a possibility ("thinking out loud").

mustaphah · 26m ago
I could've phrased it better.

gchamonlive · 15m ago
You could rewrite it a thousand times; if the underlying idea is the same, suggesting something you don't know to be true, the outcome would be the same. Or did you mean something else? What was your intention with the message?

cjsaltlake · 1h ago
We absolutely did not.

coldtea · 11m ago
Of course that's what a team that did it on purpose would also say :)

franktankbank · 1h ago
#tiny

segmondy · 2h ago
Reward hacking is a thing and is also a hint of the model's intelligence. We will fix this one, and the models will find a different way to reward hack in the future. "Cheating" is a sign of intelligence.

bflesch · 1h ago
I love the "cheating is a sign of intelligence" sound bite you provided. When AI engineers cheat we should applaud their intelligence and their lack of ethics.

"Cheating (biology), a metaphor used in behavioral ecology to describe organisms that receive a benefit at the cost of other organisms" [1]

The whole planet gets its Microsoft license fees jacked up so Microsoft can pay OpenAI, who in turn pays NVIDIA, while nontechnical decision makers slurp up the faked benchmarks and AI promises.

[1] https://en.wikipedia.org/wiki/Cheating_(disambiguation)

giveita · 44m ago
Is it wrong? Aren't ethics and intelligence two different axes?

coldtea · 10m ago
Different, but probably not as orthogonal as one might think.

E.g. cooperative ethics has been necessary for the further development of human populations' intelligence (and the culture, technology, material wealth, nutrition, etc. that led to further increases in intelligence).

So lack of ethics might be a sign of intelligence, but it's a parasitic intelligence that benefits the individual; beyond a certain level and spread, it works to the detriment of the further evolutionary development of the species.

piskov · 4h ago
Not “may be”: just look at how SWE-bench scores drop to single digits once the tasks are in C#.

https://arxiv.org/html/2506.12286v3

fine_tune · 4h ago
I was going to argue "LLMs need code samples to do well on a language, and if we're honest, C# is a language mostly held in private repos," but GitHub's 2024 report [0] says it's the 5th most used language (I'm too lazy to check whether the report includes private repos, but I'll assume it doesn't).

So kinda neat to see this paper!

[0] https://github.blog/news-insights/octoverse/octoverse-2024/#...

CuriouslyC · 2h ago
The big labs are almost certainly using compiler/REPL output for generated code as an oracle for RL. I doubt they have C# in the mix.

tomjakubowski · 1h ago
Why do you doubt that? It's a widely used language. And there is even an open source C# REPL.

CuriouslyC · 55m ago
Because RL time is expensive, and I don't think C# is important enough to justify bumping the batches of more popular languages to make room for it.

yieldcrv · 3h ago
5th most used language based on private repos that the group making the report has exclusive direct access to.

I don't see that contradicting your assumption

BoorishBears · 2h ago
"In this year’s Octoverse report, we study how public and open source activity on GitHub..."
stefan_ · 4h ago
So the "Verified" part of "SWE Bench Verified" means.. not "Verified" at all.

I don't get it: who is so opposed to doing the bare minimum of manual work and checking what these models are doing? At least back in the day, grad students doing an easy meta-paper understood it meant doing some repetitive manual work. Now we've got benchmarks by hype vendors who think they can use the thing they are benchmarking to... mark the bench.

yorwba · 3h ago
The "Verified" part of "SWE-Bench Verified" means that there was plain "SWE-Bench" before it, which had actually not been verified at all and included a lot of tasks that didn't really make sense for use as a benchmark: https://openai.com/index/introducing-swe-bench-verified/#ada...

Data contamination stemming from the fact that it's based on already-solved problems in public repositories is a different issue that cannot be addressed by verifying the benchmark questions harder, but only by putting stricter limits on the model under test.

jsheard · 4h ago
> So the "Verified" part of "SWE Bench Verified" means.. not "Verified" at all.

Seems on-brand for an LLM-related thing to claim that it has verified something without actually checking.

geekymartian · 3h ago
That was my exact thought. How fitting.

sebzim4500 · 3h ago
The "Verified" refers to the fact that the benchmark problems were verified by human experts to be reasonable.

It says nothing about data contamination, which would depend on the model and would not be the fault of the benchmark.

blibble · 2h ago
> I don't get it, who is so opposed to doing the bare minimum of manual work and check what these models are doing?

I doubt any of the AI company employees are encouraged to go looking for cheating

teaearlgraycold · 3h ago
Personally I don't look at or respect LLM benchmarks at all. I've seen SOTA models fail in incredibly shocking ways even recently. Those moments immediately bring me out of the delusion that LLMs have thinking capacity or an understanding of code.

rockwotj · 4m ago
A friend is starting a company to do evals by just pitting model agents against each other in simulations. Their teaser video is good (and humorous!)

https://kradle.ai/

slacktivism123 · 3h ago
Fascinating case showing how LLM promoters will happily take "verified" benchmarks at their word.

It's easy to publish "$NEWMODEL received an X% bump in SWE-Bench Verified!!!!".

Proper research means interrogating the traces, like these researchers did (the Gist shows Claude 4 Sonnet): https://gist.github.com/jacobkahn/bd77c69d34040a9e9b10d56baa...

Commentary: https://x.com/bwasti/status/1963288443452051582, https://x.com/tmkadamcz/status/1963996138044096969

Workaccount2 · 2h ago
The best benchmark is the community vibe in the weeks following a release.

Claude benchmarks poorly but vibes well. Gemini benchmarks well and vibes well. Grok benchmarks well but vibes poorly.

(yes I know you are gushing with anecdotes, the vibes are simply the approximate color of gray born from the countless black and white remarks.)

wubrr · 1h ago
The vibes are just a collection of anecdotes.

k__ · 2h ago
Yes, often you see huge gains in some benchmark; then the model is run through Aider's polyglot benchmark and doesn't even hit 60%.

mustaphah · 4h ago
I speculate something similar (or even worse) is going on with Terminal-Bench [1].

Like, seriously, how come all these agents are beating Claude Code? In practice, they are shitty and not even close. Yes. I tried them.

[1] https://www.tbench.ai/leaderboard

Bolwin · 1h ago
They're all using Claude, so idk. Claude Code is just a program; the magic is mainly in the model.

cma · 1h ago
Claude Code was severely degraded the last few weeks; very simple terminal prompts that it never had problems with were failing for me.

giveita · 40m ago
Follow the money. Or how much comes from your pocket vs. VC and big tech speculators.

bryan0 · 35m ago
hah the model should get extra credit for discovering this!

> Now I understand the situation perfectly! The issue described in the problem statement is a real bug that was already identified and fixed in later versions of pytest. Since we're working with pytest 5.2.4, we need to apply the same fix.

https://gist.github.com/jacobkahn/bd77c69d34040a9e9b10d56baa...

Aperocky · 2h ago
Epochs ago, when random forests were still part of the machine learning nomenclature, we had a strong claim from an adjacent team, in the form of a PowerPoint circulated upwards, that they had achieved almost perfect prediction accuracy.

We relatively quickly identified that the testing set was taken directly from the training set, but the claim had already been advertised, so it was more difficult to retract... if it ever was. I left shortly after.

The incentives are not aligned with accurate reporting.

mbowcut2 · 3h ago
I'm not surprised. People really thought the models just kept getting better and better?

segmondy · 2h ago
The models are getting better and better.

giveita · 39m ago
That's expected. No one will release a worse model.

sodality2 · 15m ago
Not a cheaper one, or better in some ways, or lower latency, etc?

guerrilla · 3h ago
Maybe. How would I know?

jMyles · 2h ago
...even if the agent did "cheat", I think that having the capacity to figure out that it was being evaluated, find the repo containing the logic of that evaluation, and find the expected solution to the problem it faced... is "better" than anything that the models were able to do a couple years ago.

jasonjmcghee · 4h ago
Very interested to see the updated results. This could really shake up the leaderboard.

macawfish · 4h ago
I hope it does. These coding benchmarks have often seemed frustratingly out of touch with my experience.

3abiton · 3h ago
Because I would argue there is no benchmark to rule them all. It highly depends on individual use cases.

typpilol · 2h ago
The agentic ones seem better. TypeScript is at like 25% last I saw on the models; Python was higher.

That seems more accurate than the huge scores the other ones get

ripped_britches · 5m ago
Everyone on HN is like “yes I knew it! I was so right in 2021 that LLMs were just stochastic parrots!”

Strangely one of the most predictable groups of people

pseudosavant · 1h ago
If I was doing those tasks, and I found that someone had already fixed it in a future (from my git state) commit, I'd think I was being pretty smart to use that solution too.

Turns out the test shouldn't have the answers included in it?

zaptheimpaler · 3h ago
It's honestly ridiculous that they left git history lying around during a benchmark, that this benchmark made it to ICLR in Jan 2024, and that no one detected the issue until now. I don't really trust any benchmarking or tools or claims from this space when they can make such huge basic errors.

dolmen · 2h ago
Next the models will use a zero-day to escape the sandbox and access the answers.

Nijikokun · 2h ago
There was a lot of speculation about whether the models would use them, or even attempt to use them, and they noted this months ago. Now they have clear evidence of models doing so. Seems reasonable.

epolanski · 3h ago
This is beyond sad and shameful.

falcor84 · 27m ago
If you believe that you can develop a benchmark that wouldn't have any issues, please do so.

OtherShrezzing · 3h ago
That the answers have been available to them in the environment and they're still not hitting 100% on this benchmark is a damning indictment of SOTA model performance.

raincole · 2h ago
It really isn't. Do you expect SOTA models to answer any answered question on the internet with 100% accuracy? Congrats, you've just compressed the whole internet (at least a few zettabytes) into a model (a few TB at most?).

OtherShrezzing · 2h ago
The linked ticket isn't suggesting the commit is in the training data. It's demonstrating that models run `git log`, find the exact fix against which they'll be scored, and then implement that code as-is.

The test environment contains the answers to the questions.
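
You can check for this class of leak directly: list every commit reachable from some ref but not from the task's base commit. In a properly scrubbed repo this prints nothing; in a leaky one it includes the exact future fix. A sketch (the helper name is mine):

```python
import subprocess

def leaked_future_commits(repo_dir: str, base_commit: str) -> list[str]:
    """Commits reachable from any ref but not from base_commit: the 'future'."""
    out = subprocess.check_output(
        ["git", "-C", repo_dir, "log", "--all", "--oneline", "--not", base_commit],
        text=True,
    )
    return out.splitlines()
```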

aurareturn · 3h ago
Are you going to rail on humans for making this mistake in the first place?

themafia · 2h ago
No, because that's the baseline. It's what you do when you have no other choice. Railing against that would be pointless.

ares623 · 2h ago
I mean, if a human claimed they could do that, successfully received billions to attempt it, and failed to deliver, I'd be railing against that particular human too.

Traster · 3h ago
Man, I feel so dumb. Why haven't I been doing this in my job? If I could just see the commit that fixed my issue, this would all be so easy.

Noumenon72 · 3h ago
Someone did comment that it's actually smart to check if something is fixed on the unstable branch, or I suppose in your coworkers' branches. A good task for an LLM.

falcor84 · 19m ago
Oh, you haven't been using `git fetch-future-solution`?

belter · 3h ago
Meanwhile, Oracle stock went up 40% in one day, based on what Wall Street thinks AI might be... in 4 years... Not a bubble at all...

candiddevmike · 3h ago
I think Oracle's stock mostly popped due to a delayed reaction to the US GSA contract it secured in July and the revenue guidance probably related to it:

https://www.oracle.com/news/announcement/blog/oracle-cloud-c...

belter · 2h ago
Lol... That contract has Oracle offering licenses at a discount of 75% and is estimated to make them no more than $1 billion. The other big cloud-services contract, the DoD JWCC, is $8B to $9B but shared by four vendors (AWS, Microsoft, Google, Oracle), and Oracle orders under it are in the hundreds of millions, not even $1 billion...

Wall Street is currently heavily punishing any company that misses its quarter; it even punished NVIDIA! after it beat its quarter.

Oracle had an earnings miss in the current quarter!

Their current REALITY is ~$15B quarterly revenue (with cloud infra ~$3B) and only ~$12B in near-term deferred backlog, and deferred backlog is NOT revenue. To justify the valuation, this would imply OCI going from ~$18B in FY26 to ~$140B by FY30; that is an insane promise of +$120B in 4 years, back-loaded into year 3 or year 4. :-))

Capex needs ~$35B next year just to chase GPUs/power, and if they miss one quarter the story implodes. The supposedly rational, efficient market is paying near $1T today for back-loaded hopes.

It's completely bubble math. As if anybody, including Oracle AND their customers, has ANY idea of their capex in 4 years.

Complete and total bubble.

Zacharias030 · 32m ago
Thanks for that! Where can I find your writing?

ksherlock · 3h ago
The real bubble will come once interest rates start dropping.

jgalt212 · 2h ago
Baseball players cheat for tens of millions. The stakes are 2-4 orders of magnitude higher here. I'm not surprised in the least.

jMyles · 2h ago
Regardless of whether, during this particular evaluation, Claude 4 Sonnet looked at the solution to this particular problem in this particular git repo, this seems like a long-term intractable problem.

How can we ever perform this sort of faux-neutral agentic evaluation in an environment where we want agents to have access to the sum total of knowledge (which will necessarily include being able to learn about the evaluation being conducted and its expectations)?