Top model scores may be skewed by Git history leaks in SWE-bench

68 mustaphah 13 9/11/2025, 6:32:23 PM github.com ↗

Comments (13)

piskov · 37m ago
Not “may be”: just look how swe-bench scores drop to single digits once it in C#

https://arxiv.org/html/2506.12286v3

fine_tune · 26m ago
I was going to argue "LLM's need code samples to-do well on languages and if we are honest C# is a language mostly held in private repo's" but Github's 2024 report[0] says its the 5th most used language (I'm to lazy to check if this report includes private repo's but I'll assume it doesn't).

So kinda neat to see this paper!

[0]https://github.blog/news-insights/octoverse/octoverse-2024/#...

stefan_ · 27m ago
So the "Verified" part of "SWE Bench Verified" means.. not "Verified" at all.

I don't get it, who is so opposed to doing the bare minimum of manual work and check what these models are doing? At least back in the day grad students doing an easy meta-paper understood it meant doing some repetitive manual work. Now we got benchmarks by hype vendors who think they can use the thing they are benchmarking to .. mark the bench.

jsheard · 20m ago
> So the "Verified" part of "SWE Bench Verified" means.. not "Verified" at all.

Seems on-brand for an LLM-related thing to claim that it has verified something without actually checking.

teaearlgraycold · 14m ago
Personally I don't look at or respect LLM benchmarks at all. I've seen SOTA models fail in incredibly shocking ways even recently. Those moments immediately bring me out of the delusion that LLMs have thinking capacity or an understanding of code.
mustaphah · 24m ago
I speculate something similar (or even worse) is going on with Terminal-Bench [1].

Like, seriously, how come all these agents are beating Claude Code? In practice, they are shitty and not even close. Yes. I tried them.

[1] https://www.tbench.ai/leaderboard

slacktivism123 · 10m ago
Fascinating case showing how LLM promoters will happily take "verified" benchmarks at their word.

It's easy to publish "$NEWMODEL received an X% bump in SWE-Bench Verified!!!!". Proper research means interrogating the traces, like these researchers did: https://gist.github.com/jacobkahn/bd77c69d34040a9e9b10d56baa...

Commentary: https://x.com/bwasti/status/1963288443452051582, https://x.com/tmkadamcz/status/1963996138044096969

mbowcut2 · 4m ago
I'm not surprised. People really thought the models just kept getting better and better?
jasonjmcghee · 35m ago
Very interested to see the updated results. This could really shake up the leaderboard.
macawfish · 30m ago
I hope it does. These coding benchmarks have often seemed frustratingly out of touch with my experience.
zaptheimpaler · 14m ago
It's honestly ridiculous they left git history lying around during a benchmark, and this benchmark made to ICLR in Jan 2024 and no one has detected this issue until now. I don't really trust any benchmarking or tools or claims from this space when they can make such huge basic errors.
belter · 11m ago
In the meawhile, Oracle stock went up 40% in one one day, based on what Wall Street thinks AI might be...in 4 years...Not a bubble at all...
Traster · 5m ago
Man I feel so dumb. Why haven't I been doing this in my job, if I could just see the commit that fixed my issue this would all be so easy.