Show HN: Randomly switching between LMs at every step boosts SWE-bench score

lieret · 8/20/2025, 3:09:32 PM · swebench.com ↗
What if your agent uses a different LM at every turn? We let mini-SWE-agent randomly switch between GPT-5 and Sonnet 4 and it scored higher on SWE-bench than with either model separately.

GPT-5 by itself gets 65.0%, Sonnet 4 gets 64.8%, but randomly switching at every step gets us 67.2%.
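For illustration, here is a minimal sketch of what per-step random switching could look like. This is not the actual mini-SWE-agent implementation; the `query_model` helper, model names, and loop structure are assumptions made for the example.

```python
import random

# Hypothetical sketch: at every agent turn, pick one of the two models
# uniformly at random and send it the full conversation so far.
MODELS = ["gpt-5", "claude-sonnet-4"]


def query_model(model: str, messages: list[dict]) -> str:
    """Placeholder for an LM API call (assumed, not a real API here)."""
    raise NotImplementedError


def run_agent(task: str, max_steps: int = 50) -> list[dict]:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        model = random.choice(MODELS)          # switch models at every step
        reply = query_model(model, messages)   # both models share one history
        messages.append({"role": "assistant", "content": reply})
        if "submit" in reply.lower():          # toy stopping condition
            break
        # ... execute the proposed shell command and append its output ...
        messages.append({"role": "user", "content": "<command output>"})
    return messages
```

The key point the sketch captures is that both models read and write the same shared conversation history, so each step may be taken by a different model than the previous one.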

This result was pretty surprising to us. There are a few more experiments in the blog post.

Comments (1)

NitpickLawyer · 1h ago
This is really cool! And it's even cooler that it's tested on their mini agent harness (which only has access to a "terminal", no other tools), because this implies it's "raw model power" rather than "software glue".

My speculation: this is an "emergent" capability out of good / scalable / "solved" RL. Both Anthropic and OpenAI seem to have made huge advances in RL. (xAI as well, but they haven't released their coding model yet, so we'll see if that continues.) In contrast to other RL'd models out there (e.g. the DeepSeeks, the Qwens, etc.) that score really well on tasks similar to those in benchmarks, both Claude 4 and GPT-5 seem to have "learned" what agentic means at a different level. They can be guided through tasks, asked to do one particular subpart of a task, or to take a particular approach, etc. And they do it well. The other implementations feel "stubborn". Can't explain it better.

It will be interesting to see what Gemini 3 will bring. Google / DeepMind are experts at RL, and Gemini 2.5 is a bit old now, so I'm curious to see what they can deliver on this front. My guess is that we'll see the same kind of "it gets it" after scaled RL.

One note I've made after using GPT-5 for a bit is that it seems to have a case of "get-there-itis" when solving tasks. It wants to solve them so badly that it sometimes forgets the plan, or rushes through step 5 after solving 1-4 pretty thoroughly. Might be prompting as well; maybe the prompts haven't caught up yet.