OLMo 2 - a family of fully-open language models

31 points by oldfuture on 7/9/2025, 12:58:42 AM · allenai.org ↗

Comments (3)

tripplyons · 6h ago
Nice to see how open it is! However, if you are just looking for the best model, Mistral Small 3.2 appears to be stronger than OLMo 2 32B with fewer parameters. It would be interesting to see how close these "fully open" models can come to their "open weight" counterparts.
real0mar · 5h ago
The inconvenient truth might be that the other models score higher than OLMo because they aren't restricted to purely "open and accessible" training data. Who knows what private or ethically dubious data went into training Mistral or Llama, for example.
erlend_sh · 2h ago
Exactly. If we really wanted to benchmark the various models on the merits of their individual implementations, we would need to train and compare them all on the same open dataset.
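
Even short of retraining everything on a shared corpus, the evaluation side can at least be held constant. A minimal sketch using EleutherAI's lm-evaluation-harness, running both models under identical tasks and few-shot settings (the Hugging Face model IDs below are my assumptions; verify them on the hub):

    # Controlled comparison: one harness, same tasks, same few-shot setup.
    # Requires `pip install lm-eval`; model IDs are assumptions, check the HF hub.
    import lm_eval

    MODELS = [
        "allenai/OLMo-2-0325-32B",                        # fully open: weights + data + code
        "mistralai/Mistral-Small-3.2-24B-Instruct-2506",  # open weights only
    ]

    for model_id in MODELS:
        results = lm_eval.simple_evaluate(
            model="hf",
            model_args=f"pretrained={model_id}",
            tasks=["mmlu", "hellaswag"],
            num_fewshot=5,
            batch_size="auto",
        )
        print(model_id, results["results"])

That only controls the evaluation conditions, of course; the training-data confound real0mar raises would still remain.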