I stopped reading at the "it doesn't just X, it Y", which reading LLM content has made me allergic to (regardless of how it was actually written).
The point is good though: OpenAI and Anthropic have a huge lead over other models, and benchmarks have meant nothing since almost the beginning (see this from 2023: https://x.com/karpathy/status/1737544497016578453 ).
The reason isn't better pretraining though, it's an insane amount of data labeling for the fine-tuning.