Your LLM provider will go down, but you don't have to

23 points by johnjwang | 3 comments | 6/13/2025, 3:49:00 PM | assembled.com

Comments (3)

attaboy · 20h ago
With apologies to Randall Munroe, it sometimes feels like LLM providers are the new "project from some random person in Nebraska" https://imgur.com/a/qjAinj2
ceebzilla · 21h ago
This is interesting. The core models are clearly doing well as standalone businesses and have started to establish lock-in with end consumers (I've invested enough time tailoring GPT to me that I'm wary to switch to Claude or Gemini now). But for a business leveraging these models, yeah, I think they're all fairly commoditized, so why wouldn't you swap them out willy-nilly based on whichever performs best?
johnjwang · 20h ago
From the API standpoint, it makes a lot of sense for us to support multiple providers. We've also found that different models/providers are better at different types of tasks. For example, the Gemini models have really great latency, which is good for tasks that are very latency sensitive, but we've found reasoning to be quite strong with OpenAI/Anthropic.
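
A minimal sketch of that kind of multi-provider routing with failover, assuming each SDK client (OpenAI, Anthropic, Gemini) has already been wrapped to a common prompt-to-text interface; the `Provider` and `complete_with_failover` names here are hypothetical, not from the article:

```python
import time
from dataclasses import dataclass
from typing import Callable

# Hypothetical wrapper: each real SDK client (OpenAI, Anthropic,
# Gemini) would be adapted to this common prompt -> text interface.
@dataclass
class Provider:
    name: str
    call: Callable[[str], str]

# Preference order per task type, mirroring the comment above:
# latency-sensitive work goes to the fastest provider first,
# reasoning-heavy work to the strongest reasoners first. The tail
# of each list doubles as the fallback order during an outage.
ROUTES: dict[str, list[str]] = {
    "latency_sensitive": ["gemini", "openai", "anthropic"],
    "reasoning": ["openai", "anthropic", "gemini"],
}

def complete_with_failover(
    providers: dict[str, Provider],
    task_type: str,
    prompt: str,
    retries_per_provider: int = 2,
    backoff_s: float = 0.5,
) -> str:
    """Try providers in preference order; fall through on errors."""
    last_error: Exception | None = None
    for name in ROUTES[task_type]:
        provider = providers[name]
        for attempt in range(retries_per_provider):
            try:
                return provider.call(prompt)
            except Exception as exc:  # outage, rate limit, timeout
                last_error = exc
                # Exponential backoff before retrying this provider.
                time.sleep(backoff_s * (2 ** attempt))
    raise RuntimeError("All providers failed") from last_error
```

The useful property of this shape is that the per-task preference list and the failover order are the same data structure, so "which model is best at this" and "who do we call when our provider is down" stay in one place.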