I expected exponential growth, but it seems like growth has stalled in the last few weeks.
1. Is LLM usage really slowing down?
2. Are devs using some other router than OpenRouter?
3. Are devs using APIs directly with LLM providers or with big tech?
NitpickLawyer · 3h ago
> 3. Are devs using APIs directly with LLM providers or with big tech?
Right now it is cheaper to use one of the all-you-can-prompt packages from the providers themselves. Either they are still massively subsidising token costs, or they're overcharging for per-token API usage. Either way, for a dev, a fixed cost, even with limits, often works out better. Running Claude Code with Claude 4 can easily cost $20 a session, and multiples of that if you're multitasking with lots of parallel instances running on multiple tasks.
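A rough back-of-envelope comparison makes the point. The ~$20/session figure comes from the comment above; the session count and flat-plan price below are assumed placeholders, not published prices:

    # Rough comparison: pay-per-token API usage vs. a flat subscription.
    # All numbers are hypothetical except the ~$20/session cited above.

    sessions_per_month = 40        # assumed: ~2 coding sessions per workday
    api_cost_per_session = 20.0    # from the comment: ~$20 per Claude Code session
    flat_plan_price = 200.0        # assumed flat "all you can prompt" plan price

    api_monthly = sessions_per_month * api_cost_per_session
    print(f"API, pay per token: ${api_monthly:.2f}/month")
    print(f"Flat subscription:  ${flat_plan_price:.2f}/month")

    # Break-even: number of $20 sessions per month before the flat plan wins.
    break_even = flat_plan_price / api_cost_per_session
    print(f"Flat plan pays off after ~{break_even:.0f} sessions/month")

Under these assumptions, anyone running more than a handful of heavy sessions a month comes out ahead on the flat plan, which is why usage can shift away from per-token routers without overall LLM usage actually dropping.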
jaggs · 2h ago
Pricing. If you're coding with the best (the Claude family), it is now starting to get seriously expensive, as someone else said here. This reduction in use is likely to accelerate as more open-source local-model alternatives like Qwen Coder arrive. Evolution, baby.