> One user, who asked not to be identified, said it has been impossible to advance his project since the usage limits came into effect.
Vibe limit reached. Gotta start doing some thinking.
dude250711 · 10s ago
He did not pass the vibe check.
Ataraxic · 1m ago
I need to see a video of what people are doing to hit the max limits regularly.
I find Sonnet really useful for coding, but I never even hit the basic limits at $20/mo: writing specs, coming up with documentation, doing rote tasks for which many examples exist in the training data, iterating on particular services, etc.
Are these Max users having it write the whole codebase, with rewrites? Isn't it often just faster to fix the small things I find incorrect than to type up why I think it's wrong in English and have it do a whole big round trip?
buremba · 1h ago
They're likely burning money, so I can't be pissed off yet, but we see the same with Cursor as well; the pricing is not transparent.
I'm paying for Max, and when I use tooling to calculate the spend from what the API reports, it comes to almost $1k! I have no idea how much quota I have left until the next block. The pricing returned by the API doesn't make any sense.
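For reference, one way to do this kind of estimate yourself is to sum the token counts Claude Code logs locally and price them at published API rates. A minimal sketch, assuming per-session JSONL transcripts under ~/.claude/projects with message.usage fields; the prices are placeholders to replace with Anthropic's current rates:

    # Rough API-equivalent spend estimate from Claude Code's local JSONL transcripts.
    # Assumptions: transcripts live under ~/.claude/projects/**/*.jsonl and lines
    # may carry message.usage.{input_tokens,output_tokens}. Prices are placeholders.
    import json
    from pathlib import Path

    PRICE_PER_MTOK = {"input": 3.00, "output": 15.00}  # assumed Sonnet-class USD rates

    def estimate_spend(root: Path = Path.home() / ".claude" / "projects") -> float:
        total_in = total_out = 0
        for path in root.rglob("*.jsonl"):
            for line in path.read_text(errors="ignore").splitlines():
                try:
                    obj = json.loads(line)
                except json.JSONDecodeError:
                    continue
                msg = obj.get("message") if isinstance(obj, dict) else None
                usage = msg.get("usage") if isinstance(msg, dict) else None
                if isinstance(usage, dict):
                    total_in += usage.get("input_tokens", 0)
                    total_out += usage.get("output_tokens", 0)
        return (total_in * PRICE_PER_MTOK["input"] + total_out * PRICE_PER_MTOK["output"]) / 1e6

    print(f"Estimated API-equivalent spend: ${estimate_spend():,.2f}")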
dfsegoat · 6m ago
Can you clarify which tooling you are using? Is it cursor-stats?
roxolotl · 49m ago
A coworker of mine claimed they've been burning $1k a week this month. Pretty wild it’s only costing the company $200 a month.
gerdesj · 36m ago
Crikey. Now I get the business model:
I hire someone for, say, £5K/mo. They then spend $200/mo (or is it $1000/wk?) on Claude or whatevs.
Profit!
Aurornis · 1h ago
I played with Claude Code using the basic $20/month plan for a toy side project.
I couldn't believe how many requests I could get in. I wasn't using this full-time for an entire workweek, but I thought for sure I'd be running into the $20/month limits quickly. Yet I never did.
To be fair, I spent a lot of time cleaning up after the AI and manually coding things it couldn't figure out. It still seemed like an incredible number of tokens were being processed. I don't have concrete numbers, but it felt like I was easily getting $10-20 worth of tokens (compared to raw API prices) out of it every single day.
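For a rough sense of scale (assuming Sonnet-class API pricing on the order of $3 per million input tokens and $15 per million output tokens; treat these as placeholders):

    # Back-of-envelope: output tokens implied by $10-20/day at an assumed $15/M output rate
    price_out_per_mtok = 15.00  # USD per million output tokens (assumption)
    for daily_spend in (10, 20):
        print(f"${daily_spend}/day ~ {daily_spend / price_out_per_mtok:.2f}M output tokens")

So $10-20 a day of pure output would be roughly 0.7-1.3 million output tokens, and considerably more total volume once large input contexts are counted.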
My guess is that they left the limits extremely generous for a while to promote adoption, and now they're tightening them up because it’s starting to overwhelm their capacity.
I can't imagine how much vibe coding you'd have to be doing to hit the limits on the $200/month plan like this article, though.
dawnerd · 1h ago
I hit the limits within an hour with just one request in CC. Not even using opus. It’ll chug away but eventually switch to the nearing limit message. It’s really quite ridiculous and not a good way to upsell to the higher plans without definitive usage numbers.
eddythompson80 · 24m ago
Worth noting that a lot of these limits are changing very rapidly (weekly if not daily) and also depend on time of day, location, account age, etc.
cladopa · 59m ago
Thinking is extremely inefficient compared with the usual query in Chat.
If you think a lot, you can spend hundreds of dollars easily.
jmartrican · 1h ago
I have the $100 plan and now quickly get downgraded to Sonnet, but so far I have not hit any other limits. I use it more on the weekends over several hours, so let's see what this weekend has in store.
I suspected that something like this might happen, where demand outstrips supply and squeezes small players out. I still think demand is in its infancy and that many of us will be forced to pay a lot more, unless of course there are breakthroughs. At work I recently switched to non-reasoning models because I find I get more work done and the quality is good enough; the queue to use Sonnet 3.7 and 4.0 is too long. Maybe the tools will improve to reduce token count, e.g. with a token-reducing step (maybe this already exists).
jasonthorsness · 9m ago
Is it really worth it to use opus vs. sonnet? sonnet is pretty good on its own.
sneilan1 · 20m ago
So far I've had 3-4 Claude Code instances constantly working 8-12 hours a day, every day. I use it like a stick shift, though: when I need a big plan doc I switch to the recommended model (between Opus and Sonnet), and for coding I use Sonnet. Sometimes I hit the Opus limit, but then I simply switch to Sonnet for the day and watch it more closely.
mpeg · 8m ago
Honest question: what do you do with them? I would be so fascinated to see a video of this kind of workflow. I feel like I use LLMs as much as I can while still being productive (because the code they generate has a lot of slop), and I still barely use the agentic CLIs: mostly just tab completion through Windsurf, plus Claude for specific questions, steering the context by manually pasting the relevant stuff.
sneilan1 · 2m ago
I focus more on reading code and prompting Claude to write code for me at a high level. I also experiment a lot. I don't write code by hand anymore except in very rare cases. I ask Claude questions about the code to build understanding, and I have it produce documentation, which is then fed into other prompts. Often Claude Code will need several minutes on a task, so I start another task. My day-to-day coding throughput is now the equivalent of about 2-3 people.
khurs · 43m ago
All you people who were happy to pay $100 and $200 a month have ruined it for the rest of us!!
rob · 34m ago
I don't think CLI/terminal-based approaches are going to win out in the long run compared to visual IDEs like Cursor, but I think Anthropic has something good with Claude Code and I've been loving it lately (after using only Cursor for a while). I wouldn't be surprised if they end up purchasing Cursor after squeezing them out via pricing, then merging Cursor and Claude Code so you have the best of both worlds under one name.
blibble · 1h ago
the day of COGS reckoning for the "AI" industry is approaching fast
ladon86 · 27m ago
I think it was just an outage that unfortunately returned 429 errors instead of something else.
jablongo · 1h ago
I'd like to hear about the tools and use cases that lead people to hit these limits. How many sub-agents are they spawning? How are they monitoring them?
rancar2 · 1h ago
There was a batch mode pulled from the documentation after the first few days of the Claude Code release. Many of us have been trying to be respectful with a stable five-agent cap, but some people have pushed those limits much higher, since it wasn't technically being throttled until last week.
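A minimal sketch of the kind of self-imposed five-agent cap being described, assuming only the claude CLI's non-interactive print mode (claude -p "...") and nothing about the pulled batch mode; the prompts are placeholders:

    # Keep at most 5 headless Claude Code runs in flight at once.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    MAX_AGENTS = 5  # the self-imposed cap discussed above

    def run_agent(prompt: str) -> str:
        # `claude -p` runs a single non-interactive task and prints the result.
        result = subprocess.run(["claude", "-p", prompt],
                                capture_output=True, text=True, check=False)
        return result.stdout

    prompts = [f"Summarize the {name} module" for name in ("auth", "billing", "search")]
    with ThreadPoolExecutor(max_workers=MAX_AGENTS) as pool:
        for output in pool.map(run_agent, prompts):
            print(output[:200])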
WJW · 15m ago
Tragedy of the commons strikes again...
TrueDuality · 53m ago
One, with only manual interactions and regular context resets. I have a couple of commands I'll use regularly that have 200-500 words in them, but it's almost exclusively me riding that console raw.
I'm only on the $100 Max plan and stick to the Sonnet model, and I'll run into the hard usage limits after about three hours; that's been down to about two hours recently. The resets are about every four hours.
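Reusable multi-hundred-word commands like that can live as Markdown prompt files. A minimal sketch of a hypothetical .claude/commands/review.md, which Claude Code would expose as /review, with $ARGUMENTS standing in for anything typed after the command (the contents are invented for illustration):

    Review the changes on the current branch against main.
    Focus on error handling, naming, and test coverage.
    Summarize findings as a bulleted list before proposing any edits.
    Extra focus areas: $ARGUMENTS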
Capricorn2481 · 1h ago
I'm not on the Pro plan, but on $20/mo I asked Claude some 20 questions on architecture yesterday and hit my limit.
This is going to happen with every AI service. They are all burning cash and need to dumb it down somehow, whether that's running worse models or rate limiting.
micromacrofoot · 1h ago
I've seen prompts telling it to spawn an agent to review every change it makes... and they're not monitoring anything
globular-toast · 43m ago
This is what really makes me sceptical of these tools. I've tried Claude Code and it does save some time even if I find the process boring and unappealing. But as much as I hate typing, my keyboard is mine and isn't just going to disappear one day, have its price hiked or refuse to work after 1000 lines. I would hate to get used to these tools then find I don't have them any more. I'm all for cutting down on typing but I'll wait until I can run things entirely locally.
bigiain · 7m ago
> my keyboard is mine and isn't just going to disappear one day, have its price hiked or refuse to work after 1000 lines.
I dunno, from my company or boss's perspective, there are definitely days where I've seriously considered just disappearing, demanding a raise, or refusing to work after the 3rd meeting or 17th Jira ticket. And I've seen cow orkers and friends do all three of those over my career.
(Perhaps LLMs are closer to replacing human developers than anyone has realized yet?)
apwell23 · 1h ago
oh yea looks like everyone and their grandma is hitting claude code
https://github.com/anthropics/claude-code/issues/3572
Inside info is they are using their servers to prioritize training for Sonnet 4.5 to launch at the same time as xAI's dedicated coding model. xAI's coding logic is very close to Sonnet 4 and has Anthropic scrambling. xAI sucks at making designs but codes really well.
Claude is absolute trash. I am on the paid plan and repeatedly hit the limits, and their support is essentially non-existent, even for paid accounts.
iwontberude · 1h ago
Claude Code is not worth the time sink for anyone that already knows what they are doing. It's not that hard to write boilerplate, and standard LLM auto-predict got you 95% of the way to Claude Code, Continue, Aider, Cursor, etc. without the extra headaches. The hangover from all this wasted investment is going to be so painful.
serf · 1h ago
>Claude Code is not worth the time sink
there are only ~15 total pages of documentation.
There are two folders, one for the home directory and one for the project root. You put a CLAUDE.md file in either folder, which essentially acts like a pre-prompt. There are about 5 'magic phrases' like "think hard", 'make a todo', 'research..', and 'use agents' -- or any similar set of phrases that trigger that route.
Every command can be run in the 'REPL' environment for instant feedback, it can itself teach you how to use the product, and /help will list every command.
The hooks document is a bit incomplete last I checked, but it's a fairly straightforward system, too.
That's about it -- now explain vi/vim/emacs/pycharm/vscode in a few sentences for me. The 'time sink' is maybe 4 hours for someone who isn't also learning how to use the computer environment itself.
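For the curious, a minimal sketch of what a project-root CLAUDE.md might look like (the contents and conventions below are invented for illustration, not taken from Anthropic's docs):

    # CLAUDE.md (project root) -- read at session start, acts like a pre-prompt

    ## Project
    - Python 3.12 monorepo; services under services/, shared code under lib/

    ## Conventions
    - Run `pytest -q` before declaring a task done
    - Prefer small, reviewable diffs; never touch migrations without asking

    ## Workflow hints
    - For anything non-trivial: think hard, make a todo, then implement step by step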
freedomben · 2m ago
Yeah, Claude Code was by far the quickest/easiest for me to get set up. The longest part was just getting my API key
Sevii · 1h ago
I've spent far too much of my life writing boilerplate and API integrations. Let Claude do it.
axpy906 · 57m ago
I agree. It's a lot faster to tell it what I want and work on something else in the meantime. You end up reading code diffs more than writing code, but it saves time.
Implicated · 46m ago
Comments like this remind me that there's a whole host of people out there who have _no idea_ what these tools are capable of doing for one's productivity or skill set in general.
> It's not that hard to write boilerplate and standard llm auto-predict was 95% of the way to Claude Code, Continue, Aider, Cursor, etc without the extra headaches.
Uh, no. To start: yeah, boilerplate is easy. But as a sibling comment said, it's also tedious and annoying; let the LLM do it. Beyond that, if you apply some curiosity and that "anyone that already knows what they are doing" level of prior knowledge, you can use these tools to _learn_ a great deal.
You might think your way of doing things is perfect and the only way to do them, but I'm more of the mindset that there are a lot of ways to skin most of these cats. I'm always open to better ways to do things: patterns or approaches I know nothing about that might just be _perfect_ for what I'm trying to do. And given that I do, in general, know what I'm asking it to do, I'm able to judge whether its approach is any good. Sometimes it's not, no big deal. Sometimes it opens my mind to something I wasn't aware of, or didn't understand, or didn't know would apply to the given scenario. Sometimes it leads me into rabbit holes of "omg, that means I could do this ... over there" and it turns into a whole-ass refactor.
Claude Code has broadened my capabilities, professionally, tremendously. The way it makes "try it out and see how it works" cheap, across multiple approaches/libraries/databases/patterns/languages, has many times led me to learning something new. Honestly: priceless.
I can see how these tools would scare the 9-5, sit-in-the-office-and-bang-out-boilerplate crowd, or those building things that have never been done before (though even then there are caveats, IMO, to how effective they would or could be). But for people writing software or building things (software or otherwise) because they enjoy it, or because their financial or professional lives depend on what they're building: it's absolutely astonishing to me that anyone isn't embracing these tools with open arms.
With all that said, I keep MCP servers limited to only what I need in a given session, and if I need an MCP server on an ongoing basis I'm generally better off building a tool or custom documentation around that thing. And I don't know about all that agent stuff; I got lucky and held out for Claude Code, dabbled a bit with others, and they're leagues behind. If I need an agent I'ma just tap on CC, for now.
Context and the ability to express what you want in a way that a human would understand is all you need. If you screw either of those up, you're gonna have a bad time.