I'm trying to cancel my Claude plan right now, because mid-research I got hit with a one-hour timeout for the first time, and I realized just how little I need the "expensive" research stuff. I just want basic language refinement and some basic augmented knowledge search to get more terms to search the web. And all that basic stuff gets cut off when you hit some invisible barrier that you have no way of controlling or monitoring.
But I literally cannot cancel. Trying the app, it says "you signed up on a different platform, go there" but it doesn't tell me which platform that might be.
Trying to cancel on mobile web gives several upgrade options but no cancel options.
So, do I need to call my credit card company? This is the worst subscription dark pattern I have seen from any service I have ever paid for!
Anthropic had a fairly positive image in my head until they cut off my access and are not giving me a way to cancel my plan.
Edit: after mucking with the Stripe credit card payment options I found a cancel plan button underneath the list of all invoices. So there is an option, I just had a harder time finding it than I have had with other services. Successfully cancelled!
energy123 · 1d ago
Google and OpenAI have done similar things with their LLM offerings.
Gemini Advanced offered 2.5 Pro with nearly unlimited rate limits, then nerfed it to 100/day.
OpenAI silently nerfed the maximum context window of reasoning models in their Pro plan.
Accompanying the nerf is usually a psy op, like nerfing to 50/day then increasing it to 100/day so the anchoring effect reduces the grievance.
It's a smart ploy because as much as we like to say there's no moat, the user does face provider switching costs (time and effort), which serves as a mini-moat for status quo provider.
So providers have an incentive to rope people in with a loss leader, and then rug pull once they've gained market share. Maybe 40% of the top 5% of Claude users are now too accustomed to their Claude-based workflows, and inertia will keep them as customers, but now they're using the more expensive API instead. Anthropic won.
Modern bait and switch, although done intelligently so no laws are broken.
epistasis · 1d ago
I had been loyal, but am not any longer, so the ploy definitely did not work with me. I guess I'll move on to Gemini now, until I get sick of it.
To the degree there is a moat, I do not think it will be effective at keeping people in. I had already been somewhat disillusioned with the AI hype, but now I am also disillusioned with the company who I thought was the best actor in the space. I am happy that there is unlikely to be a dominant single winner like there was for web search or for operating systems. That is, unless there's a significant technological jump, rather than the same gradual improvement that all the AI companies are making.
__MatrixMan__ · 1d ago
Loyalty is for henchmen.
epistasis · 1d ago
I am loyal to my family and friends, business partners, and to those who I believe are fighting the good fight, as long as they continue the good fight. I do not consider myself a henchman to any of them.
__MatrixMan__ · 15h ago
Loyalty is a conflict resolution tool. But when my relationships are such that nobody is pursuing contradictory goals to begin with, I find that it hardly comes up at all.
On the rare occasion that it does, I try to circle back and mitigate the root cause so that I can resume a loyalty-free life thereafter.
scraptor · 11h ago
Interesting to see the definition of moat change from keeping other companies out to keeping your customers in.
tom_m · 14h ago
You probably won't get sick of Gemini. Your wallet may need to get used to an adjustment though.
Bluestein · 1d ago
> I had already been somewhat disillusioned with the AI hype, but now I am also disillusioned with the company
Likewise: a faulty, unproven, hallucinating, error-prone service, however good, was a good value at approx 25 USD/month in an "absolutely all you can eat", wholesale regime ...
... now? Reputational risk aside, they force their users to appraise their offering in terms of actual value offered, in the market.-
jwatte · 23h ago
> they force their users to appraise their offering in terms of actual value offered
That's a good thing, right?
Bluestein · 23h ago
Very much, absolutely, most resolutely and definitely, yes.-
fleebee · 1d ago
I don't even want to imagine how bad it will get if a legitimate moat does surface and users get entrenched ever deeper into the ecosystem of a single provider. A lot of the companies in this space have a track record of squeezing as much value out of their customers as they can get away with.
londons_explore · 1d ago
> the user does face provider switching costs (time and effort), which serves as a mini-moat for status quo provider.
When a provider gets memory working well, I expect them to use this to be a huge moat - ie. they won't let you migrate the memories, because rather than being human readable words they'll be unintelligible vectors.
I imagine they'll do the same via API so that the network has a memory of all previous requests for the same user.
ileonichwiesz · 1d ago
Is memory all that useful for using these LLMs? I’ve found that I mostly use them for discrete tasks - helping me learn a specific thing, build a specific project, debug a specific piece of code, and once it’s done I’d actually prefer it to forget that thing instead of keeping it around forever.
Hell, “just open a new chat and start over” is an important tool in the toolbox when using these models. I can’t imagine a more frustrating experience than opening a new chat to try something from scratch only for it to reply based on the previous prompt that I messed up.
andyferris · 1d ago
Unintelligible vectors might be hard to transfer from one of their older models to one of their newer models - so I think the human readable words will remain a bit of a narrow waist in this space for the immediate future at least.
londons_explore · 1d ago
Gemini 2.5 Pro already uses encrypted 'thoughts' in the context window which are not visible to the user. They might be English words, or might be some other internal state vector.
tom_m · 14h ago
Google tried to offer a free tier for students and people to TRY it. That's a LOT different because they were up front about limits.
You pay for Gemini by the token and you get the full firehose. It costs money, but less than Opus and it smokes that.
It just works. Gemini 2.5 Pro is the king of AI coding and literally everything else has to catch up.
Trust me, I can't wait until there's a model that can run locally that's as good...but for now there isn't.
Always just look at the token cost and get used to the token economics. Go into it paying. You'll get better results. I think people thinking they were somehow cheating and getting away with something similar (or better) for $20/mo are in for a big surprise.
I don't know if I would say they should have known better of course. I think Anthropic and Cursor and Windsurf were hiding it a bit. Now it's all coming out into the open and I guess you know the saying, if it's too good to be true...
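For anyone who hasn't done that exercise, the arithmetic is a one-liner; the per-token prices below are placeholders rather than any provider's real rate card, so substitute current pricing:

    # Rough monthly API cost model. The per-million-token prices are
    # made-up placeholders; plug in the provider's current rate card.
    INPUT_PRICE_PER_M = 1.25    # USD per 1M input tokens (placeholder)
    OUTPUT_PRICE_PER_M = 10.00  # USD per 1M output tokens (placeholder)

    def monthly_cost(input_tokens: int, output_tokens: int) -> float:
        return (input_tokens / 1e6) * INPUT_PRICE_PER_M \
             + (output_tokens / 1e6) * OUTPUT_PRICE_PER_M

    # e.g. 30M input + 3M output tokens in a month:
    print(monthly_cost(30_000_000, 3_000_000))  # 67.5 with these placeholder prices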
ibaikov · 1d ago
I wonder if that's actually illegal, because it feels very close to false ads etc. It seems legal, but I think courts would side with customers.
It's as if Google said, yes, email is $5/mo, but there's actually a limit on the number of emails daily, and also on the number of characters per email. It just feels so illegal to nerf a product that much.
Same with AI companies changing routing and making models dumber from time to time.
aspenmayer · 19h ago
If there is a material difference in the product that causes you to no longer feel that it's the same as it was when you subscribed, it could be considered a bait and switch? I think as soon as you notice that this is the case, you should probably stop paying them though, otherwise you might seem to accept this state of affairs. If you had a long term contract that didn't have some kind of language that tried to prevent this from happening in the first place, you could probably get out from under that contract by saying that the deal has essentially changed out from under you, but I think a lawyer might make that argument much better than me.
I'm not sure what harm you think you're suffering from, and what a proper remedy might be, if you think it's illegal. I don't know if I would go that far, as there are all kinds of words most terms of service use to somehow make it so that you have already acknowledged and agreed to whatever they decide to do. So a lawyer will probably be helpful there as well.
nojs · 1d ago
It’s not just switching costs though, what alternatives to Claude Code are as good right now?
danskeren · 1d ago
Claude definitely engages in some shady and illegal behavior. I wanted to see what the annual plan would cost, as it was just displaying €170+VAT, and when I clicked the upgrade button to find out (I checked everywhere on the page) I was automatically subscribed without any confirmation and without ever seeing the final price before the transaction was completed. Turns out that price was €206.50. Getting refunded was also a pain in the ass.
ponyous · 1d ago
€170 + 21.5% (Irish VAT rate) is €206.55. So not sure what you expected.
One complaint though: in the EU, B2C companies must show prices with VAT included.
qilo · 1d ago
By clicking, he expected to see the full price with VAT included, as required by EU regulations, without doing the math himself.
diggan · 1d ago
> €170 + 21.5% (Irish VAT rate) is €206.55. So not sure what you expected.
Parent clearly stated they only saw "€170+VAT" and not €206.55, so of course they expected to see €206.55 before the purchase went through. Not sure what anyone else would expect?
croemer · 18h ago
Even the ex-VAT price is wrong. It should always just be 206.55; you can't tease a lower ex-tax price first.
OtherShrezzing · 1d ago
You might not have noticed in all the frustration, but you got overcharged. 170 + 20% VAT is 204, not 206.50.
Maybe they added a card fee in at the end, but if they didn’t make that abundantly clear, they’ve broken a law in most countries which use the Euro.
Etheryte · 1d ago
VAT is not 20% everywhere worldwide.
Insanity · 1d ago
Why 20% VAT? Different EU countries have different tax brackets, so OP might not have had 20%.
hinkley · 1d ago
There’s a European country with a 21.470588235% tax rate? No I think GP is right and there’s a hidden 2.50 handling fee.
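A quick back-of-the-envelope check of the possibilities floated in this thread, using only the figures reported above:

    # Implied surcharge on the reported numbers
    pre_vat, charged = 170.00, 206.50
    print((charged / pre_vat - 1) * 100)    # 21.4705...% - not a round VAT rate
    print(round(pre_vat * 1.20 + 2.50, 2))  # 206.5  - consistent with 20% VAT plus a 2.50 fee
    print(round(pre_vat * 1.215, 2))        # 206.55 - consistent with a flat 21.5% rate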
Insanity · 1d ago
Lol, that's a fair point, you're probably right on that.
tsoukase · 1d ago
Probably because VAT_floor(206.5/170)=20
jrs235 · 1d ago
Went to see if I could cancel my Claude Pro plan just now. On billing I see an adjust plan button. On that page I can upgrade but nowhere can I find downgrade/cancel. I checked the account page. I see delete account but "To delete your account, please cancel your Claude Pro subscription first."
Update: below the fold at the bottom of the Billing page is the cancel section and cancel button.
Update 2: just clicked cancel and was offered a promo of 20% off for three months...
Update 3: FYI, I logged in to my Claude account via computer (not iOS or Android).
93po · 1d ago
i tried this to see if i could get a discount and it just canceled it, lol. i wonder if its bc im a heavier user?
Waterluvian · 1d ago
My credit card web portal has a section for reversing transactions without having to call anyone. Maybe yours does too?
lukan · 1d ago
Reversing transactions is not cancelling the service I believe?
As long as you don't cancel, you do owe them money. But if they make cancelling intentionally hard, one would likely have a good case in court to still not pay, if one wanted to go to court over this.
jonathantf2 · 1d ago
My bank app has a bit where it detects common services and tells you how to cancel them in the app, like Amazon Prime
pnvdr · 1d ago
Nice feature.
So it's like a chargeback via the app instead of calling credit card customer care.
Do you mind sharing which credit card company?
Renevith · 1d ago
This is getting to be the norm rather than a unique feature. I can dispute a credit card transaction through the app for my Citi, X1, or Fidelity cards.
Waterluvian · 1d ago
Tangerine Mastercard Canada.
benreesman · 1d ago
I've now got it behind OpenRouter on Bedrock and Vertex, and am rotating in K2 and Qwen more and more as I learn how to get the most out of them, because silent degradation and work-avoidance heuristics are not a variable I need in an already difficult expectation-management regime with my stakeholders.
At the rate the Chinese labs are going, it won't be long before I can shake the dust of this bullshit off my sandals for good.
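For the curious, the rotation itself is not much code, since OpenRouter exposes an OpenAI-compatible chat completions endpoint; a minimal sketch, with the model IDs and fallback order purely illustrative:

    # Minimal sketch: try a list of models in order via OpenRouter's
    # OpenAI-compatible /chat/completions endpoint. Model IDs are illustrative.
    import os, requests

    MODELS = ["moonshotai/kimi-k2", "qwen/qwen3-coder", "anthropic/claude-sonnet-4"]

    def complete(prompt: str) -> str:
        for model in MODELS:
            resp = requests.post(
                "https://openrouter.ai/api/v1/chat/completions",
                headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
                json={"model": model, "messages": [{"role": "user", "content": prompt}]},
                timeout=120,
            )
            if resp.ok:
                return resp.json()["choices"][0]["message"]["content"]
        raise RuntimeError("all models in the rotation failed")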
avereveard · 1d ago
Qwen3 Coder is very good and a tenth of Claude's price on OpenRouter. And it has a :free version for light usage.
I still revert to Gemini 2.5 Pro here and there, and to Claude for specific demanding tasks, but the bulk of my tokens go through open-weight models at the moment.
93po · 1d ago
i use privacy dot com cards to prevent this issue, takes 2 seconds to turn off the card in my extension
conartist6 · 1d ago
"I firmly believe that AI will not replace developers, but a developer using AI will replace a developer who does not."
Ugh, anyone who says that and really believes it can no longer see common sense through the hype goggles.
It's just stupid and completely 100% wrong, like saying all musicians will use autotune in the future because it makes the music better.
It's the same as betting that there will be no new inventions, no new art, no works of genius unless the creator is taking vitamin C pills.
It's one of the most un-serious claims I can imagine making. It automatically marks the speaker as a clown divorced from basic facts about human ability.
atonse · 1d ago
I disagree. While there are developers that truly build new technology and invent novel solutions, the overwhelming majority of developers who are paid for their work daily do pretty mundane and boring software development. They are implementing business logic, building forms, building tables, etc.
And AI already excels at building those sorts of things faster and with cleaner code. I’ve never once seen a model generate code that’s as ugly and unreadable as a lot of the low quality code I’ve seen in my career (especially from Salesforce “devs” for example)
And even the ones that do the more creative problem solving can benefit from AI agents helping with research, documentation, data migration scripts, etc.
conartist6 · 1d ago
I'm one working on novel tech, without AI. I've never been doing more valuable work or been more in command of my craft.
Yet the blanket statement is that I will fail and be replaced, and in fact that people like me don't exist!
So heck yeah I'll come clap back on that.
jennyholzer · 1d ago
Continuing to develop my craft in the 4th straight year of numbskulls claiming that Chat GPT is the end of history has made me extraordinarily confident in my ability to secure lucrative developer work for the rest of my life.
atonse · 1d ago
It is not productive to call people numbskulls and make straw-man arguments just because you don't agree with their statements, or may not have observed the same benefits that they have.
There is absolutely something real here, whether you choose to believe it or not. I'd recommend taking a good faith and open minded look at the last few months of developments. See where it can benefit you (and where it still falls way short).
So even if you may have arrived at your conclusion years ago, I assure you that things continue to improve by the week. You will be pleasantly surprised. This is not all or nothing, nor does it have to be.
jennyholzer · 1d ago
I think you're high on your own supply.
infecto · 1d ago
Your interpretation of the original quote is so far off that I would question your interpretation of the world. Sure, novel stuff is being built, but the vast majority of code at all sizes of companies has been written before in some iteration. Even the novel work being done is surrounded by layers of code that have probably been written before. Are engineers going anywhere? No. But I also don't think it's far fetched to see a possible near term where competent engineers who use AI tools are more productive than the ones who don't. I am not talking about copy and pasting but rather thoughtful use of tooling.
conartist6 · 1d ago
But "more productive" is a wording I also take issue with.
Code is like law. The goal isn't to have a lot of it. Come to me when by "more productive" you actually mean that the person using the LLM deleted more lines of code than anyone else around them while preserving the stability and power of the system
atonse · 1d ago
Right, I chose my words carefully when I said the "overwhelming majority" – and not "every single developer"
myko · 14h ago
> and with cleaner code
I use AI pretty extensively and encourage my folks to use it as well but I've yet to see this come directly from an LLM. With human effort after the fact, sure, but LLMs tend to write inscrutable messes when left to their own devices.
apwell23 · 1d ago
> the overwhelming majority of developers who are paid for their work daily do pretty mundane and boring software development
So are musicians. We think of them as doing creative stuff but a vast majority is mundane.
staunton · 1d ago
Most musicians (i.e. non-famous ones) get most of their income from teaching students. I don't think such a model makes sense for developers
(though who knows, maybe at some time in the future there will be significant numbers of people programming as a hobby and wanting to be coached by a human...)
hecanjog · 1d ago
It really depends. I think if there's a majority trend, it's just to have another job of any kind.
NiloCK · 1d ago
Do the people in this corner use compilers? Would they agree that programmers who don't use them* have been replaced by those that do?
*: I'm aware of cases like the recent ffmpeg assembly usage that gave a big performance boost. When talking about industrial trend lines, I'm OK with admitting 0.001% exceptions.
(Apologies if it comes across as snarky or pat, but I honestly think the comparison is reasonable.)
nottorp · 1d ago
> Do the people in this corner use compilers? Would they agree that programmers who don't use them* have been replaced by those that do?
Are you aware compilers are deterministic most of the time?
If a compiler had a 10% chance of erasing your code instead of generating an executable you'd see more people still using assembly.
conartist6 · 1d ago
Compilers are systems that tame complexity in the "grug-brain" sense. They're true extensions of our senses and the information they offer should be provably correct.
The basic nature of my job is to maintain the tallest tower of complexity I can without it falling over, so I need to take complexity and find ways to confine it to places where I have some way of knowing that it can't hurt me. LLMs just don't do that. A leaky abstraction is just a layer of indirection, while a true abstraction (like a properly implemented high-level language) is among the most valuable things in CS. Programming is theory-building!
bee_rider · 1d ago
Compilers were invented pretty early on in things… I wouldn't be that shocked if the population of assembly programmers has remained constant.
Where would you put the peak? Fortran was invented in the 50’s. The total population of programmers was tiny back then…
diabllicseagull · 1d ago
this is kind of a funny example to me because of all the programming language and compiler discourse that's happening. analogies almost always miss the mark by hiding the nuances that need discussing, and this topic is no exception.
jennyholzer · 1d ago
the comparison is preposterous.
tzumaoli · 20h ago
When Fortran came out, I don't think a lot of people yelled at the assembly programmers and told them "learn Fortran or be replaced".
pklausler · 1h ago
If the assembly programmers were struggling with correctly optimizing loops for optimal performance on several distinct target machines, I would hope that their management would want them to try this new Fortran thing and see how well it worked. (And it did, and it enabled new companies like CDC to win customers from IBM.)
wat10000 · 1d ago
There are technologies that become de facto requirements for work in a field. For software, compilers and version control both qualify.
But... what else? These things are rare. It’s not like there’s a new thing that comes along every few years and we all have to jump on or be left behind, and LLMs are the latest. There’s definitely a new thing that comes along every few years and people say we have to jump on or be left behind, but it almost never bears out. Many of those ended up being useful, but not essential.
I see no indication that LLMs or associated tooling are going to be like compilers and version control where you pretty much can’t find anyone making a living in the field without them. I can see them being like IDEs or debuggers or linters where they can be handy but plenty of people do fine without them.
tirumario · 1d ago
Full disclosure, I'm on the Kilo Code team, but I read your analogy, and I have to respectfully disagree.
Musicians don't all use autotune, because autotune is a specialized technology used to elicit a specific result. BUT, MOST musicians use technology; either to record their work, or mix their tracks, or promote their work.
You could definitely say "A musician who doesn't post online to a platform or save their work in certain audio formats at the studio is going to be replaced by musicians who do."
Are there musicians who still release their work on vinyl or cassette tapes and prefer the sound of a stage with no microphones? Sure. But to dismiss the overarching influence of technology on the process would be ignoring where the progress is going.
I'd argue that people who use Kilo Code don't just "autotune" their code, they're using it as a tool that augments their workflow and lets them build more, faster. That's valuable to an employer. Where the engineer is still vital is in their ability to know what to ask for, how to ask for it, and how to correct the tool when it's wrong, because it'll never be 100% right.
It's just not hype, it's inevitable.
conartist6 · 1d ago
I actually agree with you that LLM assistance is inevitable. The fact that we can have small local models is what convinces me that the tech won't go away.
Even if things are going the direction you say, though, Kilo is still just a fork of VSCode. Lipstick on a pig, perhaps. I would bet that I know the strengths and weaknesses of your architecture quite a lot better than anyone on the Kilo team because the price of admission for you is not questioning any of VSCode's decisions, while I consider all of them worthy of questioning and have done so at great length in the process of building something from scratch that your team bypassed.
LeafItAlone · 1d ago
I believe it. Maybe not replace 100%, but effectively replace it.
I believe that at some point, AI will get good enough that most companies will eventually stop hiring someone that doesn’t utilize AI. Because most companies are just making crud (pun intended).
It’ll be like specialized programming languages. Some will exist, and they may get paid a lot more, but most people won’t fall into that category.
As much as we like to puff ourselves up, our profession isn’t really that hard. There are a relative handful of people doing some really cool, novel things. Some larger number doing some cool things that aren’t really novel, just done very nicely. And the majority of programmers are the rest of us. We are not special.
What I don’t know is the timing. I don’t expect it to be within 5 years (though I think it will _start_ in that time), but I do expect it within my career.
aeon_ai · 1d ago
AI isn't a stylistic preference or minor enhancement, but cognitive augmentation that allows developers to navigate complexity at scales human cognition wasn't designed for.
Just as the developer who refused to adopt version control, IDEs, or Stack Overflow eventually became unemployable, those who reject tools that fundamentally expand their problem-solving capacity will find themselves unable to compete with those who can architect solutions across larger possibility spaces on smaller teams.
Will it be used for absolutely every problem? No - There are clearly places where humans are needed.
But rejecting the enormous impact this will have on the workforce is trading hype goggles for a bucket of sand.
kibwen · 1d ago
> Just as the developer who refused to adopt version control, IDEs, or Stack Overflow eventually became unemployable
This passage forces me to conclude that this comment is sarcasm. Neither IDEs nor the use of Stack Overflow is anywhere near a requirement for being a professional programmer. Surely you realize there are people out there who are happily employed while still using stock Vim or Emacs? Surely you realize there are people out there who solve problems simply by reading the docs and thinking deeply rather than asking SO?
The usage of LLM assistance will not become a requirement for employment, at least not for talented programmers. A company gating on the use of LLMs would be preposterously self-defeating.
jraph · 1d ago
> cognitive augmentation that allows developers to navigate complexity at scales human cognition wasn't designed for
I don't think you should use LLMs for something you can't master without.
> will find themselves unable to compete
I'd wait a bit more before concluding so affirmatively. The AI bubble would very much like us to believe this, but we don't yet know very well the long term effects of using LLMs on code, both for the project and for the developer, and we don't even know how available and in which conditions the LLMs will be in a few months as evidenced by this HN post. That's not a very solid basis to build on.
aeon_ai · 1d ago
Two masters go head to head. One uses AI tools (wisely - after all, they're a master!), the other refuses to. Which one wins?
To your second point -- With as much capital as is going into data center buildout, the increasing availability of local coding LLMs that near the performance of today's closed models, and the continued innovation on both open/closed models, you're going to hang your hat on the possibility that LLMs are going to be only available in a 'limited or degraded' state?
I think we simply don't have similar mental models for predicting the future.
jraph · 1d ago
> Which one wins?
We don't really know yet, that's my point. There are contradictory studies on the topic. See for instance [1] that sees productivity decrease when AI is used. Other studies show the opposite. We are also seeing the first wave of blog posts from developers abandoning the LLMs.
What's more, most people are not masters. This is critically important. If only masters see a productivity increase, others should not use it... and will still get employed because the masters won't fill in all the positions. In this hypothetical world, masters not using LLMs also have a place by construction.
> With as much capital as is going into
Yes, we are in a bubble. And some are predicting it will burst.
> the continued innovation
That's what I'm not seeing. We are seeing small but very costly improvements on a paradigm that I consider fundamentally flawed for the tasks we are using it for. LLMs still cannot reason, and that's IMHO a major limitation.
> you're going to hang your hat on the possibility that LLMs are going to be only available in a 'limited or degraded' state?
I didn't say I was going to, but since you are asking: oh yes, I'm not putting my eggs in a box that could abruptly disappear or become very costly.
I simply don't see how this thing is going to be cost efficient. The major SaaS LLM providers can't seem to manage profitability, and maybe at some point the investors will get bored and stop sending billions of dollars towards them? I'll reconsider when and if LLMs become economically viable.
But that's not my strongest reason to avoid the LLMs anyway:
- I don't want to increase my reliance on SaaS (or very costly hardware)
- I have not caved in yet in participating in this environmental disaster, and in this work pillaging phenomenon (well, that last part, I guess I don't really have a choice, I see the dumb AI bots hammering my forgejo instance).
There's a clear difference between "I have used these tools, tested their limits, and have opinions" and "I am consuming media narratives about why AI is bad"
AI presently has a far lower footprint on the globe than the meat industry -- The US Beef industry alone far outpaces the impact of AI.
As far as "work pillaging" - There is cognitive dissonance in supporting the freedom of information/cultural progress and simultaneously desiring to restrict a transformative use (as it has been deemed by multiple US judges) of that information.
We don't see eye to eye, and I think that's ok. We can re-evaluate our predictions in a year!
9rx · 1h ago
> The US Beef industry alone far outpaces the impact of AI.
Beef has the benefit of seeing an end, though. Populations are stabilizing, and people are only ever going to eat so much. As methane has a 12 year life, in a stable environment the methane emissions today simply replace the emissions from 12 years ago. The carbon lifecycle of animals is neutral, so that is immaterial. It is also easy to fix if we really have to go to extremes: Cull all the cattle and in 12 years it is all gone!
Whereas AI, even once stabilized, theoretically has no end to its emissions. Emissions that are essentially permanent, so even if you shut down all AI when you have to take extreme measures, the effects will remain "forever". There is always hope that we'll use technology to avoid that fate, but you know how that usually goes...
jraph · 1d ago
> consuming media narratives about why AI is bad
That's quite uncharitable.
I don't need to use it to make these points. While I might show a lack of perspective, I don't need to do X to reasonably think X can be bad. You can replace X with all sorts of horrible things, I'll let the creativity of the readers fill in the gap.
> AI presently has a far lower footprint on the globe than [X]
We see the same kind of arguments for planes, cars, anything with a big impact really. It still has a huge (and growing) environmental impact, and the question is do the advantages outweigh the drawbacks?
For instance, if a video call tool allowed you to have a meeting without taking a plane, the video call tool had a positive impact. But then there's also the ripple effect: if without the tool, the meeting hadn't happened at all, the positive impact is less clear. And/or if the meeting was about burning huge amounts of fuel, the positive impact is even less clear, just like LLMs might just allow us to produce attention-seeking, energy-greedy shitty software at a faster speed (if they indeed work well in the long run).
And while I can see how things like ML can help (predicting weather, etc), I'm more skeptical about LLMs.
And I'm all for stopping the meat disaster as well.
> We don't see eye to eye, and I think that's ok. We can re-evaluate our predictions in a year!
Yep :-)
aeon_ai · 17h ago
It's not intended to be uncharitable - You clearly value many things I do (the world needs less attention-seeking, energy greedy shitty software).
I don't know how else one gets information and forms an opinion on the technology except for media consumption or hands-on experience. Note that I count "social media" as media.
My proposition is that without hands-on experience, your information is limited to media narratives, and the "AI is net bad" narrative seems to be the source of your perspective.
Skepticism is warranted, and there are a million ways this technology could be built for terrible ends.
But, I'm of the opinion that:
A) The technology is not hype, and is getting better
B) That it can, and will, be built -- Time horizon debatable.
C) That for it to result in good outcomes for humanity, it requires good people to help shape it in its formative years.
If anything, more people like you need to be engaging it to have grounded perspectives on what it could become.
jraph · 5h ago
> I don't know how else one gets information and forms an opinion on the technology except for media consumption or hands-on experience.
Okay, I think I got your intent better, thanks for clarifying.
You can add discussion with other people outside software media, or opinion pieces outside media (I would not include personal blogs in "media" for instance, but would not be bothered if someone did), including people who tried and people who didn't. Media outlets are also not uniform in their views.
But I hear you, grounded perspectives would be a positive.
> That for it to result in good outcomes for humanity, it requires good people to help shape it in its formative years.
I hear you as well, makes perfect sense.
OTOH, it's difficult to engage with something that feels fundamentally wrong or like a dead end, and that's what LLMs feel like to me. It would also be frightening: the risk that, as a good person, you help shape a monster.
The only way out I can see is inventing the thing that will make LLMs irrelevant, but also don't have their fatal flaws. That's quite the undertaking though.
We'd not be competing on an equal footing: LLM providers have been doing things I would never have dared even considering: ingesting considerable amount of source materials completely disregarding their licenses, hammering everyone servers, spending a crazy amount of energy, sourcing a crazy amount of (very closed) hardware, burning an insane amount of money even on paid plans. It feels very brutal.
Can an LLM be built avoiding any of this stuff? Because otherwise, I'm simply not interested.
(of course, the discussion has shifted quite a bit! The initial question was if a dev not using the LLMs would remain relevant, but I believe this was addressed at large in other comments already)
danaris · 20h ago
> There's a clear difference between...
There's also a clear difference between users of this site that come here for all types of content, and users who have "AI" in their usernames.
I think that the latter type might just have a bit of a bias in this matter?
aeon_ai · 17h ago
I'd be surprised if one needed to refer to my username to make a determination on me viewing the technology more optimistically, although I do chafe a tad at the notion that I don't come here for all types of content.
diggan · 1d ago
> I don't think you should use LLMs for something you can't master without.
I'm not sure. I frequently use LLMs for well-scoped math-heavy functions (mostly for game development) where I don't necessarily understand what's going on inside the function, but I know what output I expect given some inputs, so it's easy for me to kind of black-box test it with unit tests and iterate on the "magic" inside with an LLM.
I guess if I really stopped and focused on math for a year or two I'd be able to code that myself too, but every time I've tried to get deeper into math it's either way too complex for me to feel like it's time well spent, or it's just boring. So why bother?
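As a made-up illustration of that loop (the function, its signature, and the expected values are hypothetical; the point is that the tests are the contract and the body is a replaceable black box):

    # Hypothetical example: reflect() might be LLM-written and treated as a
    # black box; the asserts pin down the behaviour I actually care about.
    def reflect(vx: float, vy: float, nx: float, ny: float) -> tuple[float, float]:
        # Reflect vector v across a surface with unit normal n: r = v - 2(v.n)n
        d = vx * nx + vy * ny
        return (vx - 2 * d * nx, vy - 2 * d * ny)

    def test_reflect():
        # bouncing off a horizontal floor flips the vertical component
        assert reflect(1.0, -1.0, 0.0, 1.0) == (1.0, 1.0)
        # reflecting twice returns the original vector
        assert reflect(*reflect(0.3, 0.7, 0.0, 1.0), 0.0, 1.0) == (0.3, 0.7)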
jraph · 1d ago
I can't comment on your well-scoped case. I can still see it backfire (in terms of maintenance), but it does seem you are being cautious and limiting the potential damage: with your unit tests you are at least increasing the level of confidence you can have in the LLM output, and it's on a very specific part of your code.
I didn't have such cases in mind, was replying to the "navigate complexity at scales human cognition wasn't designed for" aspect.
laughingcurve · 1d ago
“I don’t think you should pay others for something you can’t master without [paying].” is one hell of an argument to make. Good luck trying.
jraph · 1d ago
> is one a hell of an argument to make.
I agree, but it's not mine.
jennyholzer · 1d ago
IMO Version Control, IDEs, and Stack Overflow are many orders of magnitude more valuable than GPT tools.
The use cases of these GPT tools are extremely limited. They demo well and are quite useful for highly documented workflows (E.G. they are very good at creating basic HTML/JS layouts and functionality).
However, even the most advanced GPT tools fall flat on their face when you start working with any sort of bleeding edge, or even just less-ubiquitous technology.
philipp-gayret · 1d ago
That is interesting, in my experience these tools work quite well on larger codebases but it depends how you use them and I haven't really found a counterexample. Do you maybe have a practical example, like a repo you could link that just doesn't work for AI?
jennyholzer · 1d ago
GPT tools give piss-poor suggestions when working with the Godot game engine.
The Godot engine is an open-source project that has matured significantly since GPT tools hit the market.
The GPTs don't know what the new Godot features are, and there is a training gap that I'm not sure Open AI and their competitors will ever be able to overcome.
Do you work on web applications? I've found that GPT tools get pretty stupid once you stop working with HTML/JS.
philipp-gayret · 8h ago
Thanks for the Godot example. I experimented with it through Claude Code (I have no prior experience with it). Got a Vampire Survivors-esque game working from scratch, with plain shapes representing the game elements like the player or enemies, in 70 minutes on and off. It included a variety of 5 weapons, enemies moving toward the player, player movement, little experience orbs dropping when enemies expire, a projectile and area-of-effect damage system, and a levelling-up and upgrade system and UI which influenced weapon behaviours.
Godot with AI was definitely a worse experience than usual for me. I did not use the Godot editor. It seems like the development flow for Godot however is based around it. Scenes were generated through a Python script, which was of course written by Claude Code. Personally, I reviewed no line of code during the process.
My findings afterwards are;
1) Code quality was not good. Personally I have a year of experience working with Unity, and online the code examples tend to be of incredibly poor quality. My guess is that if AI is trained on the online corpus of game development forums, the output should be absolutely terrible. For the field of game development especially, AI is tainted with this poor quality. It did indeed not follow modern practices, even after having hooked up a context MCP which provides code examples.
2) It was able to refactor the codebase to modern practices upon instructing it to; I told it to figure out what modern practices were and to apply them; it started making modifications like adding type hints and such. Commonly you would use predefined rules for this with an LLM tool, I did not use any for my experiment. That would be a one-time task after which the AI will prefer your way of working. An example for Godot can be found here: https://github.com/sanjeed5/awesome-cursor-rules-mdc/blob/ma...
3) It was very difficult to debug for Claude Code. The platform seems to require working with a dedicated editor, and the flow for debugging is either through that editor or by launching the game and interacting with it. This flow is not suitable at the moment for out of the box Claude Code or similar tools which need to be able to independently verify that certain functions or features work as expected.
> Do you work on web applications? I've found that GPT tools get pretty stupid once you stop working with HTML/JS.
Not really - I work on developer experience and internal developer platforms. That is 80~90% Python, Go, Bash, Terraform and maybe a 10~20% Typescript with React depending on the project.
Kim_Bruning · 1d ago
I stopped using GPT for coding long ago, mostly because I use Claude.
With Claude I can even write IC10 code. (with a bit of help and understanding of how Claude works)
IC10 is a fictional, MIPS-like CPU in the game Stationeers. So that's pretty promising for most other things.
nottorp · 1d ago
Think larger codebases that do not involve node/npm...
philipp-gayret · 1d ago
The reason I ask is I work on developer experience, and I often see feedback online from developers that AI simply doesn't work for their flow. So I'm looking for real-world concrete examples of what AI development tools are struggling with. Or maybe it is how you use the tool? Personally I haven't run into limitations, so I'm really looking for hard counter-examples. The Godot example by the other poster was great, maybe you could provide another?
nottorp · 23h ago
Can't help because I haven't tried those bots that edit the code for you.
I just use "AI" instead of Google/SO when I need to find something out.
So far it mostly answers correctly, until the truthful answer comes close to "you can't do that". Then it goes off the rails and makes up shit. As a bonus, it seems to confuse related but less popular topics and mixes them up. Specific example, it mixes couchdb and couchbase when I ask about features.
The worst part is 'correctly' means 'it will work but it will be tutorial level crap'. Sometimes that's okay, sometimes it isn't.
So it's not that it doesn't work for my flow, it's that I can't trust it without verifying everything so what flow?
Edit: there's a codebase that i would love to try an "AI" on... if i wouldn't have to send my customer's code to $random_server with $random_security with $untrustable_promises_of_privacy. Considering how these "AI"s have been trained, I'm sure any promise that my code stays private is worth less than used toilet paper.
Gut feeling is the "AI" would be useless because it's a not-invented-here codebase with no discussion on StackOverflow.
n4r9 · 1d ago
This may not be the strong argument you think it is. There are plenty of highly productive senior developers who either don't use IDEs or SO, or use them very minimally. Even version control, if they're working alone. Smart devs will find out how to be virtually as productive in a terminal as they would be with an IDE. Potentially more productive when solving edge-case issues with processes that IDEs abstract away.
oblio · 1d ago
IDEs can be iffy, but any project bigger than a 20 line throwaway script needs/deserves version control.
contravariant · 1d ago
If reading your code requires navigating complexity that human cognition wasn't designed for then something has gone terribly wrong.
conartist6 · 1d ago
Really, scales human cognition wasn't designed for?
Human cognition wasn't designed to make rockets or AIs, but we went to the moon and the LLMs are here. Thinking and working and building communities and philosophies and trust and math and computation and educational institutions and laws and even Sci Fi shows is how we do
Chris2048 · 1d ago
> we went to the moon
We also killed quite a few astronauts.
conartist6 · 1d ago
RIP Grissom, White and Chaffee.
But the loss of their lives also proves a point: that achievement isn't a function of intelligence but of many more factors like people willing to risk and to give their lives to make something important happen in the world. Loss itself drives innovation and resolve. For evidence, look to Gene Kranz: https://wdhb.com/wp-content/uploads/2021/05/Kranz-Dictum.pdf
Chris2048 · 1d ago
In that case, the decision to launch was taken not by the astronauts risking their lives, but by NASA management, against the recommendation of Morton Thiokol's engineers. This was not simply an unfortunate "accident", but an entirely preventable gamble.
True, but did NASA in 1986 really need to learn this lesson?
This isn't (just) rocket science, it's the fundamentals of risk liability, legality and process that should be well established in a (quasi-military) agency such as this.
CrazyStat · 1h ago
Two different incidents. The parent was talking about Apollo 1, not Challenger.
conartist6 · 1d ago
Yeah I think they did need to learn it.
They knew they were taking some gambles to try to catch up in the Space Race. The urgency that justified those gambles was the Cold War.
People have a social tendency to become complacent about catastrophic risks when there hasn't been a catastrophe recently. There's a natural pressure to "stay chill" when the people around you have decided to do so. Speaking out about risk is scary unless there's a culture of people encouraging other to speak out and taking the risks seriously because they all remember how bad things can be if they don't.
Someone actually has to stand up and say "if something is wrong I really actually want to and need to know." And the people hearing that message have to believe it, because usually it is said in a way that it is not believed.
catmanjan · 1d ago
Maybe somehow this will be true in the future, but I am finding that as soon as you work on a novel or undocumented or non internet available problem it is just a hallucinating junior dev
mcny · 1d ago
The dirty secret is most of the time we are NOT working on anything novel at all. It is pretty much a CRUD application and it is pretty much a master detail flow.
NoGravitas · 1d ago
Even for completely uninteresting CRUD work, you're better off with better deterministic tooling (scaffolding for templates, macros for code, better abstractions generally). Unfortunately, past a certain low level, we're stuck rolling our own for these things. I'm not sure why, but I am guessing it has to do with them not being profitable to produce and sell.
jennyholzer · 1d ago
I work on novel technologies.
catmanjan · 1d ago
Yeah me too, better off buying something if it already exists
drw85 · 1d ago
I think this is a somewhat short sighted perspective. It's not really augmenting, but replacing cognition.
I see people starting to rapidly unlearn working by themselves and becoming dependent on GPT, making themselves quite useless in the process. They no longer understand what they're working with and need the help of the tool to work. They're also entirely helpless when whatever 'AI' tool they use can't fix their problem.
This makes them both more replaceable and less marketable than before.
It will have and already has a huge impact. But it's kinda like the offshoring hype from a decade ago.
Everyone moved their dev departments to a cheaper country, only to later realize that maybe cheap does not always mean better or even good. And it comes with a short term gain and a long term loss.
boleary-gl · 1d ago
Author and Kilo Code team member here - this is a much better explanation of what I mean. And honestly, it's a quick phrase I've been using that is shorthand really for THIS much better-written take.
mark_l_watson · 1d ago
+1, even though I mildly disagree with you. I pay for Gemini Pro by the year, and even though I don't use it often, it is still high value. There are obvious things like quickly generating a Bash shell script for tasks I rarely do; they're simple, and I save 5 minutes here and there. Sometimes code generation can be useful, in moderation.
But the big thing is using AI to learn new things, explain some tricky math in a paper I am reading, help brain storm, etc. The value of AI is in improving ourselves.
jennyholzer · 1d ago
> explain some tricky math in a paper I am reading
To me this seems to be the single most valuable use case of newer "AI tools"
> generating a Bash shell script quickly
I do this very often, and to me this seems to me the second most valuable use case of newer "AI tools"
> The value of AI is in improving ourselves
I agree completely.
> help brain storm
This strikes me as very concerning. In my experience, AI brainstorming ideas are exceptionally dull and uninspired. People who have shared ideas from AI brainstorming sessions with me have OVERWHELMINGLY come across as AI brained dullards who are unable to think for themselves.
What I'm trying to say is that Chat GPT and similar tools are much better suited for interacting with closed systems with strict logical constraints, than they are for idea generation or writing in a natural language.
mark_l_watson · 1d ago
For brainstorming: when I write out a plan for a writing or coding project, I like to ask questions for ‘what am I missing or leaving out?’, etc.
Really, it is like students using AI: some are lazy and expect it to do all the work, some just use it as a tool as appropriate. Hopefully I am not misunderstanding you and others here, but I think you are mainly complaining about lazy use of AI.
Kim_Bruning · 1d ago
A surprisingly large number of musicians do use things like written music (a medieval invention, but came into its own in the renaissance) or amplifiers (a modern one).
conartist6 · 1d ago
Sure but that's different than saying "if you don't use those things you are worthless and will fail"
Kim_Bruning · 1d ago
Ah, I personally read it as "Use of good tools makes you more competitive",
but you're right that "I firmly believe that AI will not replace developers, but a developer using AI will replace a developer who does not." could have multiple other readings too.
boleary-gl · 1d ago
Author and Kilo Code team member here - yes your interpretation is what I meant...but to be fair as you mention it could be read multiple ways.
Let's be fair - I made it intentionally a little provocative :)
conartist6 · 1d ago
Haha I'm glad to hear it, because I do the same of course.
What I might not have mentioned is that I've spent the last 5 years and 20,000 or so hours building an IDE from scratch. Not a fork of VSCode, mind you, but the real deal: a new "kernel" and integration layer with abilities that VSCode and its forks can't even dream of. It's a proper race and I'm about to drop the hammer on you.
shepherdjerred · 1d ago
This is true for 99% of developers who aren't particularly talented or driven, e.g. the average engineer who treats their job as a job and not their passion.
bubblyworld · 1d ago
Lol, I think making confident claims in either direction is total copium at this point. It's abundantly obvious that LLM-based tools are useful, it's just a question of what we'll settle on using them for and to what degree.
Nobody knows how this will play out yet. Reality does not care about your feelings, unfortunately.
theshrike79 · 1d ago
Just completely ignoring LLMs is like not wanting to use autocomplete or language servers, preferring to hand-craft everything.
But on the other hand there is the other end who think AGI is coming in a few months and that LLMs are omniscient knowledge machines.
There is a sweet spot in the middle.
chuckadams · 1d ago
There's certainly room for debate, but I've taken to just ignoring the people who proclaim that AI is merely a passing fad or a useless parlor trick. I can't predict much, but I can confidently say I won't be the only one ignoring them.
NoGravitas · 1d ago
For me, the sweet spot seems to be "keep an eye on LLM developments, but don't waste work time trying to use them yet".
theshrike79 · 10h ago
I'd suggest using LLMs as "Autocomplete on steroids" and see how that goes.
If online models aren't your thing twinny.dev + ollama will make it fully local.
bubblyworld · 22h ago
Yeah, that makes sense to me for a lot of reasons. I recently decided to take the plunge and start introducing AI tools in my startup. Fairly cheap experiment, all things considered - three months of giving it a try as a team and then we'll make a call. Time will tell!
delfinom · 1d ago
Oh no, it's not hype. The problem is the sentence is incomplete.
a developer using AI in a low-cost region will replace any developer in a high cost region ;)
TrackerFF · 1d ago
When you offer "unlimited", you're immediately exposing yourself to the 0.1% of userbase that will absolutely treat it is unlimited. Whale users/hoarders. This has been a known problem since the dawn of hosting...hell, it goes back to times before computers.
One thing I miss for the other users, i.e. the casual users that never use anywhere near their quota, is rollover. If you haven't used your quota this month, the unused portion rolls over to the next month.
thimabi · 1d ago
Things would be much easier if, instead of “unlimited”, companies simply offered great usage limits. Yes, hoarders would still exist, but their impact would be negligible.
Even better: provide a counter displaying both remaining usage available and the quota reset time.
But companies probably earn so much money from the vast majority of users under-using their plans that they avoid clear limits, since those would only empower users to actually get as much benefit from the product as they can.
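A sketch of what that accounting could look like; purely illustrative, since the allowance numbers and rollover policy here are invented rather than anything a provider actually offers:

    # Illustrative only: a monthly quota that rolls unused balance forward.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Quota:
        monthly_allowance: int   # e.g. requests or tokens per month
        balance: int             # what is currently available
        resets_at: datetime      # shown to the user alongside the balance

        def consume(self, amount: int) -> bool:
            if amount > self.balance:
                return False     # over quota; the UI can display resets_at
            self.balance -= amount
            return True

        def roll_over(self, now: datetime) -> None:
            if now >= self.resets_at:
                # unused balance carries forward on top of the new allowance
                self.balance += self.monthly_allowance
                self.resets_at += timedelta(days=30)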
jollyllama · 1d ago
It simply shouldn't be called unlimited, then.
const_cast · 22h ago
This is the crux of the issue.
Every company wants the marketing of unlimited, but none of them want the accountability.
vertoc · 1d ago
I think the hard part with these AI models is that it's hard to figure out how many tokens you're going to use, especially as a new user. 4,000 tokens sounds like a lot, for instance, but it is tiny.
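For a rough sense of scale, here's a check with OpenAI's tiktoken tokenizer; Claude and Gemini tokenize differently, but the order of magnitude is similar:

    # Rough scale check with tiktoken; other providers' tokenizers differ.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    text = "The quick brown fox jumps over the lazy dog. " * 100  # ~900 words
    print(len(enc.encode(text)))  # on the order of 1,000 tokens,
                                  # so a 4,000-token budget is only a few pages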
thimabi · 23h ago
If companies offered any sort of counter, users would quickly understand their usage patterns and adapt accordingly. But users are kept in the dark on purpose, often not knowing how many tokens they use, and sometimes not even knowing what are the token limits.
fuzzzerd · 1d ago
While I don't disagree that determining usage is incredibly difficult, it doesn't take a genius to see that something sold "by the million" is something you can expect to use a lot of.
ollybee · 1d ago
The problem with offering explicit usage limits is that the real limit you could afford to offer, if all users actually hit it, is low. Users with usage far below the limit feel they are getting a bad deal, whereas if they can't see the limit and don't hit it, they feel they have "unlimited!"
Spooky23 · 1d ago
I disagree. People will always hoard a constrained resource. Things like phone service and AI tools find their value when people use them freely.
The AI models have a bunch of different consumption models aimed at different types of use. I work at a huge company, and we’re experimenting with different ways of using LLMs for users based on different compliance and business needs. The people using all you can eat products like NotebookLM, Gemini, ChatGPT use them much more on average and do more varied tasks. There is a significant gap between low/normal/high users.
People using an interface to a metered API, which offers a defined LLM experience, consume fewer resources and perform more narrowly scoped tasks.
The cost is similar and satisfaction is about the same.
OldfieldFund · 1d ago
An even more insidious method is selling "lifetime access," offering incredible over-the-top value, then exiting after 2 or 3 years when the momentum starts dropping. These are essentially rug pulls/exit scams.
There is no such thing as "unlimited" or "lifetime" unless it's self-hosted.
windward · 1d ago
>This has been a known problem since the dawn of hosting...hell, it goes back to times before computers.
yep
>Adverse selection has been discussed for life insurance since the 1860s,[3] and the phrase has been used since the 1870s.[4]
JumpCrisscross · 1d ago
> 0.1% of userbase that will absolutely treat it is unlimited. Whale users/hoarders
This is somewhat a different issue that’s largely accepted by courts and society bar that one neighbour who is incensed they can’t run a rack off their home internet that was marketed unlimited.
Aurornis · 1d ago
You’re fully right and I’ve seen it play out at a startup. It’s unreal to see how that 0.1% userbase can find insane ways to use your service. For some people, it becomes a game to discover your rate limits and develop a service that goes just under the rate limit 24/7
In some cases, people discover creative ways to resell the service. Anthropic mentioned they suspect this was happening.
The weirdest part about this whole internet uproar, though, is that Anthropic never offered unlimited usage. It was always advertised as higher limits.
Yet all the comment threads about it are convinced it was unlimited and now it’s not. It’s weird how the internet will wrap a narrative around a story like this.
CraigRood · 1d ago
Thing is, even with users that don't use the quota, these AI companies are still losing money. This isn't a case of the small users paying for large.
The true costs of AI are yet to unravel.
One does find it occasionally. My mobile phone plan (in Sweden) currently says that I have 774G left of my 25G/month quota for example :)
Philpax · 1d ago
Thank you, Hallon - I've been on the same plan for the last four years and I don't think I've ever checked my data quota :)
johnisgood · 1d ago
So it accumulates for you? That is nice. I wish my unused quota would go into the next month, but it resets. :(
kqr · 1d ago
One might wonder why you pay for a plan with 25 GB/mo when you use so little that you have over 2.5 years of it saved!
I thought I had low usage with my 1.5 years' worth saved. The only reason I pay for that plan is that with anything lower my provider does not offer rollover.
dcminter · 1d ago
Because it's inexpensive and I'd rather pay marginally over the odds than ever have to think about it (I'm not even sure there was a cheaper plan).
ajsnigrutin · 1d ago
Sometimes these are the smallest usable packages.
E.g. here in Slovenia, if you want unlimited calls and texting, you get 150GB in your "package" for 9.99eur, but you somehow can't save that data for the next month.
It's easier to keep data limits "unlimited" by slowing the speed, which was already finite, to barely serviceable levels. If that is clearly laid out, it seems a reasonable solution.
dcminter · 1d ago
If they've ever done that then I've never noticed it - and mine doesn't claim to be unlimited.
raverbashing · 1d ago
Still limited. But "unlimited" to most people
In the same way your next-door supermarket has effectively "infinite soup cans" for the needs of most people.
wat10000 · 1d ago
Unlimited means for a flat rate. Pay per use/item, like at the supermarket, isn’t it.
raverbashing · 23h ago
Still, you can't get "unlimited" soup cans from your next-door supermarket, even by paying.
wat10000 · 19h ago
Which is why they don’t sell it as “unlimited.” The gripe isn’t that there are limits, it’s that it’s sold as “unlimited” but there are limits.
raverbashing · 8h ago
Which, to any reasonable person (and it's in the fine print), should mean reasonable limits.
But I guess some people do really need "Eggs: contain eggs" in their egg carton otherwise they will throw a legal fit
epistasis · 1d ago
I hit the limit with a single set of queries yesterday, the very very first time I ever clicked on that "Research" button. And it stopped me from using any of their services at all, without warning. I am a very light user of queries, less than one a day on the cheaper models.
alt227 · 1d ago
I haven't seen the word unlimited used in advertising without an asterisk for a long, long time.
tsoukase · 1d ago
All-you-can-eat deals at restaurants come to mind.
bayindirh · 1d ago
I went to a particular sports bar back in the day. They had an "all you can eat ribs" plate. The first plate comes fully sauced and is very, very delicious.
When you order the second plate, it comes without the sauce and tastes flatter. You're full at this point and can't order a third.
Very creative and fun, if you ask me. I was prepared for it though, because the people I went with had told me exactly how it was going to go.
tsoukase · 21h ago
I was thinking about a self-service "buffet", where the lowest quality starts from the first plate.
anthonypasq · 1d ago
In Cursor you can just set a monthly spend limit at API pricing. Seems like what you're looking for.
sjsdaiuasgdia · 1d ago
Exactly. "Unlimited" is practically guaranteed to become a lie for any service that provides some amount of utility and lasts long enough to accumulate a significant number of users.
Nothing in our world is truly unlimited. Digital services and assets have different costs than their physical counterparts, but that just means different limits, not a lack of them. Electrical supply, compute capacity, and storage are all physical things with real world limits to how much they can do.
These realities eventually manifest when someone tries to build an "unlimited" service on top of limited components, similar to how you can't build a service with 99.999% reliability when it has a critical piece that can only get to 99.9%.
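A quick back-of-the-envelope sketch of that last point, with made-up component availabilities: for serially dependent components, availabilities multiply, so the composite can never beat its weakest link.

```python
# Availability of serially dependent components multiplies,
# so the composite can never exceed the weakest link.
# The component values here are made up purely for illustration.
components = {
    "load_balancer": 0.99999,
    "api_layer":     0.99995,
    "gpu_backend":   0.999,   # the 99.9% critical piece
}

composite = 1.0
for availability in components.values():
    composite *= availability

print(f"composite availability: {composite:.5f}")        # ~0.99894, i.e. below 99.9%
print(f"weakest link:           {min(components.values()):.5f}")
```

No amount of redundancy elsewhere in the chain gets you past the 0.999 piece; the same arithmetic applies to "unlimited" promises built on finite compute.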
sasmithjr · 1d ago
I didn't look at the URL at first and was surprised when this turned into an ad. Oh well!
> Stop selling "unlimited", when you mean "until we change our minds"
The limits don't go into effect until August 28th, one month from yesterday. Is there an option to buy the Max plan yearly up front? I honestly don't know; I'm on the monthly plan. If there isn't a yearly purchase option, no one is buying unlimited and then getting bait-and-switched without enough time to cancel their sub if they don't like the new limits.
> A Different Approach: More AI for Less Money
I think it's really funny that the "different approach" is a limited time offer for credits that expire.
I don't like that the Claude Max limits are opaque, but if I really need pay-per-use, I can always switch to the API. And I'd bet I still get >$200 in API-equivalents from Claude Code once the limits are in place. If not? I'll happily switch somewhere else.
And on the "happily switch somewhere else", I find the "build user dependency" point pretty funny. Yes, I have a few hooks and subagents defined for Claude Code, but I have zero hard dependency on anything Anthropic produces. If another model/tool comes out tomorrow that's better than Claude Code for what I do, I'm jumping ship without a second thought.
theshrike79 · 1d ago
TBH paying yearly for ANY LLM tool at this time is just pure insanity.
The field is moving so fast that whatever was best 6 months ago is completely outdated.
And what is top tier today, might be trash in a few months.
piker · 1d ago
The second sentence is revealing. The creation of the 787 didn’t make the 747 “trash”.
theshrike79 · 10h ago
You also couldn't just unsubscribe from the 747 and get a 787 at the same price.
Services are not the same thing as physical goods.
piker · 58m ago
True, but the statement sort of implies that the models are intrinsically "trash" today, and we're just waiting for that to be revealed by the next relatively superior model.
mentalgear · 1d ago
E.g.: Apple's new "iCare" offers "unlimited" repairs by subscribing for 20 USD / month!
This is standard in virtually every insurance program. There are a lot of studies showing that even the tiniest amount of cost sharing completely changes how people use a service.
When something is unlimited and free, it entices people to abuse it in absurd ways. With hardware, you would get people intentionally damaging their gear to get new versions for free, because they know it costs them nothing.
diabllicseagull · 1d ago
sounds a lot like any other insurance where the next part of the game is finding reasons why they can't provide the promised repairs.
That's crazy. I misinterpreted it just like others did, thinking that you pay a monthly fee but repairs are free. But on re-reading the terms, it turns out this is just pure bullshit. Why would anyone buy this "new iCare"?!
tonyhart7 · 1d ago
apple fanboy/fangirl
jajko · 1d ago
Pretty typical of Apple, and the reason for it is its users: the whole world knows that Apple can charge a premium and sometimes deliver on it, while sometimes the cheaper competition is better or more durable.
Don't blame the company; it acts within the boundaries allowed by its paying customers, and Apple customers are known to be... much less critical of the company and its products, to put it politely, especially given its premium prices.
philistine · 1d ago
> and apple customers are known to be... much less critical of the company
This is patently false and has been for the whole existence of Apple. Apple customers are voraciously critical of the company. Just probably not about the things you consider important.
sudhirj · 1d ago
Did the Max plan ever promise unlimited anything? I’m on it, and I remember seeing and paying for 20x, not infinity.
There is a case to be made that they sold a multiple and are changing x or rate limiting x differently, but the tone seems different from that.
davidbarker · 1d ago
No, it's never promised unlimited — it's always had usage limits: 20× the usage of their regular Pro plan, with a limit of 50 sessions per month (a session being a 5-hour window), although I don't know if they ever enforced this.
So even the mystery people Anthropic references, who did run it "in the background, 24/7", would still have had to stay within the usage limits.
Aurornis · 1d ago
I can’t find any indication it was ever sold as unlimited.
It always had limits and those limits were not specified as concrete numbers.
It’s amazing how much of the internet outrage is based on the idea that it was unlimited and now it’s not. The main HN thread yesterday was full of comments complaining about losing unlimited access.
It’s so weird to watch people get angry about thinking they’re losing something they never had. Even Anthropic said less than 5% of accounts would even notice the new limits, yet I’ve seen countless comments raging that “everyone must suffer” due to the actions of a few abusing the system.
paulhodge · 16h ago
Yeah the outrage is a little artificial and definitely premature.
Some facts for sanity:
1- The poster of this blog article is Kilocode who makes a (worse) competitor to Claude Code. They are definitely capitalizing on this drama as much as they can. I’ve been getting hit by Reddit ads all day from Kilocode, all blasting Anthropic, with the false claim that their plan was "unlimited".
2- No one has any idea yet what the new limits will be, or how much usage it actually takes to be in the top 5% to be affected. The limits go into effect in a few days. We'll see then if all the drama was warranted.
mrits · 1d ago
Looking at their pricing page in the Wayback Machine, it looks like they've had "usage limits" terminology for at least the last year.
bananapub · 1d ago
> Did the Max plan ever promise unlimited anything?
I feel like this is where the heavyweights are really going to start throwing their weight around and basically drive the smaller startups out of town.
Can you really ever compete when you are renting someone else's GPUs?
Can you really ever compete when you are going up against custom silicon built and deployed at scale to run inference at scale (i.e. TPUs built to run Gemini and deployed by the tens-of-thousands in data centers around the globe)?
Meta and Google have deep pockets and massive existing world-class infrastructure (at least for Google; Meta probably runs their PHP Facebook thing on a few VPSes dotted around in some random colos /s). They've literally written the book on this.
It remains to be seen how much more money OpenAI can burn, but we've started to see how much Anthropic can burn if nothing else.
cedws · 1d ago
There’s only one way unlimited plans work: either the seller gets scammed, or the buyer does.
When companies sell unlimited plans, they’re making a bet that the average usage across all of those plans will be low enough to turn a profit.
These people “abusing” the plan are well within their right to use the API as much as they want. It just didn’t fall into the parameters Anthropic had expected.
LLM subscriptions need to go away, why can’t we just pay as we go? It’s the fairest way for everyone.
Aurornis · 1d ago
> When companies sell unlimited plans,
Anthropic never sold an unlimited plan
It’s amazing that so many people think there was an unlimited plan. There was not an unlimited plan.
> These people “abusing” the plan are well within their right to use the API as much as they want. It just didn’t fall into the parameters Anthropic had expected.
Correct! And they did. And now Anthropic is changing those limits in a month.
> LLM subscriptions need to go away, why can’t we just pay as we go? It’s the fairest way for everyone.
This exists. You use the API. It has always been an option. Again, I’m confused about why there’s so much anger about something that already exists.
The subscriptions are nice for people who want a consistent fee and they get the advantage of a better deal for occasional heavy usage.
cedws · 1d ago
>Anthropic never sold an unlimited plan
I'm told the $200/month plan was practically unlimited; I heard you could leave ~10 instances of Claude Code running 24/7. I will never pay for any of these subscriptions, however, so I haven't verified that.
>And now Anthropic is changing those limits in a month.
Which indicates the seller was being scammed. Now they're changing the limits so it swings back to being a scam for the user.
>I’m confused about why there’s so much anger about something that already exists
Yes but much LLM tooling requires a subscription. I'm not talking only about Anthropic/Claude Code. I can't use chatgpt.com using my own API key. Even though behind the scenes, if I had a subscription, it would be calling out to the exact same API.
redhale · 1d ago
Nothing is stopping you from using the API directly, if you prefer to donate more money to Anthropic.
I would not personally, as I can't spend thousands per month on an agentic tool. I hope they figure out limits that work. $100 / $200 is still a great deal. And the predictability means my company will pay for it.
senko · 1d ago
I am using Claude daily, exclusively via the API (in Zed, added my own token) and spend a few bucks a day tops.
Unlimited plans encourage wasting resources[0]. By actually paying for what you use, you can be a bit more economical and still get a lot of mileage out of it.
$100/$200 is still a great deal (as you said), but it does make sense for actually-$2000 users to get charged differently.
0: In my hometown, (some) people have unlimited central heating (in winter) for a fixed fee. On warmer days, people are known to open windows instead of turning off the heating. It's free, who cares...
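For anyone wondering what "exclusively via the API" looks like in practice, here's a minimal sketch using the official anthropic Python SDK (not what Zed does under the hood, just the most direct way to show it), assuming ANTHROPIC_API_KEY is set in the environment; the model name below is a placeholder, swap in whatever you actually use:

```python
import anthropic

# The client picks up ANTHROPIC_API_KEY from the environment by default.
client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model id; use the current one
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Summarize the tradeoffs of flat-rate vs metered pricing."}
    ],
)

print(response.content[0].text)

# Each response reports token usage, so you can track your own spend
# instead of guessing against an opaque subscription limit.
print(f"in: {response.usage.input_tokens} tokens, out: {response.usage.output_tokens} tokens")
```

Multiply those usage numbers by the published per-million-token prices and you know, to the cent, what each session cost.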
Aeolun · 1d ago
> LLM subscriptions need to go away, why can’t we just pay as we go? It’s the fairest way for everyone.
Because Claude Code is absolutely impossible to use without a subscription? I’m fine with being limited, but I’m not with having to pay more than $200/month
Anybody that feels they’re not getting enough out of their subscription is welcome to use API instead.
Aurornis · 1d ago
> Because Claude Code is absolutely impossible to use without a subscription?
Claude Code accepts an API key. You do not need a subscription
I mean, the way it works quickly leads to exorbitant costs. Subscription-based pricing with limits keeps everyone honest.
superasn · 1d ago
I'm not a fan of usage caps either, but that Reddit post [1] (“You deserve harsh limits”) does highlight a perspective worth considering.
When some users burn massive amounts of compute just to climb leaderboards or farm karma, it’s not hard to imagine why providers might respond with tighter limits—not because it's ideal, but because that kind of behavior makes platforms harder to sustain and less accessible for everyone else. On the other hand, a lot of genuine customers are canceling because they get API overload messages after paying $200.
I still think caps are frustrating and often too blunt, but posts like that make it easier to see where the pressure might be coming from.
so where did 'bad users' come from if users were simply doing what they were allowed to?
why is anthropic tweeting about 'naughty users that ruined it for everyone' ?
pastor_williams · 1d ago
What are the commons here? Why would a tragedy just appear there out of the blue?
shlant · 1d ago
are you going to answer what the bait and switch was?
crashprone · 22h ago
Bait: "For $200 a month you get to use Claude 20x more than what Pro users are entitled to. You don't know how much that is exactly, but neither do we. We may limit your usage with weekly and monthly limits. Sounds good?"
Switch: "We limited your usage weekly and monthly. You don't know how those limits were set; we do, but that's not information you need to know. And instead of hoarding your usage out of fear of hitting the dreaded limit, you kept at it again and again, using the product exactly the way it was intended, and now look what you've done."
bananapub · 1d ago
I don't understand your comment.
they launched Claude Max (and Pro) as being limited. it was limited before, and it's limited now, with a new limit to discourage 24/7 maxing of it.
in what way was there a bait and switch?
aeon_ai · 1d ago
The bait-and-switch critique is valid, but the real pragmatic issue is that AI companies are discovering their unit economics don't support flat-rate pricing for compute-intensive services. Try running AWS on a Netflix subscription model.
The transparency problem compounds this. The sustainable path forward likely involves either much more transparent/clear usage-based pricing or significantly higher flat rates that actually cover heavy usage.
Nerdx86 · 6h ago
There are real capacity limits here. Supply and demand are serious forces. When you have a top-tier product, you can do a lot, but at some point so many people come that it overloads capacity. Last week I had 3 Claude Code Opus windows happily burning over $100 worth of API credits simultaneously documenting an old code base. This was after a senior engineer had spent 2 weeks learning how to have new code interact with it. And they were timing out and/or hitting usage caps, while my regular Teams account just returned "too busy".
Claude 4 is good enough that people will pay whatever they ask as long as it's significantly less than the cost of doing it by hand. The loss leaders will need to fade away to manage the demand, now that there is significant value.
mmillin · 1d ago
Did Anthropic ever use the term unlimited? I understand the general frustration with the pattern, but it seems weird to put unlimited in quotes when it wasn’t the way Claude was sold.
seunosewa · 1d ago
Nope.
MOARDONGZPLZ · 1d ago
I would absolutely do this if I ran anthropic. Of course unlimited implicitly means “unlimited without abuse.” There are always these “power users” who run it 25/8 and use all the resources, or sell access to others. To Anthropic: of course the “power users” are also going to be the top 5% of extremely online folks who are going to angrily pen long form tweet storms and this whole thing will die down soon for the other 95% of us. Weather the storm.
yjftsjthsd-h · 1d ago
> Of course unlimited implicitly means “unlimited without abuse.”
So, not unlimited? Like, if the abuse is separate from amount of use (like reselling; it can be against ToS to resell it even in tiny amounts) then sure, but if you're claiming "excessive" use is "abuse", then it is by any reasonable definition not unlimited.
mwigdahl · 1d ago
It was never sold as unlimited. Max plans have always had both rate limits and 5-hour usage limits.
MOARDONGZPLZ · 1d ago
> So, not unlimited?
Correct, not “unlimited” as in the dictionary definition of unlimited. Unlimited as in the plain meaning of unlimited as it is commonly used in this subject matter area. i.e., use it reasonably or hit the bricks, pal.
carlosjobim · 1d ago
This is the reason why fast food restaurants don't sell unlimited refills for their soda in Europe. People would demand the right to go and get ten sodas there every day of their life, since they had an "unlimited" refill.
ajsnigrutin · 1d ago
So at what point does "normal usage" stop and "abuse" start? How many "queries" per day is that?
If there is a clear limit to that (and it seems there is now), then stop saying "unlimited" and start selling "X queries per day". You can even let users pay for additional queries if needed.
(yes i know queries is not a proper term to use here, but the principle stands)
croisillon · 1d ago
Funny, because I read several sentences in my ChatGPT inner voice:
"The real damage: You're not frustrating 5% of users—you're breaking trust with the exact people who drive growth and adoption."
"It's not rocket science. It’s our way of attracting users. Not bait and switch, but credits to try."
svachalek · 1d ago
This wording is becoming like a record scratch: whatever I’m reading, I just stop and flinch when I hit one of these. It just needs to be followed up with some vapid praise or a weird joke that a robot would find funny.
baggachipz · 1d ago
Another front page post which is a thinly-veiled advertisement for a competitor's service. The advertising model is as such: Point out user frustration from another product, harp on it, and then casually mention that your product is better for reasons.
thevueguy · 1d ago
Lol, this article was written with AI. The clues are everywhere.
> The real damage: You're not frustrating 5% of users—you're breaking trust with the exact people who drive growth and adoption.
> When developers get "rate limit exceeded" while debugging at 2 AM, they're not thinking about your infrastructure costs—they're shopping for alternatives.
Notice a pattern here?
Kim_Bruning · 1d ago
One way to look at it: instead of negotiating up front, they fired their top 5% customers.
raincole · 1d ago
There is a leaderboard encouraging people to waste as many tokens as possible [0]. I'm quite sure Anthropic is very eager to get rid of these "customers."
I’m pretty sure the top entries there are fabricated. You can’t burn that much with just a regular max subscription.
raincole · 1d ago
Obviously it's easy to cheat (as the token usage is only known by the user and Claude and this leaderboard is a third-party). But the existence of this kind of thing makes me believe people who use a lot of tokens != loyal customers of Claude.
mwigdahl · 1d ago
To quote the movie Zoolander: "I feel like I'm taking crazy pills!"
Anthropic never sold Max plans as unlimited. There are two tiers, explicitly labeled "5x" and "20x", both referring to the increased usage over what you get with Pro. Did all the people complaining that Anthropic reneged on their "promise" of unlimited usage not read anything about what they were signing up to pay $100 or $200/month for? Or are they not even customers?
redeyedtreefrog · 1d ago
The huge positive publicity boost that Claude Code gained during its unlimited phase was surely easily worth the slight negative publicity it is getting now. This post is just marketing copy by a small competitor that can't afford to do the same thing.
Aurornis · 1d ago
> gained during its unlimited phase
It was never unlimited.
They never advertised unlimited usage. The Max plan clearly said it had higher limits.
This fabrication of a backstory is so weird. Why do so many people believe this?
redeyedtreefrog · 1d ago
Fair enough, substitute "unlimited" for "extremely high limits that most people very rarely hit"
poszlem · 1d ago
The sad thing is that you are probably right.
raffael_de · 1d ago
Unlimited, as well as lifetime, pricing schemes more often than not imply an unhealthy and unsustainable shift in mentality towards a sell-out. Companies usually go bankrupt, exit scam, or are sold soon after.
picafrost · 1d ago
Claiming that we shouldn't accept bait and switch as normal is denying the fundamental reality of the "hypergrowth" SaaS business model. The beatings will _always_ continue until the bottom line improves. Price transparency for the average retail user is irrelevant to B2B contracts, which is where every SaaS churns its butter.
phito · 1d ago
We need better local LLMs to stop relying on these predatory companies. Anthropic is not even 5 years old (and Claude 2yo) and already showing their teeth.
And to be clear, the users abusing the "unlimited" rates they were offering to do absolutely nothing productive (see vibe-coding subreddits) are no better.
redhale · 1d ago
This is just an ad for Kilo Code with a very dumb take.
Claude Max, to my knowledge, was never marketed as "unlimited". Claude Max gives you WAY more tokens than $100/$200 would buy. When you get rate limited, you have the option to just use the API. Overall, you will have gotten more value than just using the API alone.
And you always had, and continue to have, the option of just using the API directly. Go nuts.
The author sounds like a petulant child. It's embarrassing, honestly.
mwigdahl · 1d ago
I was going to write much the same thing but just upvoted you instead. Max plans are a super great deal, even with these new limits.
Tactician_mark · 1d ago
Reminds me of the Stop Killing Games movement. If you give people an end date, they can plan around it.
nertirs1 · 1d ago
Agreed, businesses should just state the real amount of service they are willing to provide. That allows customers to compare different providers more effectively, and it lets providers avoid the situation where they have to secretly throttle certain users.
But it is so hard to explain to product people that there is a limit to how much certain services can scale and still be profitably supported.
ygritte · 1d ago
Does "unlimited" ever mean anything else than "until we change our minds"? Who would expect to get services asymptotically for free forever?
bigstrat2003 · 1d ago
You're right, and one can go even further: every single service is "until we change our minds", unless you have a contract guaranteeing certain terms for some time period. That's not nefarious, that's just how business works. In my experience when people complain about the terms of business changing, what truly bothers them is the specifics of the change, not the fact that it's changing. In this case I don't use any AI tools (because they suck, frankly), so I don't have enough background to judge if the change is reasonable or not. But I don't think one should really hold it against a business that they change the service offering over time.
cadamsdotcom · 21h ago
In Australia they outlawed advertising “unlimited” broadband, because it was a deceptive term. Providers actually meant “slower than a modem after a certain amount of download”.
Everyone was better off without the deception. Now we are in the early days of AI. Providers should be honest but won’t until forced to.
Because just think about it. Unlimited is untenable. Another example, in the early days of broadband in Australia a friend’s parents were visited by a Telstra manager because he “downloaded more than his entire suburb”. A manager!
Really you can’t blame the providers; some users will ruin it for everyone. I am not saying that is anyone specific. But none of this should surprise us. We’ve been here before. Just look back at how other markets developed & you will see patterns that tell you what’s next.
IlikeKitties · 1d ago
If a company promises "unlimited" I usually don't believe them, with very, very few exceptions such as unlimited data on a home internet connection.
ConfusedDog · 1d ago
It was not true years ago when I was using Cox. It said unlimited, but in fact had an upload/download limit. I think download was 500GB/month; upload was far less. I'm using Fios now, and apparently it's truly unlimited unless my excessive usage is "unfair" to other users. I guess that kind of language is left to be interpreted by Verizon.
dec0dedab0de · 1d ago
not sure if this is a joke, but most home internet plans get throttled after some secret limit.
IlikeKitties · 1d ago
Not here in Germany, at least as far as I know, though I'm mostly limited by bandwidth anyway. In practical terms, maybe if you really use excessive amounts of data on a residential line, but I've never heard of anyone hitting those limits and I'm seeding torrents 24/7.
IncreasePosts · 1d ago
That still sounds unlimited, just like there's a natural limit when you're unthrottled.
echoangle · 1d ago
That's not unlimited, or every mobile data plan in the EU would be unlimited too. You only get throttled very aggressively (64 kBit/s mostly) after reaching the quota, but you still can't call it unlimited.
bravesoul2 · 1d ago
Which is limited by the top speed
evaXhill · 1d ago
There is no free lunch when VC money is involved, and the new weekly rate limits in Claude Code are just the latest example of that. I knew there was going to be a rug pull, and it definitely changes the psychology of using it when you know you could be locked out for multiple days. Also, saying that "Less than 5% of users are affected" = anyone actually USING this tool for real work...
oellegaard · 1d ago
There is no such thing as unlimited. Everything is subject to fair use and it seems they are now communicating those limits clearly.
celticninja · 1d ago
they are communicating the limit exists clearly, they are not `communicating those limits clearly`
mark_l_watson · 1d ago
I get that many people prefer Claude and Claude Code to Gemini 2.5 Pro and gemini-cli, but things seem more transparent with Google. For short sessions I run gemini-cli in free mode, and when I suspect I might hit the limit that switches me from Gemini 2.5 Pro to Gemini 2.5 Flash mid-session, I set my Gemini key, run with that, and get the Pro model for as long as I need it in a session.
Gemini did go from a huge free tier to 100 free uses a day, but I expected that.
EDIT: let me clarify: I just retired after over 50 very happy years working as a software developer and researcher. My number one priority was always self-improvement: learning new things and new skills that incidentally I could sometimes use to make money for whoever was paying me. AI is awesome for learning and general intellectual pursuits, and pairs nicely with reading quality books, listening to lectures on YouTube, etc.
haritha-j · 1d ago
To be honest, this was inevitable; the pricing is unsustainable. I see it as similar to those days when Uber Eats used to offer all of those coupons for free food. I still fondly recall the good old days during my undergrad when Uber Eats came to my city and we feasted like kings, because it cost next to nothing to buy a new mobile number. But my point is, we always knew the music was going to stop, we always knew it was unsustainable, we just enjoyed the VC-funded burritos while they lasted. When I read "unlimited", I just automatically assume there's a catch (ISPs in my home country have been offering "unlimited" internet with a whole plethora of catches for decades now).
rrrx3 · 21h ago
I use my $20/mo plan with Claude Code pretty happily, regularly hitting the limits through the day at a good pace, with nice cool-downs in between while I wait.
I promise I’m not being snarky here - I don’t understand how people are burning through their $200/mo plan usage so quickly. Are they spamming prompts? Not using planning mode? I’ve seen a few folks running multiple instances at once… is that more common than I think?
nottorp · 1d ago
What I found interesting from the comments in this thread is not that their services were unlimited or not, and whether it's fair that they added more limits.
Just below me as I type there's a comment saying they're refusing to cancel a subscription (may not be below me any more when I finish typing).
Somewhere lower there's a comment saying they do not show the full price when you subscribe, but add taxes on top of it and leave you to notice the surprise on your credit card statement.
Is there an ethical "AI" service anywhere?
voxleone · 1d ago
Come to think of it, there could be real benefits — possibly unexpectedly strong ones — in being bold enough to just say: “Unlimited… until we decide otherwise.”
Differentiation through honesty: In a market full of fluff, directness stands out. Customers might respect a brand more for telling the truth plainly, even if the truth isn’t ideal.
The risk: It could scare off some customers who don’t read the fine print anyway. But that may not be a loss—it might actually filter in the right kind of customer, the one who wants to know what they’re really getting.
nhinck2 · 1d ago
Any business that changes the terms of a subscription service unilaterally should be forced to cancel all current subscriptions and have the users go through the sign up process again.
burnt-resistor · 14h ago
It's always bait-and-switch fraud by a marketing con. Get users hooked before shaking them down. This is the MBA way.
jstummbillig · 1d ago
This is such a lame take. What exactly is the proposed solution, practically, in an industry where nothing is predictable and everything moves super quickly? Just be conservative and don't try things that can escalate?
Let people cook and give them some time to find out how to do this. Voice discontent, but don't be an asshole.
Ekaros · 1d ago
I wonder what the ratio is of people who feel like they have to max out any offering they get? That is, if you promise them say 1TB of cloud storage, they will fill it with anything, just to feel like they got their money's worth.
And how does this compare to case with "Unlimited". Overall will the total used be higher or lower?
ozim · 1d ago
Also stop promising you are "open source and free" - "until you get market fit or VC money".
rob_c · 1d ago
In fairness, for most projects that do that, it tends to end badly for them. Just look at Elastic for a very high-profile example.
cr125rider · 1d ago
Man that’s a name I haven’t heard in a while. I remember the ELK stack
neurostimulant · 1d ago
> When developers get "rate limit exceeded" while debugging at 2 AM, they're not thinking about your infrastructure costs—they're shopping for alternatives.
What's the alternative when every other vendor (eventually) has the same limits?
Bilal_io · 1d ago
I am not a heavy user of Claude in any way, sometimes I go days without using it. And I've hit the limit a week ago. Now with the limit changing to per week, I am 100% canceling my subscription. I'll likely spend a max of $5 using an API key.
juujian · 1d ago
What really gets me is that my use of ai is really uneven. That "5% of users" is me one month of the year. In other words, it's unlimited use, unless I really need it. Not a fair business model for sure.
aitchnyu · 1d ago
Tangential: who would be severely slowed down if they didn't have models costing $15/million tokens (Claude Opus) or $3/MT (Claude Sonnet) and had to switch to V3 ($0.3/MT) or cheaper ones?
gloosx · 1d ago
These are the very greedy kind of companies which will try to squeeze every penny out of their users eventually. In this LLM-made article, they seem to be... surprised about this?
imglorp · 1d ago
Pay by usage (tokens in, tokens out?) is the most transparent.
In subscription plans, users who aren't using 100% of their subscription subsidize other users, which is opaque and not really fair.
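To make that concrete, here's a rough sketch of what metered billing looks like, using the approximate per-million-token prices quoted elsewhere in this thread; treat them as illustrative only, since real pricing distinguishes input and output tokens:

```python
# Back-of-the-envelope metered cost, using rough $/million-token figures
# mentioned upthread. Illustrative, not current list prices.
PRICE_PER_MILLION = {
    "opus":   15.00,
    "sonnet":  3.00,
    "v3":      0.30,
}

def daily_cost(tokens_per_day: int, model: str) -> float:
    return tokens_per_day / 1_000_000 * PRICE_PER_MILLION[model]

# e.g. a heavy day of agentic coding pushing ~5M tokens through the model
for model in PRICE_PER_MILLION:
    per_day = daily_cost(5_000_000, model)
    print(f"{model:>6}: ${per_day:.2f}/day, ${per_day * 30:.0f}/month")
```

At that kind of volume the gap between a $200 subscription and true metered pricing is obvious, which is exactly the cross-subsidy being described.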
theturtle · 1d ago
It's always limited, if by nothing more than physics and time. But yeah, the UNLIMITED* ads need to go eat a grenade.
_________
*here's where we tell you the limits we said we didn't have
QuiCasseRien · 1d ago
Unlimited is a marketing word, but once it becomes a legal matter, it falls apart.
Some say they just have to define a huge limit and that's it.
Limits are sometimes hard to define:
- they must be so huge that a (human) user finally perceives them as unlimited, otherwise he will compare with competitors
- but not so huge, because the 0.1% of users will try to reach them
A fairer wording could be one that categorizes the type of use:
- human: a human has physical limits (e.g. words typed on a keyboard per unit of time).
- bot: from 1 Arduino to heavyweight hardcore clustering, virtually no limits.
tonyhart7 · 1d ago
Why are people in the comments talking like it's unlimited? It was never promised as unlimited in the first place.
They just sell you a 20x higher usage limit; nothing says unlimited.
bravesoul2 · 1d ago
Only so much energy in the observable universe, after all.
techsystems · 1d ago
Unlimited plans are generally grandfathered. I don't think I've come across any company that didn't honour and grandfather the unlimited plan.
svnt · 1d ago
Shallow marketing piece.
“Claude is promising unlimited and it isn’t sustainable.”
“For a limited time only pay us $20 and get $80 worth of credits.”
Look at what these people on HN said!
Come on.
Retr0id · 1d ago
Alternatively, stop locking yourself into a single vendor that can arbitrarily change their pricing or entire product strategy at any moment.
shock · 1d ago
Can I create an account with email on Kilo Code or only "Sign in with Google"? I couldn't see such an option.
boleary-gl · 1d ago
Would you use GitHub auth?
shock · 1d ago
Yes. I would still prefer the old email/pass though, so I don't depend on a 3rd party being online, etc.
quaristice · 1d ago
"Do these sound like the actions of a man who had all he could eat?"
- Lionel Hutz
dcre · 1d ago
This is an ad. I’m surprised people are responding like it’s someone’s earnest opinion.
sixhobbits · 1d ago
I think it's totally OK to experiment with pricing and rapidly iterate. I dropped my ChatGPT sub for Anthropic because it was great value and Claude Code was a 'game changer' product.
I hold them no ill will for rapidly changing pricing models, raising pricing, doing whatever they need to do in what must be a crazy time of finding insane PMF in such a short time
BUT the communication is basically inexcusable IMO. I don't know what I'm paying for, I don't know how much I get, their pricing and product pages have completely different information, they completely hide the fact that Opus use is restricted to the Max plan, they don't tell you how much Opus use you get, and their help pages and pricing pages look like they were written by an intern and pushed directly to prod. I find out about changes on Twitter/HN before I hear about them from Anthropic.
I love the Claude Code product, but Anthropic the company is definitely nudging me to go back to OpenAI.
This is also why competition is great though - if one company had a monopoly the pricing and UX would be 20x worse.
CamelCaseName · 1d ago
The problem is resellers that buy a $200 license and then sell 1,000 $10 licenses.
BrouteMinou · 1d ago
Well, obviously no!
How am I supposed to bait people into my product to screw them up then?
Find a better alternative.
nathancspencer · 1d ago
Still unlimited enough to manage to write your blog post with it
Bluestein · 1d ago
It's like love: "I will love you forever".-
zwnow · 1d ago
What people don't realize is that love isn't just having butterflies in your stomach all the time. Love changes over time, and saying I will love you forever is reasonable if sincere. I love all of my friends and I will love them forever. If I say that to my SO I mean it, even if we separate. People just want the initial rush of love until the chemicals fade.
johnisgood · 1d ago
That is true. Love, to me, is commitment. We no longer feel that rush we felt in the beginning. Things may be calm, which people mistake for boredom, which they mistake for doing something wrong or something being wrong. People tend to move on after the "honeymoon phase", and keep repeating it ad infinitum. I have exes. I do not hate them; we never separated on bad terms. I do not love them romantically, however. I love my current girlfriend and I am committed to her. I have no intention of giving that up. That said, one must not confuse love with endurance. You cannot just keep on forgiving, always hoping for the best next time. You need to get out (ideally peacefully) after a while. I learned my lesson. In any case, I will continue trying to work things out with my current girlfriend, and we are doing fine, even though we have a lot of baggage and things are difficult sometimes. Of course there were some deal-breakers that I did not care about with one of my exes; I kept on forgiving even after she had cheated on me. It is not something I should have done, yet I did it, because I confused love with endurance.
Thoughts? If you want me to, I can elaborate on what I really mean, but I hope it was understandable enough.
zwnow · 1d ago
Trust is really important to me and I couldn't forgive a cheater, but that does not mean I won't love the person they are. Sure, I would be hurt, which is human, and I would definitely cut them out of my life, but there will forever be a part of them that I love. Otherwise I wouldn't have been this hurt.
johnisgood · 1d ago
I agree. I could never forgive a cheater either, and I was stupid to mistake love for forgiving endlessly.
Why didn't they just limit the 5% doing useless 24/7 tasks, burning tokens to get on a leaderboard?
Why do all the users need to suffer?
Aurornis · 1d ago
The new limits should only affect that 5%
I don’t know why you think everyone is going to suffer.
tonyhart7 · 1d ago
They are affecting everyone. If this only limited the 5% of abusers, then the whole subreddit wouldn't have been up in arms yesterday.
nashashmi · 1d ago
I think the difference this time is that there is no lock in with a contract like AT&T customers had when they bought iPhones.
Unlimited works better for startups because they have zero idea about the load challenges that will come in the future. And they don't have much idea how well their product will be received by the market.
Anthropic got the experience and decided they needed to maximize on reasonableness over customer trust. And they are a startup so we all get this.
OTOH there is no such thing as unlimited. Atoms in the universe are finite. Your use is finite. Your time is finite. Your abuse is limited and finite. You are a sucker for believing in the unlimited myth, just like you think others are suckers for believing in divine intervention, or conspiracy theorists are suckers for believing in unlimited power.
mystifyingpoi · 1d ago
> OTOH there is no such thing as unlimited
Philosophical meandering and blaming the customer for not understanding company's shady marketing is not something I'd consider to be cool.
ozgung · 1d ago
I believe this goes deeper than a simple bait-and-switch. It’s straight out of Silicon Valley’s venture-backed startup playbook: burn money until world dominance. That’s one of the reasons non-US or non-SV startups fail to compete. Not because their products or services are inferior, but because they can’t replicate this aggressive pricing strategy. They don’t have deep enough pockets to survive until reaching billions of users.
For Claude Code and similar services, we’re still in the very early stages of the market. We’re using AI almost for free right now. It’s clear this isn’t sustainable. The problem is that they couldn’t even sustain it at this earliest stage.
nunez · 1d ago
How nobody saw this coming perplexes me.
blibble · 1d ago
you think it's bad now?
wait until their investors get fed up with pouring money down the drain and demand they make a profit from the median user
that model training and capex to build the giant DCs and fill them with absurdly priced nvidia chips isn't free
as an end user: you will be the one paying for it
Dolores12 · 1d ago
You call them "power users", I call them abusers. Nice move, Anthropic, good for everyone.
dboreham · 1d ago
When uttered by a sales or marketing person, "unlimited" actually means "limited" in a regular person's vocabulary.
bayindirh · 1d ago
Companies do what companies do, despite promising not to this time. News at 11.
On a more serious note, I'm sure most people can't fathom or even think about the resources they are consuming when using AI tools. These things don't just use energy, they consume it like a black hole sucks in light.
In some cases, your queries can consume your home's daily energy needs in an hour or so.
scblock · 1d ago
While I agree with the base statement that makes up the title, I cannot get behind the rest of this idea that the heaviest users somehow "matter most". They cost the most, certainly. But how do they matter the most?
But hey, this is just a sales pitch from one company I wouldn't trust, made by taking a dump on another company I wouldn't trust.
grim_io · 1d ago
No need to gaslight us into this "unlimited" nonsense claim.
It was not communicated as unlimited.
When all is said and done, the Claude subscription is still an incredible value compared to the alternatives.
It changes nothing for my use case.
> Here's how the AI pricing bait-and-switch works:
> You can tell how it's intentional with both OpenAI and Anthropic by how they're intentionally made opaque. I cant see a nice little bar with how much I've used versus have left on the given rate limits
Well, that's a scam. A legal one.
bananapub · 1d ago
the reaction to all of this, and this article in particular, is really very very stupid.
> The new Max plan delivers exactly that. With up to 20x higher usage limits, you can maintain momentum on your most demanding projects with little disruption.
obviously everyone wants everything for free, or cheap, and no one wants prices to change in a way that might not benefit them, but the endless whinging from people about how unfair it is that anthropic is limiting access to their products sold as coming with limited access is really extremely tedious even by HN standards.
and as pointed out dozens of times in these threads, if your actual worry is running out of usage in a week or month, Anthropic has you covered - you can just pay per token by giving Claude Code an API key. doing that 24/7 will cost ~100x what Max does though, I wonder if that's a useful bit of info about the situation or not?
sneak · 1d ago
It’s not a bait and switch unless you count the payment made for this specific month.
Otherwise it’s just a change in the offering. You can unsubscribe freely.
This sort of entitlement puts me off. Prices for things change all of the time.
esher · 1d ago
"forever free" is another one.
SpaceNoodled · 23h ago
The parallels between AI slop tools and drugs are ever growing.
Bluestein · 1d ago
"You are absolutely right! But first, a word from our sponsors ..."
apwell23 · 1d ago
anthropic is blaming its users "we wouldn't have to do this if you guys weren't so naughty. No TV for you. "
thimabi · 1d ago
That’s just typical for any company that promises unlimited usage and then can’t deliver. The sad truth is that the “unlimited” wording attracts tons of users who will never come close to being a power user making a dent in their profits.
mwigdahl · 1d ago
That would be a great argument if they ever promised unlimited usage. They didn't.
blitzar · 1d ago
I ate all my vegetables at dinner; you promised TV if I ate all the broccoli.
apwell23 · 1d ago
You ate too much of the dessert that we left out. You were supposed to know not to eat too much.
blitzar · 20h ago
It was a cupcake, was I meant to lick it and put the rest back?
Bluestein · 1d ago
"You will chew on your rock and you will like it!"
Likewise: a faulty, unproven, hallucinating, error-prone service, however good, was a good value at approx 25 USD/month in an "absolutely all you can eat", wholesale regime ...
... now? Reputational risk aside, they force their users to appraise their offering in terms of actual value offered, in the market.-
That's a good thing, right?
When a provider gets memory working well, I expect them to use it as a huge moat - i.e. they won't let you migrate the memories, because rather than being human-readable words they'll be unintelligible vectors.
I imagine they'll do the same via API so that the network has a memory of all previous requests for the same user.
Hell, “just open a new chat and start over” is an important tool in the toolbox when using these models. I can’t imagine a more frustrating experience than opening a new chat to try something from scratch only for it to reply based on the previous prompt that I messed up.
You pay for Gemini by the token and you get the full firehose. It costs money, but less than Opus and it smokes that.
It just works. Gemini 2.5 Pro is the king of AI coding and literally everything else has to catch up.
Trust me, I can't wait until there's a model that can run locally that's as good...but for now there isn't.
Always just look at the token cost and get used to the token economics. Go into it paying. You'll get better results. I think people who thought they were somehow cheating and getting away with something similar (or better) for $20/mo are in for a big surprise.
I don't know if I would say they should have known better of course. I think Anthropic and Cursor and Windsurf were hiding it a bit. Now it's all coming out into the open and I guess you know the saying, if it's too good to be true...
As if Google would say that yes, email is $5/mo, but there's actually a limit on the number of emails per day, and also on the number of characters in each email. It just feels so illegal to nerf a product that much.
Same with AI companies changing routing and making models dumber from time to time.
I'm not sure what harm you think you're suffering from, and what a proper remedy might be, if you think it's illegal. I don't know if I would go that far, as there are all kinds of words most terms of service use to somehow make it so that you have already acknowledged and agreed to whatever they decide to do. So a lawyer will probably be helpful there as well.
Parent clearly stated they only saw "€170+VAT" and not €206.55, so of course they expected to see €206.55 before the purchase went through. Not sure what anyone else would expect?
Maybe they added a card fee in at the end, but if they didn’t make that abundantly clear, they’ve broken a law in most countries which use the Euro.
Update: below the fold at the bottom of the Billing page is the cancel section and cancel button.
Update 2: just clicked cancel and was offered a promo of 20% off for three months...
Update 3: FYI, I logged in to my Claude account via computer (not iOS or Android).
As long as you don't cancel, you do owe them money. But if they make cancelling intentionally hard, one would likely have a good case in court to still not pay, if one wanted to go to court over this.
At the rate the Chinese are going it won't be long before I can shake the dust off my sandals of this bullshit for good.
I still revert to Gemini 2.5 Pro here and there, and Claude for specific demanding tasks, but the bulk of my tokens go through open-weight models at the moment.
Ugh, anyone who says that and really believes it can no longer see common sense through the hype goggles.
It's just stupid and completely 100% wrong, like saying all musicians will use autotune in the future because it makes the music better.
It's the same as betting that there will be no new inventions, no new art, no works of genius unless the creator is taking vitamin C pills.
It's one of the most un-serious claims I can imagine making. It automatically marks the speaker as a clown divorced from basic facts about human ability.
And AI already excels at building those sorts of things faster and with cleaner code. I’ve never once seen a model generate code that’s as ugly and unreadable as a lot of the low quality code I’ve seen in my career (especially from Salesforce “devs” for example)
And even the ones that do the more creative problem solving can benefit from AI agents helping with research, documentation, data migration scripts, etc.
Yet the blanket statement is that I will fail and be replaced, and in fact that people like me don't exist!
So heck yeah I'll come clap back on that.
There is absolutely something real here, whether you choose to believe it or not. I'd recommend taking a good faith and open minded look at the last few months of developments. See where it can benefit you (and where it still falls way short).
So even if you may have arrived at your conclusion years ago, I assure you that things continue to improve by the week. You will be pleasantly surprised. This is not all or nothing, nor does it have to be.
Code is like law. The goal isn't to have a lot of it. Come to me when by "more productive" you actually mean that the person using the LLM deleted more lines of code than anyone else around them while preserving the stability and power of the system
I use AI pretty extensively and encourage my folks to use it as well but I've yet to see this come directly from an LLM. With human effort after the fact, sure, but LLMs tend to write inscrutable messes when left to their own devices.
So are musicians. We think of them as doing creative stuff but a vast majority is mundane.
(though who knows, maybe at some time in the future there will be significant numbers of people programming as a hobby and wanting to be coached by a human...)
*: I'm aware of cases like the recent ffmpeg assembly work that gave a big performance boost. When talking about industrial trend lines, I'm OK with admitting 0.001% exceptions.
(Apologies if it comes across as snarky or pat, but I honestly think the comparison is reasonable.)
Are you aware compilers are deterministic most of the time?
If a compiler had a 10% chance of erasing your code instead of generating an executable you'd see more people still using assembly.
The basic nature of my job is to maintain the tallest tower of complexity I can without it falling over, so I need to take complexity and find ways to confine it to places where I have some way of knowing that it can't hurt me. LLMs just don't do that. A leaky abstraction is just a layer of indirection, while a true abstraction (like a properly implemented high-level language) is among the most valuable things in CS. Programming is theory-building!
Where would you put the peak? Fortran was invented in the 50’s. The total population of programmers was tiny back then…
But... what else? These things are rare. It’s not like there’s a new thing that comes along every few years and we all have to jump on or be left behind, and LLMs are the latest. There’s definitely a new thing that comes along every few years and people say we have to jump on or be left behind, but it almost never bears out. Many of those ended up being useful, but not essential.
I see no indication that LLMs or associated tooling are going to be like compilers and version control where you pretty much can’t find anyone making a living in the field without them. I can see them being like IDEs or debuggers or linters where they can be handy but plenty of people do fine without them.
Even if things are going the direction you say, though, Kilo is still just a fork of VSCode. Lipstick on a pig, perhaps. I would bet that I know the strengths and weaknesses of your architecture quite a lot better than anyone on the Kilo team because the price of admission for you is not questioning any of VSCode's decisions, while I consider all of them worthy of questioning and have done so at great length in the process of building something from scratch that your team bypassed.
I believe that at some point, AI will get good enough that most companies will eventually stop hiring someone that doesn’t utilize AI. Because most companies are just making crud (pun intended). It’ll be like specialized programming languages. Some will exist, and they may get paid a lot more, but most people won’t fall into that category. As much as we like to puff ourselves up, our profession isn’t really that hard. There are a relative handful of people doing some really cool, novel things. Some larger number doing some cool things that aren’t really novel, just done very nicely. And the majority of programmers are the rest of us. We are not special.
What I don’t know is the timing. I don’t expect it to be within 5 years (though I think it will _start_ in that time), but I do expect it within my career.
Just as the developer who refused to adopt version control, IDEs, or Stack Overflow eventually became unemployable, those who reject tools that fundamentally expand their problem-solving capacity will find themselves unable to compete with those who can architect solutions across larger possibility spaces on smaller teams.
Will it be used for absolutely every problem? No - There are clearly places where humans are needed.
But rejecting the enormous impact this will have on the workforce is trading hype goggles for a bucket of sand.
This passage forces me to conclude that this comment is sarcasm. Neither IDEs nor the use of Stack Overflow is anywhere near a requirement for being a professional programmer. Surely you realize there are people out there who are happily employed while still using stock Vim or Emacs? Surely you realize there are people out there who solve problems simply by reading the docs and thinking deeply rather than asking SO?
The usage of LLM assistance will not become a requirement for employment, at least not for talented programmers. A company gating on the use of LLMs would be preposterously self-defeating.
I don't think you should use LLMs for something you couldn't master without them.
> will find themselves unable to compete
I'd wait a bit more before concluding so affirmatively. The AI bubble would very much like us to believe this, but we don't yet know very well the long term effects of using LLMs on code, both for the project and for the developer, and we don't even know how available and in which conditions the LLMs will be in a few months as evidenced by this HN post. That's not a very solid basis to build on.
To your second point -- With as much capital as is going into data center buildout, the increasing availability of local coding LLMs that near the performance of today's closed models, and the continued innovation on both open/closed models, you're going to hang your hat on the possibility that LLMs are going to be only available in a 'limited or degraded' state?
I think we simply don't have similar mental models for predicting the future.
We don't really know yet, that's my point. There are contradictory studies on the topic. See for instance [1] that sees productivity decrease when AI is used. Other studies show the opposite. We are also seeing the first wave of blog posts from developers abandoning the LLMs.
What's more, most people are not masters. This is critically important. If only masters see a productivity increase, others should not use it... and will still get employed because the masters won't fill in all the positions. In this hypothetical world, masters not using LLMs also have a place by construction.
> With as much capital as is going into
Yes, we are in a bubble. And some are predicting it will burst.
> the continued innovation
That's what I'm not seeing. We are seeing small but very costly improvements on a paradigm that I consider fundamentally flawed for the tasks we are using it for. LLMs still cannot reason, and that's IMHO a major limitation.
> you're going to hang your hat on the possibility that LLMs are going to be only available in a 'limited or degraded' state?
I didn't say I was going to, but since you are asking: oh yes, I'm not putting my eggs in a basket that could abruptly disappear or become very costly.
I simply don't see how this thing is going to be cost efficient. The major SaaS LLM providers can't seem to manage profitability, and maybe at some point the investors will get bored and stop sending billions of dollars towards them? I'll reconsider when and if LLMs become economically viable.
But that's not my strongest reason to avoid the LLMs anyway:
- I don't want to increase my reliance on SaaS (or very costly hardware)
- I have not yet caved in to participating in this environmental disaster, or in this work-pillaging phenomenon (well, on that last part I guess I don't really have a choice; I see the dumb AI bots hammering my forgejo instance).
[1] https://www.sciencedirect.com/science/article/pii/S016649722...
AI presently has a far lower footprint on the globe than the meat industry -- The US Beef industry alone far outpaces the impact of AI.
As far as "work pillaging" - There is cognitive dissonance in supporting the freedom of information/cultural progress and simultaneously desiring to restrict a transformative use (as it has been deemed by multiple US judges) of that information.
We don't see eye to eye, and I think that's ok. We can re-evaluate our predictions in a year!
Beef has the benefit of seeing an end, though. Populations are stabilizing, and people are only ever going to eat so much. As methane has a 12 year life, in a stable environment the methane emissions today simply replace the emissions from 12 years ago. The carbon lifecycle of animals is neutral, so that is immaterial. It is also easy to fix if we really have to go to extremes: Cull all the cattle and in 12 years it is all gone!
Whereas AI, even once stabilized, theoretically has no end to its emissions. Emissions that are essentially permanent, so even if you shut down all AI when you have to take extreme measures, the effects will remain "forever". There is always hope that we'll use technology to avoid that fate, but you know how that usually goes...
That's quite uncharitable.
I don't need to use it to make these points. While I might show a lack of perspective, I don't need to do X to reasonably think X can be bad. You can replace X with all sorts of horrible things, I'll let the creativity of the readers fill in the gap.
> AI presently has a far lower footprint on the globe than [X]
We see the same kind of arguments for planes, cars, anything with a big impact really. It still has a huge (and growing) environmental impact, and the question is do the advantages outweigh the drawbacks?
For instance, if a video call tool allowed you to have a meeting without taking a plane, the video call tool had a positive impact. But then there's also the ripple effect: if without the tool, the meeting hadn't happened at all, the positive impact is less clear. And/or if the meeting was about burning huge amounts of fuel, the positive impact is even less clear, just like LLMs might just allow us to produce attention-seeking, energy-greedy shitty software at a faster speed (if they indeed work well in the long run).
And while I can see how things like ML can help (predicting weather, etc), I'm more skeptical about LLMs.
And I'm all for stopping the meat disaster as well.
> We don't see eye to eye, and I think that's ok. We can re-evaluate our predictions in a year!
Yep :-)
I don't know how else one gets information and forms an opinion on the technology except for media consumption or hands-on experience. Note that I count "social media" as media.
My proposition is that without hands-on experience, your information is limited to media narratives, and it seems like the "AI is net bad" narrative is the source of those perspectives.
Skepticism is warranted, and there are a million ways this technology could be built for terrible ends.
But I'm of the opinion that: A) the technology is not hype, and is getting better; B) it can, and will, be built (time horizon debatable); C) for it to result in good outcomes for humanity, it requires good people to help shape it in its formative years.
If anything, more people like you need to be engaging it to have grounded perspectives on what it could become.
Okay, I think I got your intent better, thanks for clarifying.
You can add discussions with other people outside of software media, or opinion pieces outside of the media (I would not include personal blogs in "media", for instance, but would not be bothered if someone did), including people who tried it and people who didn't. Media outlets are also not uniform in their views.
But I hear you, grounded perspectives would be a positive.
> That for it to result in good outcomes for humanity, it requires good people to help shape it in its formative years.
I hear you as well, makes perfect sense.
OTOH, it's difficult to engage with something that feels fundamentally wrong or like a dead end, and that's what LLMs feel like to me. It would also be frightening: the risk that, as a good person, you help shape a monster.
The only way out I can see is inventing the thing that will make LLMs irrelevant, but also don't have their fatal flaws. That's quite the undertaking though.
We'd not be competing on an equal footing: LLM providers have been doing things I would never have dared even consider: ingesting a considerable amount of source material while completely disregarding its licenses, hammering everyone's servers, spending a crazy amount of energy, sourcing a crazy amount of (very closed) hardware, burning an insane amount of money even on paid plans. It feels very brutal.
Can an LLM be built avoiding any of this stuff? Because otherwise, I'm simply not interested.
(of course, the discussion has shifted quite a bit! The initial question was if a dev not using the LLMs would remain relevant, but I believe this was addressed at large in other comments already)
There's also a clear difference between users of this site that come here for all types of content, and users who have "AI" in their usernames.
I think that the latter type might just have a bit of a bias in this matter?
I'm not sure, I frequently use LLMs for well-scoped, math-heavy functions (mostly for game development) where I don't necessarily understand what's going on inside the function, but I know what output I expect given some inputs, so it's easy for me to kind of blackbox test it with unit tests and iterate on the "magic" inside with an LLM.
I guess if I really stopped and focused on math for a year or two I'd be able to code that myself too, but every time I try to get deeper into math it's either way too complex for me to feel like it's time well spent, or it's just boring. So why bother?
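Roughly, the loop looks like this: I write the tests first from known input/output pairs and let the LLM iterate on the body until they pass. A minimal Python sketch (lerp_angle here is a made-up stand-in, not code from my actual project):

    import math
    import unittest

    def lerp_angle(a: float, b: float, t: float) -> float:
        """Interpolate between two angles (radians) along the shortest arc."""
        # This body is the part I let the LLM iterate on.
        diff = (b - a + math.pi) % (2 * math.pi) - math.pi
        return a + diff * t

    class TestLerpAngle(unittest.TestCase):
        # The tests encode what I actually know: expected outputs for known inputs.
        def test_wraparound_takes_shortest_path(self):
            # Halfway from 350 deg to 10 deg should land on 360 deg (i.e. 0 deg),
            # not on 180 deg.
            result = lerp_angle(math.radians(350), math.radians(10), 0.5)
            self.assertAlmostEqual(result, 2 * math.pi, places=6)

        def test_same_angle_is_a_fixed_point(self):
            self.assertAlmostEqual(lerp_angle(1.0, 1.0, 0.7), 1.0)

    if __name__ == "__main__":
        unittest.main()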
I didn't have such cases in mind, was replying to the "navigate complexity at scales human cognition wasn't designed for" aspect.
I agree, but it's not mine.
The use cases of these GPT tools are extremely limited. They demo well and are quite useful for highly documented workflows (e.g. they are very good at creating basic HTML/JS layouts and functionality).
However, even the most advanced GPT tools fall flat on their face when you start working with any sort of bleeding edge, or even just less-ubiquitous technology.
The Godot engine is an open-source project that has matured significantly since GPT tools hit the market.
The GPTs don't know what the new Godot features are, and there is a training gap that I'm not sure OpenAI and their competitors will ever be able to overcome.
Do you work on web applications? I've found that GPT tools get pretty stupid once you stop working with HTML/JS.
Godot with AI was definitely a worse experience than usual for me. I did not use the Godot editor, though it seems like the development flow for Godot is based around it. Scenes were generated through a Python script, which was of course written by Claude Code. Personally, I reviewed no line of code during the process.
My findings afterwards are:
1) Code quality was not good. I have a year of experience working with Unity, and the code examples you find online tend to be of incredibly poor quality. My guess is that if AI is trained on the online corpus of game development forums, the output will be absolutely terrible; game development especially is tainted by this poor-quality material. It did indeed not follow modern practices, even after I hooked up a context MCP that provides code examples.
2) It was able to refactor the codebase to modern practices once instructed to: I told it to figure out what modern practices were and to apply them, and it started making modifications like adding type hints and such. Commonly you would use predefined rules for this with an LLM tool; I did not use any for my experiment. That would be a one-time task, after which the AI will prefer your way of working. An example for Godot can be found here: https://github.com/sanjeed5/awesome-cursor-rules-mdc/blob/ma...
3) It was very difficult for Claude Code to debug. The platform seems to require working with a dedicated editor, and the flow for debugging is either through that editor or by launching the game and interacting with it. This flow is not currently suitable for out-of-the-box Claude Code or similar tools, which need to be able to independently verify that certain functions or features work as expected.
> Do you work on web applications? I've found that GPT tools get pretty stupid once you stop working with HTML/JS.
Not really - I work on developer experience and internal developer platforms. That is 80~90% Python, Go, Bash and Terraform, and maybe 10~20% TypeScript with React depending on the project.
With Claude I can even write IC10 code (with a bit of help and an understanding of how Claude works).
IC10 is a fictional, MIPS-like CPU in the game Stationeers. So that's pretty promising for most other things.
I just use "AI" instead of Google/SO when I need to find something out.
So far it mostly answers correctly, until the truthful answer comes close to "you can't do that". Then it goes off the rails and makes up shit. As a bonus, it seems to confuse related but less popular topics and mixes them up. Specific example: it mixes up CouchDB and Couchbase when I ask about features.
The worst part is 'correctly' means 'it will work but it will be tutorial level crap'. Sometimes that's okay, sometimes it isn't.
So it's not that it doesn't work for my flow, it's that I can't trust it without verifying everything so what flow?
Edit: there's a codebase that i would love to try an "AI" on... if i wouldn't have to send my customer's code to $random_server with $random_security with $untrustable_promises_of_privacy. Considering how these "AI"s have been trained, I'm sure any promise that my code stays private is worth less than used toilet paper.
Gut feeling is the "AI" would be useless because it's a not-invented-here codebase with no discussion on StackOverflow.
Human cognition wasn't designed to make rockets or AIs, but we went to the moon and the LLMs are here. Thinking and working and building communities and philosophies and trust and math and computation and educational institutions and laws and even sci-fi shows is how we do it.
We also killed quite a few astronauts.
But the loss of their lives also proves a point: that achievement isn't a function of intelligence but of many more factors like people willing to risk and to give their lives to make something important happen in the world. Loss itself drives innovation and resolve. For evidence, look to Gene Kranz: https://wdhb.com/wp-content/uploads/2021/05/Kranz-Dictum.pdf
https://en.wikipedia.org/wiki/Rogers_Commission_Report#Flawe...
> Loss itself drives innovation and resolve
True, but did NASA in 1986 really need to learn this lesson?
This isn't (just) rocket science, it's the fundamentals of risk liability, legality and process that should be well established in a (quasi-military) agency such as this.
They knew they were taking some gambles to try to catch up in the Space Race. The urgency that justified those gambles was the Cold War.
People have a social tendency to become complacent about catastrophic risks when there hasn't been a catastrophe recently. There's a natural pressure to "stay chill" when the people around you have decided to do so. Speaking out about risk is scary unless there's a culture of people encouraging each other to speak out and taking the risks seriously, because they all remember how bad things can be if they don't.
Someone actually has to stand up and say "if something is wrong I really actually want to and need to know." And the people hearing that message have to believe it, because usually it is said in a way that it is not believed.
I see people rapidly unlearning how to work by themselves and becoming dependent on GPT, making themselves quite useless in the process. They no longer understand what they're working with and need the tool's help to work. They're also entirely helpless when whatever 'AI' tool they use can't fix their problem.
This makes them both more replaceable and less marketable than before.
It will have and already has a huge impact. But it's kinda like the offshoring hype from a decade ago. Everyone moved their dev departments to a cheaper country, only to later realize that maybe cheap does not always mean better or even good. And it comes with a short term gain and a long term loss.
But the big thing is using AI to learn new things, explain some tricky math in a paper I am reading, help brain storm, etc. The value of AI is in improving ourselves.
To me this seems to be the single most valuable use case of newer "AI tools"
> generating a Bash shell script quickly
I do this very often, and to me this seems like the second most valuable use case of newer "AI tools"
> The value of AI is in improving ourselves
I agree completely.
> help brain storm
This strikes me as very concerning. In my experience, AI brainstorming ideas are exceptionally dull and uninspired. People who have shared ideas from AI brainstorming sessions with me have OVERWHELMINGLY come across as AI brained dullards who are unable to think for themselves.
What I'm trying to say is that Chat GPT and similar tools are much better suited for interacting with closed systems with strict logical constraints, than they are for idea generation or writing in a natural language.
Really, it is like students using AI: some are lazy and expect it to do all the work, some just use it as a tool as appropriate. Hopefully I am not misunderstanding you and others here, but I think you are mainly complaining about lazy use of AI.
but you're right that "I firmly believe that AI will not replace developers, but a developer using AI will replace a developer who does not." could have multiple other readings too.
Let's be fair - I made it intentionally a little provocative :)
What I might not have mentioned is that I've spent the last 5 years and 20,000 or so hours building an IDE from scratch. Not a fork of VSCode, mind you, but the real deal: a new "kernel" and integration layer with abilities that VSCode and its forks can't even dream of. It's a proper race and I'm about to drop the hammer on you.
Nobody knows how this will play out yet. Reality does not care about your feelings, unfortunately.
But on the other hand there is the other end, who think AGI is coming in a few months and LLMs are omniscient knowledge machines.
There is a sweet spot in the middle.
If online models aren't your thing twinny.dev + ollama will make it fully local.
a developer using AI in a low-cost region will replace any developer in a high cost region ;)
One thing I miss for the other users, i.e. the casual users that never use anywhere near their quota, is rollover. If you haven't used your quota this month, the unused portion rolls over to the next month.
Even better: provide a counter displaying both remaining usage available and the quota reset time.
But companies probably earn so much money from the vast majority of users under-using their plans that good, clear limits would only empower those users to actually get as much out of the product as they can.
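To illustrate what I mean by rollover plus a visible counter (the numbers and units are made up; it's just the accounting):

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Quota:
        monthly_allowance: int            # whatever unit the provider meters
        resets_at: datetime
        carried_over: int = 0             # unused allowance rolled in from last month
        used: int = 0

        def remaining(self) -> int:
            return max(self.monthly_allowance + self.carried_over - self.used, 0)

        def counter(self) -> str:
            # The "show me a counter" part: remaining usage plus the reset time.
            return f"{self.remaining()} left, resets {self.resets_at:%Y-%m-%d %H:%M}"

        def roll_over(self) -> None:
            # At each reset, whatever was unused becomes next month's bonus.
            self.carried_over = self.remaining()
            self.used = 0
            self.resets_at += timedelta(days=30)

    q = Quota(monthly_allowance=1000, resets_at=datetime(2025, 9, 1))
    q.used = 400
    print(q.counter())     # "600 left, resets 2025-09-01 00:00"
    q.roll_over()
    print(q.counter())     # "1600 left, resets 2025-10-01 00:00"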
Every company wants the marketing of unlimited, but none of them want the accountability.
The AI models have a bunch of different consumption models aimed at different types of use. I work at a huge company, and we’re experimenting with different ways of using LLMs for users based on different compliance and business needs. The people using all you can eat products like NotebookLM, Gemini, ChatGPT use them much more on average and do more varied tasks. There is a significant gap between low/normal/high users.
People using an interface to a metered API, which offers a defined LLM experience consume fewer resources and perform more narrowly scoped tasks.
The cost is similar and satisfaction is about the same.
There is no such thing as "unlimited" or "lifetime" unless it's self-hosted.
yep
> Adverse selection has been discussed for life insurance since the 1860s, and the phrase has been used since the 1870s.
This is a somewhat different issue, one that's largely accepted by courts and society, bar that one neighbour who is incensed they can't run a rack off their home internet connection that was marketed as unlimited.
In some cases, people discover creative ways to resell the service. Anthropic mentioned they suspect this was happening.
The weirdest part about this whole internet uproar, though, is that Anthropic never offered unlimited usage. It was always advertised as higher limits.
Yet all the comment threads about it are convinced it was unlimited and now it’s not. It’s weird how the internet will wrap a narrative around a story like this.
Or the American Airlines lifetime pass.. https://www.aerotime.aero/articles/american-airlines-unlimit...
I thought I had low usage, with my 1.5 years' worth saved. The only reason I pay for that plan is that on anything lower my provider does not offer rollover.
E.g. here in Slovenia, if you want unlimited calls and texting, you get 150 GB in your "package" for 9.99 EUR, but you somehow can't save that data for the next month.
https://www.hot.si/ponudba/paketi.html (not affiliated)
In the same way your next-door supermarket has effectively "infinite soup cans" for the needs of most people.
But I guess some people do really need "Eggs: contain eggs" in their egg carton otherwise they will throw a legal fit
When you order the second plate, it comes without the sauce and tastes flatter. You're full at that point and can't order a third.
Very creative and fun, if you ask me. I was prepared for this though, because the people I went with had told me exactly how it was going to go.
Nothing in our world is truly unlimited. Digital services and assets have different costs than their physical counterparts, but that just means different limits, not a lack of them. Electrical supply, compute capacity, and storage are all physical things with real world limits to how much they can do.
These realities eventually manifest when someone tries to build an "unlimited" service on top of limited components, similar to how you can't build a service with 99.999% reliability when it has a critical piece that can only get to 99.9%.
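A quick back-of-the-envelope version of that reliability point, assuming the pieces are all on the critical path and fail independently:

    # Availability of a serial chain is (at best) the product of its parts.
    # One 99.9% dependency caps the whole service below 99.9%, no matter how
    # good everything else is.
    components = [0.99999, 0.99999, 0.999]   # hypothetical component availabilities
    service = 1.0
    for availability in components:
        service *= availability
    print(f"best-case service availability: {service:.4%}")   # about 99.898%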
> Stop selling "unlimited", when you mean "until we change our minds"
The limits don't go into effect until August 28th, one month from yesterday. Is there an option to buy the Max plan yearly up front? I honestly don't know; I'm on the monthly plan. If there isn't a yearly purchase option, no one is buying unlimited and then getting bait-and-switched without enough time for them to cancel their sub if they don't like the new limits.
> A Different Approach: More AI for Less Money
I think it's really funny that the "different approach" is a limited time offer for credits that expire.
I don't like that the Claude Max limits are opaque, but if I really need pay-per-use, I can always switch to the API. And I'd bet I still get >$200 in API-equivalents from Claude Code once the limits are in place. If not? I'll happily switch somewhere else.
And on the "happily switch somewhere else", I find the "build user dependency" point pretty funny. Yes, I have a few hooks and subagents defined for Claude Code, but I have zero hard dependency on anything Anthropic produces. If another model/tool comes out tomorrow that's better than Claude Code for what I do, I'm jumping ship without a second thought.
The field is moving so fast that whatever was best 6 months ago is completely outdated.
And what is top tier today, might be trash in a few months.
Services are not the same thing as physical goods.
- note: "unlimited" does not mean free.
quote source: "Apple Just Found a Way to Sell You Nothing" https://www.youtube.com/watch?v=ytkk5NFZGjs
Repairs have always come with deductibles.
This is standard in virtually every insurance program. There are a lot of studies showing that even the tiniest amount of cost sharing completely changes how people use a service.
When something is unlimited and free, it enticed people to abuse it in absurd ways. With hardware, you would get people intentionally damaging their gear to get new versions for free because they know it costs them nothing.
https://www.forbes.com/sites/barrycollins/2024/11/28/mac-own...
Don't blame the company; it acts within the boundaries its paying customers allow, and Apple customers are known to be... much less critical of the company and its products, to put it politely, especially given its premium prices.
This is patently false and has been for the whole existence of Apple. Apple customers are voraciously critical of the company. Just probably not about the things you happen to consider important.
There is a case to be made that they sold a multiple and are changing x or rate limiting x differently, but the tone seems different from that.
They appear to have removed reference to this 50-session cap in their usage documents. (https://gist.github.com/eonist/5ac2fd483cf91a6e6e5ef33cfbd1e...)
So even the mystery people Anthropic references, who did run it "in the background, 24/7", would still have had to stay within usage limits.
It always had limits and those limits were not specified as concrete numbers.
It’s amazing how much of the internet outrage is based on the idea that it was unlimited and now it’s not. The main HN thread yesterday was full of comments complaining about losing unlimited access.
It’s so weird to watch people get angry about thinking they’re losing something they never had. Even Anthropic said less than 5% of accounts would even notice the new limits, yet I’ve seen countless comments raging that “everyone must suffer” due to the actions of a few abusing the system.
Some facts for sanity:
1- The poster of this blog article is Kilocode who makes a (worse) competitor to Claude Code. They are definitely capitalizing on this drama as much as they can. I’ve been getting hit by Reddit ads all day from Kilocode, all blasting Anthropic, with the false claim that their plan was "unlimited".
2- No one has any idea yet what the new limits will be, or how much usage it actually takes to be in the top 5% to be affected. The limits go into effect in a few days. We'll see then if all the drama was warranted.
no, even their announcement blog[0] said:
> With up to 20x higher usage limits
in the third paragraph.
0: https://www.anthropic.com/news/max-plan
Can you really ever compete when you are renting someone else's GPUs?
Can you really ever compete when you are going up against custom silicon built and deployed at scale to run inference at scale (i.e. TPUs built to run Gemini and deployed by the tens-of-thousands in data centers around the globe)?
Meta and Google have deep pockets and massive existing world-class infrastructure (at least for Google, Meta probably runs their php Facebook thing on a few VPS dotted around in some random colos /s ) . They've literally written the book on this.
It remains to be seen how much more money OpenAI can burn, but we've started to see how much Anthropic can burn if nothing else.
When companies sell unlimited plans, they’re making a bet that the average usage across all of those plans will be low enough to turn a profit.
These people “abusing” the plan are well within their right to use the API as much as they want. It just didn’t fall into the parameters Anthropic had expected.
LLM subscriptions need to go away, why can’t we just pay as we go? It’s the fairest way for everyone.
Anthropic never sold an unlimited plan
It’s amazing that so many people think there was an unlimited plan. There was not an unlimited plan.
> These people “abusing” the plan are well within their right to use the API as much as they want. It just didn’t fall into the parameters Anthropic had expected.
Correct! And they did. And now Anthropic is changing those limits in a month.
> LLM subscriptions need to go away, why can’t we just pay as we go? It’s the fairest way for everyone.
This exists. You use the API. It has always been an option. Again, I’m confused about why there’s so much anger about something that already exists.
The subscriptions are nice for people who want a consistent fee and they get the advantage of a better deal for occasional heavy usage.
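For anyone who hasn't tried it, the pay-as-you-go route is just the Messages API. A minimal sketch with the official anthropic Python SDK (this assumes ANTHROPIC_API_KEY is set in your environment, and the model string is a placeholder; pick whichever model you want to be billed for):

    from anthropic import Anthropic

    client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-sonnet-4-20250514",   # placeholder model name
        max_tokens=512,
        messages=[{"role": "user", "content": "Explain this stack trace: ..."}],
    )

    print(response.content[0].text)
    # The usage block is what you're actually billed on, per token, no quota:
    print(response.usage.input_tokens, response.usage.output_tokens)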
I'm told the $200/month plan was practically unlimited, I heard you could leave ~10 instances of Claude Code running 24/7. I will never pay for any of these subscriptions however so I haven't verified that.
>And now Anthropic is changing those limits in a month.
Which indicates the seller was being scammed. Now they're changing the limits so it swings back to being a scam for the user.
>I’m confused about why there’s so much anger about something that already exists
Yes but much LLM tooling requires a subscription. I'm not talking only about Anthropic/Claude Code. I can't use chatgpt.com using my own API key. Even though behind the scenes, if I had a subscription, it would be calling out to the exact same API.
I would not personally, as I can't spend thousands per month on an agentic tool. I hope they figure out limits that work. $100 / $200 is still a great deal. And the predictability means my company will pay for it.
Unlimited plans encourage wasting resources[0]. By actually paying for what you use, you can be a bit more economical and still get a lot of mileage out of it.
$100/$200 is still a great deal (as you said), but it does make sense for actually-$2000 users to get charged differently.
0: In my hometown, (some) people have unlimited central heating (in winter) for a fixed fee. On warmer days, people are known to open windows instead of turning off the heating. It's free, who cares...
Because Claude Code is absolutely impossible to use without a subscription? I’m fine with being limited, but I’m not with having to pay more than $200/month
Anybody that feels they’re not getting enough out of their subscription is welcome to use API instead.
Claude Code accepts an API key. You do not need a subscription
https://docs.anthropic.com/en/docs/claude-code/settings#envi...
When some users burn massive amounts of compute just to climb leaderboards or farm karma, it’s not hard to imagine why providers might respond with tighter limits—not because it's ideal, but because that kind of behavior makes platforms harder to sustain and less accessible for everyone else. Because on the other hand a lot of genuine customers are canceling because they get API overload message after paying $200.
I still think caps are frustrating and often too blunt, but posts like that make it easier to see where the pressure might be coming from.
[1] https://www.reddit.com/r/ClaudeAI/comments/1lqrbnc/you_deser...
Surely they thought about 'bad users' when they released this product. They can't be that naive.
Now that they have captured developer mindshare, users are bad.
what was the bait and switch? where in the launch announcement (https://www.anthropic.com/news/max-plan) did they suggest it provided unlimited inference?
why is anthropic tweeting about 'naughty users that ruined it for everyone' ?
Switch: "We limited your usage weekly and monthly. You don't know how those limits were set, we do but that's not information you need to know. However instead of choosing to hoard your usage out of fear of hitting the dreaded limit again, you've kept it again and again, using the product exactly the way it was intended to and now look what you've done."
they launched Claude Max (and Pro) as being limited. it was limited before, and it's limited now, with a new limit to discourage 24/7 maxing of it.
in what way was there a bait and switch?
The transparency problem compounds this. The sustainable path forward likely involves either much more transparent/clear usage-based pricing or significantly higher flat rates that actually cover heavy usage.
Claude 4 is good enough that people will pay whatever they ask as long as it's significantly less than the cost of doing it by hand. The loss leaders will need to fade away to manage the demand, now that there is significant value.
So, not unlimited? Like, if the abuse is separate from amount of use (like reselling; it can be against ToS to resell it even in tiny amounts) then sure, but if you're claiming "excessive" use is "abuse", then it is by any reasonable definition not unlimited.
Correct, not "unlimited" as in the dictionary definition of unlimited. Unlimited as in the plain meaning of unlimited as it is commonly used in this subject matter area. I.e., use it reasonably or hit the bricks, pal.
If there is a clear limit to that (and it seems there is now), then stop saying "unlimited" and start selling "X queries per day". You can even let users pay for additional queries if needed.
(yes i know queries is not a proper term to use here, but the principle stands)
"The real damage: You're not frustrating 5% of users—you're breaking trust with the exact people who drive growth and adoption."
"It's not rocket science. It’s our way of attracting users. Not bait and switch, but credits to try."
> The real damage: You're not frustrating 5% of users—you're breaking trust with the exact people who drive growth and adoption.
> When developers get "rate limit exceeded" while debugging at 2 AM, they're not thinking about your infrastructure costs—they're shopping for alternatives.
Notice a pattern here?
[0]: https://www.viberank.app/
Anthropic never sold Max plans as unlimited. There are two tiers, explicitly labeled "5x" and "20x", both referring to the increased usage over what you get with Pro. Did all the people complaining that Anthropic reneged on their "promise" of unlimited usage not read anything about what they were signing up to pay $100 or $200/month for? Or are they not even customers?
It was never unlimited.
They never advertised unlimited usage. The Max plan clearly said it had higher limits.
This fabrication of a backstory is so weird. Why do so many people believe this?
And to be clear, the users abusing the "unlimited" rates they were offering to do absolutely nothing productive (see vibe-coding subreddits) are no better.
Claude Max, to my knowledge, was never marketed as "unlimited". Claude Max gives you WAY more tokens than $100/$200 would buy. When you get rate limited, you have the option to just use the API. Overall, you will have gotten more value than just using the API alone.
And you always had, and continue to have, the option of just using the API directly. Go nuts.
The author sounds like a petulant child. It's embarrassing, honestly.
But it is so hard to explain to product people that there is a limit to how much certain services can scale and still be profitably supported.
Everyone was better off without the deception. Now we are in the early days of AI. Providers should be honest but won’t until forced to.
Because just think about it. Unlimited is untenable. Another example, in the early days of broadband in Australia a friend’s parents were visited by a Telstra manager because he “downloaded more than his entire suburb”. A manager!
Really you can’t blame the providers; some users will ruin it for everyone. I am not saying that is anyone specific. But none of this should surprise us. We’ve been here before. Just look back at how other markets developed & you will see patterns that tell you what’s next.
Gemini did go from a huge free tier to 100 free uses a day, but I expected that.
EDIT: let me clarify: I just retired after over 50 very happy years working as a software developer and researcher. My number one priority was always self-improvement: learning new things and new skills that incidentally I could sometimes use to make money for whoever was paying me. AI is awesome for learning and general intellectual pursuits, and pairs nicely with reading quality books, listening to lectures on YouTube, etc.
I promise I’m not being snarky here - I don’t understand how people are burning through their $200/mo plan usage so quickly. Are they spamming prompts? Not using planning mode? I’ve seen a few folks running multiple instances at once… is that more common than I think?
Just below me as I type there's a comment saying they're refusing to cancel a subscription (may not be below me any more when I finish typing).
Somewhere lower there's a comment saying they do not show the full price when you subscribe, but add taxes on top of it and leave you to notice the surprise on your credit card statement.
Is there an ethical "AI" service anywhere?
Differentiation through honesty: In a market full of fluff, directness stands out. Customers might respect a brand more for telling the truth plainly, even if the truth isn’t ideal.
The risk: It could scare off some customers who don’t read the fine print anyway. But that may not be a loss—it might actually filter in the right kind of customer, the one who wants to know what they’re really getting.
Let people cook and give them some time to find out how to do this. Voice discontent, but don't be an asshole.
And how does this compare to the case with "unlimited"? Overall, will the total usage be higher or lower?
What's the alternative when every other vendor (eventually) has the same limits?
In subscription plans, users who aren't using 100% of their subscription subsidize other users, which is opaque and not really fair.
_________ *here's where we tell you the limits we said we didn't have
some say they have to define a huge limit and that's it.
Limits are sometimes hard to define:
- they must be so huge that a (human) user finally perceives them as unlimited, otherwise he will compare them with competitors'
- but not so huge that the 0.1% of users will try to reach them
A fairer wording would be one that categorizes the type of use:
- human: a human has physical limits (e.g. words typed on a keyboard per unit of time)
- bot: from one Arduino to heavyweight hardcore clustering, virtually no limit
They just sell you a 20x higher usage limit???? Nothing there tells me unlimited.
“Claude is promising unlimited and it isn’t sustainable.”
“For a limited time only pay us $20 and get $80 worth of credits.”
Look at what these people on HN said!
Come on.
I hold them no ill will for rapidly changing pricing models, raising prices, or doing whatever they need to do in what must be a crazy time of finding insane PMF in such a short window.
BUT the communication is basically inexcusable IMO. I don't know what I'm paying for, I don't know how much I get, their pricing and product pages have completely different information, they completely hide the fact that Opus use is restricted to the Max plan, they don't tell you how much Opus use you get, and their help pages and pricing pages look like they were written by an intern and pushed directly to prod. I find out about changes on Twitter/HN before I hear about them from Anthropic.
I love the Claude Code product, but Anthropic the company is definitely nudging me to go back to OpenAI.
This is also why competition is great though - if one company had a monopoly the pricing and UX would be 20x worse.
How am I supposed to bait people into my product to screw them up then?
Find a better alternative.
Thoughts? If you want me to, I can elaborate on what I really mean, but I hope it was understandable enough.
Was trying to "analogize" to "unlimited".
https://www.youtube.com/watch?v=NOX2C1UMxL0
Why do all users need to suffer???
I don’t know why you think everyone is going to suffer.
Unlimited works better for startups because they have zero idea of the load challenges that will come in the future. And they don't have much idea of how well their product will be received by the market.
Anthropic got that experience and decided they needed to prioritize reasonableness over customer trust. And they are a startup, so we all get this.
OTOH there is no such thing as unlimited. Atoms in the universe are finite. Your use is finite. Your time is finite. Your abuse is limited and finite. You are a sucker for believing in the unlimited myth, just like you think others are suckers for believing in divine intervention or conspiracy theorists are suckers for believing in unlimited power.
Philosophical meandering and blaming the customer for not understanding company's shady marketing is not something I'd consider to be cool.
For Claude Code and similar services, we’re still in the very early stages of the market. We’re using AI almost for free right now. It’s clear this isn’t sustainable. The problem is that they couldn’t even sustain it at this earliest stage.
wait until their investors get fed up with pouring money down the drain and demand they make a profit from the median user
that model training and capex to build the giant DCs and fill them with absurdly priced nvidia chips isn't free
as an end user: you will be the one paying for it
On a more serious note, I'm sure most people can't fathom, or even think about, the resources they are consuming when using AI tools. These things don't just use energy; they consume it the way a black hole sucks in light.
In some cases, your queries can consume your home's daily energy needs in an hour or so.
But hey, this is just a sales pitch from one company I wouldn't trust by taking a dump on another company I wouldn't trust.
> You can tell how it's intentional with both OpenAI and Anthropic by how they're intentionally made opaque. I cant see a nice little bar with how much I've used versus have left on the given rate limits
Well, that's a scam. A legal one.
you can see here in this Reddit thread from April, when Claude Max was launched, that it was explicitly explained as being limited: https://www.reddit.com/r/ClaudeAI/comments/1jvbpek/breaking_...
Max is described as "5-20x more than Pro", clearly indicating both are limited.
here's their launch blog post: https://www.anthropic.com/news/max-plan
> The new Max plan delivers exactly that. With up to 20x higher usage limits, you can maintain momentum on your most demanding projects with little disruption.
obviously everyone wants everything for free, or cheap, and no one wants prices to change in a way that might not benefit them, but the endless whinging from people about how unfair it is that anthropic is limiting access to their products sold as coming with limited access is really extremely tedious even by HN standards.
and as pointed out dozens of times in these threads, if your actual worry is running out of usage in a week or month, Anthropic has you covered - you can just pay per token by giving Claude Code an API key. doing that 24/7 will cost ~100x what Max does though, I wonder if that's a useful bit of info about the situation or not?
Otherwise it’s just a change in the offering. You can unsubscribe freely.
This sort of entitlement puts me off. Prices for things change all of the time.