kordlessagain · 2h ago
No, you are wrong. It is confirmed to be acting very oddly, and the context windows have gotten extremely short. That is in both Claude Desktop and the web client.
It is possible to gauge whether it is performing well without it being a user assumption. In my case, I can clearly see that the tool use has changed. That means something happened on the backend to limit the time taken to think about things, or SOMETHING. I have no idea what, but it seems to be over-rotating on explaining why something is broken, offering excuses it has literally just been told aren't the issue.
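To put numbers behind that instead of vibes, you can replay one fixed prompt against the API on a schedule and log latency, token counts, and the response. Here is a rough sketch of what I mean; to be clear, the model name, prompt, and log path are placeholders of my own, not anything Anthropic ships, and this probes the model over the API rather than Claude Desktop itself, but it at least shows whether the model's behavior is drifting:

    # probe_regression.py -- rough sketch: replay one fixed prompt on a
    # schedule and log response metrics, so "the model got worse" becomes a
    # measurement rather than a feeling. Assumes the official anthropic
    # Python SDK and an ANTHROPIC_API_KEY in the environment; the model
    # name, prompt, and log path below are placeholders.
    import json
    import time

    import anthropic

    FIXED_PROMPT = "Write a Python function that merges two sorted lists."
    MODEL = "claude-3-5-sonnet-20241022"  # pin one model so runs stay comparable

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    start = time.time()
    message = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": FIXED_PROMPT}],
    )
    elapsed = time.time() - start

    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model": MODEL,
        "latency_s": round(elapsed, 2),
        "input_tokens": message.usage.input_tokens,
        "output_tokens": message.usage.output_tokens,
        "stop_reason": message.stop_reason,
        "response": "".join(b.text for b in message.content if b.type == "text"),
    }

    # Append one JSON line per run; what matters is the trend, not one sample.
    with open("probe_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

Run it from cron once a day and diff the log; if the token counts or the quality of the response falls off a cliff on a given date, you have something concrete to point at.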
Today, it disabled code to "fix" the issue instead of applying a patch that was written by Gemini. And Claude Desktop is a buggy piece of shit. I have no idea who is in charge of that project, but they should be fired immediately. It locks up constantly, crashes frequently, drops messages mid-stream (and deletes what you posted to it as well), and throws random error messages. I've complained to support multiple times and just get stock replies saying they are working on things.
On top of all that, it shows "Claude can't run the code it writes, yet" over and over and over again at the bottom of the messages. It's a usability nightmare, is what it is.
consumer451 · 3h ago
I am aware of the "your LLM is terrible now" trope, and have called it out many times myself. It's an interesting phenomenon.
However, I posted this link for selfish reasons. I wanted to see if this experienced dev team's complaints rang true to other Claude Code users as well. I have been considering CC, but thought maybe I had missed the initial good times.
Is this not at all your experience with CC?
PaulHoule · 3h ago
I believe, one way or another, that the poster of that article is correctly evaluating the performance of CC now. Whether it really was better before, or whether he is just remembering the early days through rose-tinted glasses, is beside the point.
We've seen this complaint circulating ever since the ChatGPT era began. For people who initially thought they were getting good results, part of it was that they got lucky and then experienced reversion to the mean, and part of it was that they got seduced. I mean, coding agents do have some value, but they screw up a lot.