Any claims about functionality with coding assistants should be looked upon skeptically without including the prompts, IMO.
It could very well be that Cursor isn’t very helpful. It could also be the case that the person prompting is not providing enough context for the problem at hand.
It’s impossible to tell without the full chat history.
mystraline · 2h ago
People who hype IDEs usually lack technical skills.
People who hype autocomplete usually lack technical skills.
People who hype memory-safe languages usually lack technical skills.
People who hype compilers usually lack technical skills.
That was some wordplay: each line attacks a technology that made computing more attainable to more people. With LLMs, though, there is one major worry: they encourage de-skilling and getting addicted to having an LLM think for us. If you can run your own LLMs, you're resilient to that. The really bad side is when the LLM companies put price tags on the 'think-for-you' machine. That represents a great de-skilling and an attack on critical thought.
I'm not saying "don't use LLMs". I am saying to run them yourselves, and learn how to work with them as an interactive encyclopedia, and also not to let them have core control over intellectual thought.
yladiz · 2h ago
> However with LLMs, there is one major worry - in that it encourages de-skilling and getting addicted to having a LLM think for us. Now, if you can run your own LLMs, youre resilient to that.
Not sure what you mean. Can't a local LLM get you addicted in the same way a cloud one can?
mystraline · 2h ago
Running a local LLM lays all the guarded secrets bare.
For example, you can run multiple LLMs side by side and compare their outputs.
You can issue your own system messages (commands) to get specific behavior, like ignoring arbitrary moral guardrails or processing first- and second-order results.
By running and commanding an LLM locally, you become an actual tool user, rather than a service user at the whim of whatever the company (OpenAI, etc.) wishes.
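To make the point about controlling the system message concrete, here is a minimal sketch of what that looks like against a local model. It assumes an OpenAI-compatible endpoint such as the one Ollama exposes on `localhost:11434`; the URL, model name, and prompt text are all illustrative assumptions, not part of the original comment.

```python
import json

def build_request(system_msg: str, user_msg: str, model: str = "llama3"):
    """Build a chat request whose system message is entirely yours.

    With a hosted service the system prompt is layered under the
    provider's own hidden instructions; locally, this is all there is.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_msg},  # fully under your control
            {"role": "user", "content": user_msg},
        ],
        "stream": False,
    }

payload = build_request(
    "You are a terse reference. Cite sources, no filler.",
    "What does POSIX say about rename() atomicity?",
)
print(payload["messages"][0]["role"])  # system

# To actually send it (requires a running local server, e.g. Ollama):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

The same payload shape works against any OpenAI-compatible server, which is what makes swapping between local models trivial.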
cjoelrun · 48m ago
Useful LLM architectures for working on complex codebases seem to be out of the reach of consumer hardware.
yoyohello13 · 2h ago
Please read the article first. This is not “AI bad” despite what the title may imply.
I do think we are in a transitional period now. Eventually all editors will have the same agentic capability. That's why editor-agnostic tools like Claude Code and Aider are much more exciting to me.
mattnewton · 8m ago
I read the article and was more confused. The author seemed to make a lot of assumptions about cursor without actually trying it and then used those assumptions to justify not trying it.
ko_pivot · 2h ago
> In fact, Cursor’s code completion isn’t much better than GitHub Copilot’s. They both use the same underlying models
Not sure that this is true. Cursor's agent mode is different from its code completion, and the code completion is a legitimately novel model, I believe.
alook · 2h ago
They definitely do train their own models, the founders have described this in several interviews.
I was surprised to learn this, but they made some interesting choices (like using sparse mixture-of-experts models for their tab completion model, to get high throughput/low latency).
Originally I think they used frontier models for their chat feature, but I believe they've recently replaced that with something custom for their agent feature.
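The throughput/latency argument for sparse mixture-of-experts mentioned above can be sketched in a few lines: of N expert networks, a router picks only the top-k per token, so parameter count stays large while per-token compute stays proportional to k. This toy NumPy version is an illustration of the general technique, not Cursor's actual model.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route one token through only the top-k of len(experts) experts."""
    logits = x @ gate_w                 # router scores, one per expert
    top = np.argsort(logits)[-k:]       # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()            # softmax over the chosen experts only
    # Only k expert MLPs actually run for this token; the rest are skipped.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
gate_w = rng.normal(size=(d, n_experts))
# Each "expert" is just a linear map here, standing in for an MLP.
experts = [(lambda W: (lambda x: x @ W))(rng.normal(size=(d, d)))
           for _ in range(n_experts)]
y = moe_forward(rng.normal(size=d), gate_w, experts, k=2)
```

With k=2 of 4 experts active, this token pays roughly half the expert compute of a dense model with the same total parameters, which is the latency win for something interactive like tab completion.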
varsketiz · 2h ago
I'm really curious what problems the codebases of today's startups will have in a few years. The internet is already full of memes about working with legacy code. What will legacy codebases look like when half of the code was generated with AI tools?
trowawee · 1h ago
They'll be trash, but after a decade bouncing around startups, that's not exactly a problem unique to LLMs. There's probably going to be more startups with more trash than there used to be, but hey: that's job security.
mellosouls · 1h ago
Ironically, considering the title, this seems like a junior dev/young man take.
A little too sure of itself, incautious and overly generalising.
mattnewton · 2h ago
Cursor does actually train their own models, most importantly models that apply edits to files and models that assemble context for the LLM agents. Has the author actually used the tools they are writing about?
Cursor/Windsurf/Roo/Kilo/Zed are about smoothing over the rough edges in actually getting work done with agentic models. And somewhat surprisingly, the details matter a lot in unlocking value.
rattlesnakedave · 2h ago
Low quality clickbait article. “I like my editor because I’m used to it” ok man, do you want an award? The claims about the limitations of vscode and cursor’s code navigation abilities aren’t even accurate. The author just doesn’t know how to use them. There’s a reason it’s popular, and it’s not “everyone is dumber and less talented than me.”
rexarex · 2h ago
Cursor with Gemini 2.5 pro has been really great.
tmpz22 · 2h ago
Big fan of Gemini 2.5 + Cursor, but it's far from a panacea.
After using Cursor heavily the past few weeks, I agree with the author's points. The ability to work outside of Cursor/AI is paramount within small software teams, because you will periodically run into things it can't do, or, worse, it will lead you in a direction that wastes a lot of developer time.
Cursor will get better at this over time, and so will the underlying models, but the executive vision here is absolutely broken. At this point I can only laugh at the problems this generation of startups will inevitably run into when they realize their teams no longer have the expertise to solve things in more traditional ways.
hliyan · 2h ago
I installed and played around with Cursor for all of perhaps two hours before giving up in disgust. The first few generations are generally quite good. After that, problems started to stack up so quickly that I found myself rubbing my temples. You would probably have the same reaction if you read the generated code. I decided it's better to stick with my current approach of using LLMs as an expert system: helping me figure out which functions, libraries, algorithms, data structures, or patterns to use, and occasionally asking them to write a standalone function.
hamburglar · 2h ago
I get it. I’m supposed to not take advantage of a very powerful (yet often flawed) tool because I am insecure about my technical skills cred. Gotcha.
Asraelite · 2h ago
People Who Downplay Cursor Usually Lack the Skills to Utilize It Properly /s
> In fact, Cursor’s code completion isn’t much better than GitHub Copilot’s. They both use the same underlying models
The difference is in the tooling around the models: codebase indexing, docs, MCP servers, rules, linter error feedback, and agents that automatically incorporate all of those other things together. If you don't use all that, then the models will only reach a fraction of their potential usefulness.
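As a concrete illustration of the "rules" piece: Cursor can read project-level instruction files that get injected into the model's context on every request. The filename convention is Cursor's, but the contents below are entirely invented as a hypothetical example of the kind of guardrails people put in them:

```text
# .cursorrules -- project conventions fed to the assistant
- This is a TypeScript monorepo; prefer pnpm commands over npm.
- Colocate unit tests for new code in __tests__/ directories.
- Never edit files under generated/ -- they are build artifacts.
- Use the existing Result<T, E> helper in src/lib/result.ts
  instead of throwing exceptions.
```

Combined with linter feedback and codebase indexing, this is the "tooling around the models" that a bare chat window never sees.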
I agree that Cursor is overhyped by some, but it sounds like the author hasn't given it a fair chance.
Taek · 2h ago
Anecdotally, this has not been my experience at all. Several of the strongest coders I know use Cursor and love it (coders who have been at the top of their field since before ChatGPT was a thing).