Notes on rolling out Cursor and Claude Code

86 points by jermaustin1 | 12 comments | 5/8/2025, 4:34:39 PM | ghiculescu.substack.com

Comments (12)

aerhardt · 1h ago
> So far the biggest limiting factor is remembering to use it. Even people I consider power users (based on their Claude token usage) agree with the sentiment that sometimes you just forget to ask Claude to do a task for you, and end up doing it manually. Sometimes you only notice that Claude could have done it, once you are finished. This happens to me an embarrassing amount.

Yeah, this happens to me too. Does it say something about the tool?

It's not like we are talking about luddites who refuse to adopt the technology, but rather a group that is very open to using it. And yet sometimes, we "forget".

I very rarely regret forgetting. I feel a combination of (a) it's good practice, I don't want my skills to wither, and (b) I don't think the AI would've been that much faster, considering the cost of thinking up the prompt and that I was probably in flow.

emeraldd · 16m ago
If you're forgetting to use the tool, is the tool really providing benefit in that case? I mean, if a tool truly made something easier or faster that was onerous to accomplish, you should be much less likely to forget there's a better way ...
jaapbadlands · 6m ago
There's a balance to be calculated each time you're presented with the option. It's difficult to predict how much iteration the agent is going to require and how frustrating it might end up being, all the while losing your grip on the code being your own and your mental model of it, vs. just going in and doing it yourself, knowing exactly what's going on, and simply asking it questions if any unknowns arise. Sometimes it's easier to not even make the decision, so you disregard firing up the agent in a blink.
NitpickLawyer · 1h ago
> The most common thing that makes agentic code ugly is the overuse of comments.

I've seen this complaint a lot, and I honestly don't get it. I have a feeling it helps LLMs write better code. And removing comments can be done in the reading pass, somewhat forcing you to go through the code line by line and "accept" the code that way. In the grand scheme of things, if this were the only downside to using LLM-based coding agents, I think we've come a long way.

manojlds · 26m ago
Yeah that's what I do, remove the comments as I read through.
christophilus · 7m ago
As someone who really dislikes using Cursor, what does the HN hivemind think of alternatives? Is there a good CLI like Claude Code but for Gemini / other models? Is there a good Neovim plugin that gets the contextual agent mode right?
hallh · 1h ago
Having linting/prettifying and fast test runs in Cursor is absolutely necessary. On a new-ish React TypeScript project, all the frontier models insist on using outdated React patterns, which consistently need to be corrected after every generation.
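One way to push back on that (my assumption, not something the commenter describes) is a project rules file that Cursor injects into every generation. The filename and rules below are purely illustrative:

```text
# Hypothetical .cursorrules (contents illustrative, not from the post)
- Use function components with hooks; never class components.
- Don't import React just for JSX (the project uses the new JSX transform).
- Don't hand-roll state sync; prefer the built-in React hooks.
```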

Now I only wish for a Product Manager model that can render the code and provide feedback on UI issues. Using Cursor and Gemini, we were able to get an impressively polished UI, but it needed a lot of guidance.

> I haven’t yet come across an agent that can write beautiful code.

Yes, the AI doesn't mind hundreds of lines of if statements; as long as it works, it's happy. It's another thing that needs several rounds of feedback and adjustment to become human-friendly. I guess you could argue that human-friendly code will soon be a thing of the past, so maybe there's no point fixing that part.

I think improving the feedback loops and reducing the frequency of "obvious" issues would do a lot to increase the one-shot quality and raise the productivity gains even further.

kubav027 · 15m ago
Unless you are prototyping, human-friendly code is a must. It is easy to write huge amounts of low-quality code without AI. The hard part is long-term maintenance. I have not seen any AI tool that helps with that.
datadrivenangel · 2h ago
"Making it easy to run tests with a single command. We used to do development & run tests via docker over ssh. It was a good idea at the time. But fixing a few things so that we could run tests locally meant we could ask the agent to run (and fix!) tests after writing code."

Good devops practices make AI coding easier!
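A minimal sketch of that "single command" entry point, assuming a Node project with npm scripts (the script name and commands are illustrative, not from the post):

```shell
#!/usr/bin/env sh
# bin/check — hypothetical one-shot entry point an agent can run after edits.
# set -e fails fast, so the agent sees the first broken step in its output.
set -e
npm run lint   # assumed lint script in package.json
npm test       # assumed test script in package.json
```

The point is less the specific commands than that the agent only needs to remember one invocation, and a nonzero exit code tells it exactly when to keep iterating.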

tptacek · 1h ago
This is one of the most exciting things about coding agents: tooling that was too tedious to be practical before is now ultra relevant. I wrote a short post about this a few weeks ago, on the idea that things like Semgrep are now super valuable where they were kind of marginal before agents.
kasey_junk · 41m ago
And they also make the payoff for “minor” improvements bigger.

We’ve started more aggressively linting our code because a) it makes the AI better and b) we made the AI do the tedious work of fixing our existing lint violations.
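That workflow can be sketched in two commands (illustrative, assuming ESLint is the linter; the file name is made up):

```shell
# Auto-apply the fixes the linter can make mechanically.
npx eslint . --fix
# Dump whatever remains as a worklist to hand to the agent.
# The || true keeps a nonzero ESLint exit from aborting the script.
npx eslint . > lint-todo.txt || true
```

Running `--fix` first shrinks the worklist to the violations that actually need judgment, which is the part worth spending agent iterations on.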

jbellis · 3h ago
Good to see experiences from people rolling out AI code assistance at scale. For me the part that resonates the most is the ambition unlock. Using Brokk to build Brokk (a new kind of code assistant focused on supervising AI rather than autocompletes, https://brokk.ai/) I'm seriously considering writing my own type inference engine for dynamic languages which would have been unthinkable even a year ago. (But for now, Brokk is using Joern with a side helping of tree-sitter.)