Notes on rolling out Cursor and Claude Code

78 points | jermaustin1 | 7 comments | 5/8/2025, 4:34:39 PM | ghiculescu.substack.com

Comments (7)

aerhardt · 1h ago
> So far the biggest limiting factor is remembering to use it. Even people I consider power users (based on their Claude token usage) agree with the sentiment that sometimes you just forget to ask Claude to do a task for you, and end up doing it manually. Sometimes you only notice that Claude could have done it, once you are finished. This happens to me an embarrassing amount.

Yea, this happens to me too. Does it say something about the tool?

It's not like we're talking about luddites who refuse to adopt the technology; this is a group that's very open to using it. And yet sometimes, we "forget".

I very rarely regret forgetting. I feel a combination of (a) it's good practice, and I don't want my skills to wither, and (b) I don't think the AI would've been that much faster, considering the cost of composing the prompt and the fact that I was probably in flow.

NitpickLawyer · 1h ago
> The most common thing that makes agentic code ugly is the overuse of comments.

I've seen this complaint a lot, and I honestly don't get it. I have a feeling it helps LLMs write better code. And removing comments can be done in the reading pass, somewhat forcing you to go through the code line by line and "accept" the code that way. In the grand scheme of things, if this were the only downside to using LLM-based coding agents, I think we've come a long way.
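
For what it's worth, here is a made-up sketch of the comment style being complained about (not from the article), where every comment simply restates the line below it:

```typescript
// Hypothetical agent output: narrating comments that add nothing beyond the code.
function totalPrice(items: { price: number; qty: number }[]): number {
  // Initialize the total to zero
  let total = 0;
  // Loop over each item
  for (const item of items) {
    // Add the item's price times its quantity to the total
    total += item.price * item.qty;
  }
  // Return the computed total
  return total;
}
```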

hallh · 50m ago
Having linting/prettifying and fast test runs in Cursor is absolutely necessary. On a newish React TypeScript project, all the frontier models insist on using outdated React patterns, which consistently need to be corrected after every generation.
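
To illustrate with a hypothetical correction (not from the thread): models often mirror derived values into state via an effect, where the modern pattern is to compute during render.

```tsx
import { useState } from "react";

// Modern pattern: derive `total` during render instead of syncing it into
// state with useEffect, which is the outdated habit that keeps resurfacing.
function Price({ amount }: { amount: number }) {
  const [qty, setQty] = useState(1);
  const total = amount * qty; // derived value, no extra state or effect needed
  return (
    <button onClick={() => setQty((q) => q + 1)}>
      {qty} x {amount} = {total}
    </button>
  );
}

export default Price;
```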

Now I only wish for a Product Manager model that can render the code and provide feedback on UI issues. Using Cursor and Gemini, we were able to get an impressively polished UI, but it needed a lot of guidance.

> I haven’t yet come across an agent that can write beautiful code.

Yes, the AI doesn't mind hundreds of lines of if statements; as long as it works, it's happy. That's another thing that needs several rounds of feedback and adjustment to make it human-friendly. I guess you could argue that human-friendly code will soon be a thing of the past, so maybe there's no point fixing that part.
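
As a sketch of the kind of cleanup involved (the names here are illustrative, not from the post):

```typescript
type Plan = "free" | "pro" | "enterprise";

// Agent-style output: a chain of if statements that works but is tedious to read.
function seatLimitVerbose(plan: Plan): number {
  if (plan === "free") {
    return 3;
  }
  if (plan === "pro") {
    return 25;
  }
  return 500;
}

// After a "make this human-friendly" review pass: table-driven, readable at a glance.
const SEAT_LIMITS: Record<Plan, number> = { free: 3, pro: 25, enterprise: 500 };
const seatLimit = (plan: Plan): number => SEAT_LIMITS[plan];
```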

I think improving the feedback loops and reducing the frequency of "obvious" issues would do a lot to increase the one-shot quality and raise the productivity gains even further.

datadrivenangel · 1h ago
"Making it easy to run tests with a single command. We used to do development & run tests via docker over ssh. It was a good idea at the time. But fixing a few things so that we could run tests locally meant we could ask the agent to run (and fix!) tests after writing code."

Good devops practices make AI coding easier!
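
A minimal sketch of what "run tests with a single command" can look like (vitest and npm scripts are assumptions here, not from the article):

```typescript
// Assumed package.json script: { "scripts": { "test": "vitest run" } }
// With that in place, the agent can be told "run `npm test` and fix any failures".
import { describe, expect, it } from "vitest";

describe("smoke", () => {
  it("runs locally, no docker-over-ssh required", () => {
    expect(1 + 1).toBe(2);
  });
});
```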

tptacek · 27m ago
This is one of the most exciting things about coding agents: they make a lot of tooling that was so tedious to use it was impractical suddenly ultra-relevant. I wrote a short post about this a few weeks ago, about the idea that things like Semgrep are now super valuable where they were kind of marginal before agents.
kasey_junk · 6m ago
And also the payoff for “minor” improvements to be bigger.

We've started linting our code more aggressively, because (a) it makes the AI better and (b) we made the AI do the tedious work of fixing our existing lint violations.
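
A sketch of that setup, assuming a typescript-eslint flat config (the specific rules are illustrative, not from the comment):

```typescript
// eslint.config.mjs (hypothetical): tighten the rules, then have the agent run
// `npx eslint . --fix` and clean up whatever violations remain.
import tseslint from "typescript-eslint";

export default tseslint.config(...tseslint.configs.recommended, {
  rules: {
    "@typescript-eslint/no-unused-vars": "error",
    "@typescript-eslint/no-explicit-any": "error",
  },
});
```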

jbellis · 2h ago
Good to see experiences from people rolling out AI code assistance at scale. For me the part that resonates the most is the ambition unlock. Using Brokk to build Brokk (a new kind of code assistant focused on supervising AI rather than autocompletes, https://brokk.ai/) I'm seriously considering writing my own type inference engine for dynamic languages which would have been unthinkable even a year ago. (But for now, Brokk is using Joern with a side helping of tree-sitter.)