Ask HN: Have you used Claude Code? Is it any good?
mbm · 5/6/2025, 9:43:48 PM
If you do use it, how does it fit into your workflow?
Comments (7)
nickisnoble · 2h ago
It's still not great, but it's better than anything else.
Best results when:
1. Run /init and let it maintain a CLAUDE.md
2. Ask it to run checks + tests before/after every task, and add those commands to the "no permission needed" list – this improves quality by a lot (sketch below)
3. Ask it to do TDD, but manually check the actual test is correct
4. Every time it finishes something solid: git commit manually, /compact context (saves hella $$$ + seems to improve focus)
Honestly I treat it like a junior programmer I'm pairing with. If you pay attention, you can catch it being stupid early and get it back on track. It's best when you know exactly what you want and it's just boring work. It's really good with clear instructions, e.g. "Refactor X -> Y, using {design pattern}."
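To make steps 1–3 concrete: the kind of CLAUDE.md I mean looks roughly like this (the npm commands are made up – swap in whatever your stack uses):

    # CLAUDE.md

    ## Commands
    - `npm run lint` – lint the codebase
    - `npm test` – run the unit tests
    Run both before and after every task.

    ## Workflow
    - TDD: write a failing test first, then make it pass.
    - Keep changes small; the human commits manually after each solid chunk.

And the "no permission needed" list lives in the project's Claude settings – at the time of writing it's something like this in .claude/settings.json (check the docs, the exact schema may have changed):

    {
      "permissions": {
        "allow": [
          "Bash(npm run lint)",
          "Bash(npm test)"
        ]
      }
    }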
viraptor · 20h ago
Much worse than Cursor with Claude models in my experience. I'm getting many useless changes and things being reimplemented from scratch instead of moving files. Not impressed at all.
mbm · 19h ago
Interesting, thanks for sharing.
kasey_junk · 17h ago
Fwiw I’ve gotten really impressive results from Claude Code, and it’s the first time I’ve seen that with any agentic flow (including Cursor).
Sorry to give opposite anecdotes. It’s one of the things I find most irritating about AI right now.
disqard · 13h ago
I'm finding it to be a force-multiplier in my side-project work.
It needs careful oversight: if you're too generous with it, it'll happily pile tons of code into your codebase that will make it horrendously difficult to understand and debug later. As capable as it is, I find it prudent to keep it on a short leash.
muzani · 14h ago
Anecdotes are better than data, especially when that data is exclusively benchmarked on Python and code competitions instead of actual work.
tombot · 7h ago
It gets a lot better the more you adopt best practices; my first step is often just to ask it to make a plan I can review before it touches anything.
https://www.anthropic.com/engineering/claude-code-best-pract...
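For example, my opening prompt is usually something along these lines (the wording and the task are just illustrative):

    Read through the codebase and write up a plan for adding rate limiting
    to the API endpoints. Don't touch any files yet – just show me the plan
    so I can review it before you start.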