6 Weeks of Claude Code

87 points | mpweiher | 38 comments | 8/2/2025, 12:20:59 PM | blog.puzzmo.com

Comments (38)

Fraterkes · 1h ago
Irrespective of how good Claude Code actually is (I haven't used it, but I think this article makes a really cogent case), here's something that bothers me: I'm very junior, and I have a big, slow, ugly codebase of GDScript (basically Python) that I'm going to convert to C# to both clean it up and speed it up.

This is for a personal project, I haven’t written a ton of C# or done this amount of refactoring before, so this could be educational in multiple ways.

If I were to use Claude for this, I'd feel like I was robbing myself of something that could teach me a lot (and maybe motivate me to start out by structuring my code better in the future). If I don't use Claude, I feel like I'm wasting my (very sparse) free time on a pretty uninspiring task that may very well be automated away in most future jobs, mostly out of some (misplaced? masochistic?) belief about programming craft.

This sort of back and forth happens a lot in my head now with projects.

stavros · 4m ago
In my experience, unless you review the generated code (and become proficient enough in C# to do that), the codebase will become trash very quickly.

Errors compound with LLM coding, and, unless you correct them, you end up with a codebase too brittle to actually be worth anything.

Friends of mine apparently don't have that problem; they say they have the LLM write enough tests to catch the brittleness early on, but I haven't tried that approach. Unfortunately, my code tends not to be very algorithmic, so it's hard to test.

jona777than · 12m ago
After 16 years of coding professionally, I can say Claude Code has made me considerably better at the things I had to bang my head against the wall to learn. For things that are novel to me and that I pick up for productivity's sake, it's been "easy come, easy go," like any other learning experience.

My two cents are:

If your goal is to learn fully, I would prioritize the slow and patient route (no matter how fast "things" are moving).

If your goal is to learn quickly, Claude Code and other AI tooling can be helpful in that regard. I have found using “ask” modes more than “agent” modes (where available) can go a long way with that. I like to generate analogies, scenarios, and mnemonic devices to help grasp new concepts.

If you're just interested in getting stuff done, get good at writing specs and letting the agents run with them, making sure to add plenty of tests along the way, of course.

I perceive there’s at least some value in all approaches, as long as we are building stuff.

yoyohello13 · 25m ago
A few years ago there was a blog post trend going around about "write your own x" instead of using a library. You learn a lot about how software works by writing your own version of a thing. Want to learn how client-side routing works? Write a client-side router. I think LLMs have basically made it so anything can be "library" code. So really it comes down to what you want to get out of the project. Do you want to get better at C#? Then you should probably do the port yourself. If you just want to have the ported code and focus on some other aspect, then have Claude do it for you.

Really if your goal is to learn something, then no matter what you do there has to be some kind of struggle. I’ve noticed whenever something feels easy, I’m usually not really learning much.

michaelcampbell · 21m ago
I'm on the tail end of my 35+ year developer career, but one thing I always do with any LLM stuff is this: I'll ask it to solve something I know I COULD solve myself; I just don't feel like it.

Example: Yesterday I was working with an OpenAPI 3.0 schema. I knew I could "fix" the schema to conform to a sample input; I just didn't feel like it because it's dull, I've done it before, and I'd learn nothing. So I asked Claude to do it, and it was fine. Then the "Example" section no longer matched the schema, so Claude wrote me a fitting example.

But the key here is I would have learned nothing by doing this.

There are, however, times where I WOULD have learned something. So whenever I find the LLM has shown me something new, I put that knowledge in my "knowledge bank". I use the Anki SRS flashcard app for that, but there are other ways: adding to your "TIL blog" (which I also do), or taking that new thing and writing it out from scratch a few times, without looking at the solution, and compiling/running it. Then trying to come up with ways this knowledge can be used differently: changing the requirements and writing that.

Basically, I'm getting my brain to interact with this new thing in at least 2 ways so it can synthesize with other things already in there. This is important.

Learning a new (spoken) language uses this a lot. Learn a new word? Put it in 3 different sentences. Learn a new phrase? Create at least 2-3 new phrases based on that.

I'm hoping this will keep my grey matter exercised enough to keep going.

adamcharnock · 1h ago
I think this is a really interesting point. I have a few thoughts as I read it (as a bit of a grey-beard).

Things are moving fast at the moment, but I think it feels even faster because of how slowly things have been moving for the last decade. I was getting into web development in the mid-to-late '90s, and I think the landscape felt similar then. Plugged-in people kinda knew the web was going to be huge, but on some level we also knew that things were going to change fast. Whatever we learnt would soon fall by the wayside and become compost for the next new thing we had to learn.

It certainly feels to me like things have really been much more stable for the last 10-15 years (YMMV).

So I guess what I'm saying is: yeah, this is actually kinda getting back to normal. At least that is how I see it, if I'm in an excitable optimistic mood.

I'd say pick something and do it. It may become brain-compost, but I think a good deep layer of compost is what will turn you into a senior developer. Hopefully that metaphor isn't too stretched!

MrDarcy · 57m ago
Earlier this year I also felt what GP expresses. I am a grey-beard now. When I was starting my career in the early 2000s, a grey-beard told me, "The tech is entirely replaced every 10 years." This was accompanied by an admonition to evolve or die in each cycle.

This has largely been true outside of some outlier fundamentals, like TCP.

I have tried Claude Code extensively and I feel it's largely the same. To GP's point, my suggestion would be to dive into the project using Claude Code and also work to learn how to structure the code better. Do both. Don't do nothing.

CuriouslyC · 36m ago
How much do you care about getting experience with C# and porting software? If that's an area you're interested in pursuing, maybe do it by hand, I guess. Otherwise I'd just use Claude.
jghn · 24m ago
Disagree entirely, and would suggest the parent intentionally dive in on things like this.

The best way to skill up over the course of one's career is to expose yourself to as broad an array of languages, techniques, paradigms, and concepts as possible. So sure, you may never touch C# again. But by spending the time to dig in a bit, you'll pick up some new ideas that you can bring forward with you to other things you *do* care about later.

jvanderbot · 1h ago
Well I think you've identified a task that should be yours. If the writing of the code itself is going to help you, then don't let AI take that help from you because of a vague need for "productivity". We all need to take time to make ourselves better at our craft, and at some point AI can't do that for you.

But I do think it could help, for example by showing you a better pattern or language or library feature after you get stuck or finish a first draft. That's not cheating; that's asking a friend.

thatfrenchguy · 35m ago
Doing the easy stuff is what gives you the skills to do the harder stuff that an LLM can't do, which arguably makes this a hard call indeed.
infecto · 59m ago
What's wrong with using Claude Code to write a possible initial iteration and then going back to review the code for understanding? Various languages and frameworks have their own footguns, but those usually are not unfixable later on.
mentos · 1h ago
Cursor has made writing C++ feel like writing a scripting language for me. I no longer wrestle with arcane error messages; they go straight into Cursor, I ask it to resolve them, and then from its solution I learn what my error was.
baq · 1h ago
As someone who has been programming computers for almost 30 years, and professionally for about 20: by all means do some of it manually, but leverage LLMs in tutor/coach mode, with "explain this but don't solve it for me" prompts when stuck. Let the tool convert the boring parts once you're confident they're truly boring.

Programming takes experience to acquire taste for what's right, what's not, and what smells bad and will bite you later but you can temporarily (yeah) choose not to care about. If you let the tool do everything for you, you won't ever acquire that skill, and it's critical for judging and reviewing your work and the work of others, including LLM slop.

I agree it's hard, and I feel lucky for never having had to make the LLM vs. manual labor choice. Nowadays it's yet another step in learning the craft, but the timing is wrong for juniors - you are now expected to do senior-level work (code reviews) from day 1. Tough!

jansan · 1h ago
It depends how you use it. You can ask Claude Code for instructions to migrate the code yourself, and it will be a teacher. Or you can ask it to create a migration plan and then execute it, in which case learning will of course be very limited. I recommend doing the conversion in smaller steps if possible. We tried to migrate a project just for fun in one single step and Claude Code failed miserably (though it thought it had done a terrific job), but doing it in smaller chunks worked out quite well.
gjfkririfif · 1h ago
Hii
jeswin · 58m ago
Claude Code is ahead of anything else, in a very noticeable way. (I've been writing my own CLI tooling for AI codegen since 2023 - and in that journey I've tried most of the options out there. It has been a big part of my work - so that's how I know.)

I agree with many things that the author is doing:

1. Monorepos can save time

2. Start with a good spec. Spend enough time on the spec. You can get AI to write most of the spec for you, if you provide a good outline.

3. Make sure you have tests from the beginning. This is the most important part. Tests (along with good specs) are how an AI agent can recurse into a good solution. TDD is back.

4. Types help (a lot!). Linters help as well. These are guard rails (see the sketch after this list).

5. Put external documentation inside project docs, for example in docs/external-deps.

6. And finally, like every tool it takes time to figure out a technique that works best for you. It's arguably easier than it was (especially with Claude Code), but there's still stuff to learn. Everyone I know has a slightly different workflow - so it's a bit like coding.
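
To make 3 and 4 concrete, here's a minimal sketch of the kind of guard rail I mean - a typed function plus a pytest-style test the agent has to keep green while it iterates. Purely illustrative names, not code from a real project:

    # Illustrative only: a typed function and a test acting as guard rails
    # that an agent must keep passing while it changes the code.
    from dataclasses import dataclass

    @dataclass
    class User:
        id: str
        roles: list[str]

    def has_role(user: User, role: str) -> bool:
        """Return True if the user has been granted the given role."""
        return role in user.roles

    def test_has_role() -> None:
        u = User(id="u1", roles=["admin"])
        assert has_role(u, "admin")
        assert not has_role(u, "editor")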

I vibe coded quite a lot this week. Among the results is Permiso [1] - a super simple GraphQL RBAC server. It's nowhere close to well tested and reviewed, but it can already be quite useful if you want something simple (and can wait until it's reviewed).

[1]: https://github.com/codespin-ai/permiso

nico · 16m ago
Agreed, for CC to work well, it needs quite a bit of structure

I’ve been working on a Django project with good tests, types and documentation. CC mostly does great, even if it needs guidance from time to time

Recently I also started a side project to try to run CC offline with local models. I got a decent first version running with the help of ChatGPT, then decided to switch to CC. CC has been constantly trying to avoid solving the most important issues, sidestepping errors, and for almost everything just creating a new file/script with a different approach (instead of fixing or refactoring the current code)

unshavedyak · 49m ago
> 2. Start with a good spec. Spend enough time on the spec. You can get AI to write most of the spec for you, if you provide a good outline.

Curious how you outline the spec, concretely. A sister markdown document? How detailed is it? etc.

> 3. Make sure you have tests from the beginning. This is the most important part. Tests (along with good specs) are how an AI agent can recurse into a good solution. TDD is back.

Ironically, I've been struggling with this. For best results I've found Claude does best with a test hook, but then it loses the ability to write tests before the code works to validate bugs/assumptions; it just starts auto-fixing things and can get a bit wonky.

It helps immensely to ensure it doesn't forget or abandon anything, but it's equally harmful at certain design/prototype stages. I've taken to having a flag where I can enable/disable the test behavior lol.

jeswin · 38m ago
> Curious how you outline the spec, concretely. A sister markdown document? How detailed is it? etc.

Yes. I write the outline in markdown and then get AI to flesh it out. Then I generate a project structure, with stubbed API signatures. Then I keep refining until I've achieved a good level of detail - including full API signatures and database schemas.
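
As a rough illustration of the level of detail I mean (made-up names, not from an actual project), a stubbed API at that stage might look like:

    # Stub-level detail: signatures, types and docstrings first, bodies later.
    from dataclasses import dataclass

    @dataclass
    class Order:
        id: str
        customer_id: str
        total_cents: int

    class OrderService:
        def create_order(self, customer_id: str, total_cents: int) -> Order:
            """Create and persist a new order for the customer."""
            raise NotImplementedError

        def get_order(self, order_id: str) -> Order | None:
            """Fetch a single order, or None if it does not exist."""
            raise NotImplementedError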

> Ironically i've been struggling with this. For best results i've found claude to do best with a test hook, but then claude loses the ability to write tests before code works to validate bugs/assumptions, it just starts auto fixing things and can get a bit wonky.

I generate a somewhat basic prototype first. At which point I have a good spec, and a good project structure, API and db schemas. Then continuously refine the tests and code. Like I was saying, types and linting are also very helpful.

qaq · 53m ago
Another really nice use case: building very sophisticated test tooling. Normally a company might not allocate enough resources to a task like that, but with Claude Code it's a no-brainer. It can also create very sophisticated mocks - say, a db mock that can parse all the queries in the codebase and apply them to in-memory fake tables. That would be a total pain to build and maintain by hand, but with Claude Code it takes literally minutes.
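
For a rough sense of the shape, here is a much-simplified sketch (not the actual tooling described above, just the core idea): an in-memory SQLite database standing in for the real one, so test queries run against fake tables that exist only for the duration of the test.

    # Simplified sketch: an in-memory database providing the "fake tables"
    # that a test tooling layer could route queries to.
    import sqlite3

    def make_fake_db() -> sqlite3.Connection:
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id TEXT PRIMARY KEY, name TEXT)")
        conn.execute("INSERT INTO users VALUES ('u1', 'Ada')")
        return conn

    def test_lookup_runs_against_fake_tables() -> None:
        conn = make_fake_db()
        row = conn.execute(
            "SELECT name FROM users WHERE id = ?", ("u1",)
        ).fetchone()
        assert row == ("Ada",)
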
airstrike · 48m ago
In my experience LLMs are notoriously bad at tests, so this is, to me, one of the worst use cases possible.
qaq · 22m ago
In my experience they are great for test tooling. For the actual tests, after I have covered a number of cases it's very workable to tell it to identify gaps and edge cases and propose tests; then I'd say I accept about 70% of its suggestions.
delduca · 1h ago
My opinion on Claude, as a ChatGPT user:

It feels like ChatGPT on cocaine. I mean, I asked for a small change and it came back with 5 solutions, changing my whole codebase.

stavros · 2m ago
Was it Sonnet or Opus? I've found that Sonnet will just change a few small things, Opus will go and do big bang changes.

YMMV, though, maybe it's the way I was prompting it. Try using Plan Mode and having it only make small changes.

crop_rotation · 4m ago
Is this an opinion on Claude Code or Claude the model?
iamsaitam · 47m ago
I'm NOT saying it is, but without regulatory agencies having a look, or it being open source, this might well be working as intended, since Anthropic makes more money out of it.
slackpad · 1h ago
Really agree with the author's thoughts on maintenance here. I've run into a ton of cases where I would have written a TODO or made a ticket to capture some refactoring and instead just knocked it out right then with Claude. I've also used Claude to quickly try out a refactoring idea and then abandoned it because I didn't like how it came out. It really lowers the activation energy for these kinds of maintenance things.

Letting Claude rest was a great point in the article, too. I easily get manifold value compared to what I pay, so I haven't got it grinding on its own on a bunch of things in parallel and offline. I think it could quickly be an accelerator for burnout and cruft if you aren't careful, so I keep to a supervised-by-human mode.

Wrote up some more thoughts a few weeks ago at https://www.modulecollective.com/posts/agent-assisted-coding....

qaq · 40m ago
For me the real limit is the amount of code I can read and lucidly understand well enough to spot issues in a given day.
iwontberude · 1h ago
I stopped writing as much code because of RSI and carpal tunnel, but Claude has given me a way to program without pain (or perhaps an order of magnitude less pain). As much as I wanted to reject it, I am literally going to need it to continue my career.
iaw · 1h ago
Now that you point this out, since I started using Claude my RSI pain is virtually non-existent. There is so much boilerplate and repetitive work taken out when Claude can hit 90% of the mark.

Especially with very precise language. I've heard of people using speech-to-text with it, which opens up all sorts of accessibility windows.

flappyeagle · 1h ago
Are you using dictation for text entry?
iwontberude · 55m ago
Great suggestion! I will be now :)
cooperaustinj · 25m ago
Superwhisper is great. It's closed source, however. There may be other comparable open source options available now. I'd suggest trying Superwhisper so you know what's possible, and maybe compare to open source options after. Superwhisper runs locally and has a one-time purchase option, which makes it acceptable to me.
MuffinFlavored · 1h ago
I think Claude Code is great, but I really grew accustomed to the "Cursor-tab tab tab" autocomplete style. I'm a little perplexed why the Claude Code integration into VS Code doesn't add something like this - it would make it the perfect product to me. I'm surprised more people don't talk about this, or that it isn't a more commonly requested feature.
infecto · 56m ago
Agree. I used Claude Code a bit and enjoyed it, but I also felt like I was too disconnected from the changes - I guess too much vibe coding?

Cursor is a nice balance for me still. I am automating a lot of the writing but it’s still bite size pieces that feel easier to review.

jansan · 1h ago
A lot of what the author achieved with Claude Code is migrating or refactoring code. To me, who started using Claude Code just two weeks ago, this seems to be one of its real strengths at the moment. We have a large business app that uses an abandoned component library and contains a lot of cruft. Migrating to another component library seemed next to impossible, but with Claude Code the whole process took me just about one week. It makes mistakes (non-matching tags, for example), but with some human oversight we reached the first goal. The next goal is removing as much cruft as possible, so that working on the app becomes possible or even fun again.

I remember when JetBrains made programming so much easier with their refactoring tools in IntelliJ IDEA. To me (with very limited AI experience) this seems to be a similar step, but bigger.

zkry · 1h ago
On the other hand, though, automated refactorings like those in IntelliJ scale practically infinitely, are extremely low cost, and are guaranteed to never make any mistakes.

Not saying this is more useful per se, just saying that different approaches have their pros and cons.