Ask HN: What is your ultimate AI-assisted coding setup?
6 points by nico · 6/12/2025, 7:50:35 PM · 13 comments
What is the best setup for AI-assisted coding, including: IDE (Cursor, VSCode, Windsurf, Zed, etc.), AI assistant/LLM (Claude, Gemini 2.5 Pro, OpenAI, DeepSeek), MCP, rulesets, extensions, tools, workflow, and anything else?
And what kind of project are you building/maintaining with it (framework, language, deployment infra, approx. # of users)?
It seems like there are a lot of options out there, and many look very similar, perhaps just differing in style.
Bolt will do app-wide changes. For example, I'm currently working on a character generator and want it to collect personality traits after gender but before clothing. Bolt adds the new screen, increments the step count, and updates the final screen to reflect the new data type.
Cursor does the engineering very well. I think it's become common recently to have a shared markdown file that different agents can access.
Gemini Pro has the huge context window but tends to make edits you didn't ask for, like simplifying your instructions and dropping some of the important ones. Claude Sonnet 4 is just nice for most tasks. o3 is the most precise by far, but low on creativity. Visually, GPT-4.1 seems to be the best: if you have a Figma design, you can just screenshot it and paste it in.
Do have a cheat sheet ready for the AI: tell it how to open modals and where to get the fonts, colors, and icons.
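A minimal sketch of what such a cheat sheet might look like; every file name, function, and token below is illustrative, not taken from any real project:

```markdown
<!-- Illustrative cheat sheet for AI agents; all names and paths are made up -->
# UI Cheat Sheet

## Modals
- Open modals via `openModal(id)` from `src/ui/modal.ts`; never create raw `<dialog>` elements.

## Fonts
- Headings: Inter Bold; body: Inter Regular. Both load from `src/styles/fonts.css`.

## Colors
- Use the tokens in `src/styles/tokens.css` (e.g. `--color-primary`); no hard-coded hex values.

## Icons
- Import from `src/assets/icons/`; do not add new icon libraries.
```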
I know some people are able to make this fully automated, but I don't trust it that far. It would be an interesting exercise to see how well we can write a PRD, though; then it becomes an exercise in commanding genies.
Since then I've noticed this triple working really well:
1. Cursor in Ask mode for my main track of development
2. o3 for learning about or checking up on approaches when more rigor is needed
3. Delegating parallel tasks to Claude Code on a separate machine, for:
* small work I can describe in under three sentences (see the example brief after this list)
* parallel debugging or explanation runs while I'm trying to figure something out myself, to cross-check my work or preemptively surface new debugging ideas
* starting on bigger work that I'll mostly need to throw away and redo myself, but that I'd otherwise procrastinate on
Putting delegations somewhere else is key, because I don't want my local environment polluted by a separate task.
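For the first kind of delegation, a brief can be as short as this (the file, behavior, and task are all hypothetical):

```markdown
<!-- Hypothetical three-sentence delegation brief -->
The retry loop in `sync_worker.py` currently retries on 4xx responses, which it shouldn't.
Change it to retry only on 5xx and network errors, keeping the existing backoff.
Add a regression test next to the existing retry tests.
```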
What kind of code base are you using this setup on?
The code base is weird, so maybe my situation is too unique to generalize. Overall it's systems engineering across tens of OSS repos in 3-4 languages, with a decade of abstractions, operating a biggish open system at scale. A strange mix of slow, in-depth, critical maintenance of the system and fast, shoddy prototyping on top of it.
1. Write a fully-defined requirements .md file for what you want built.
2. Submit that requirements .md file to the LLM by drag-and-drop. If there are starting files to be modified, drag and drop those as well (as a zip if hierarchical, or as loose files if a flat structure is suitable).
3. Wait for the LLM to be done. This can take as long as it needs, since the process supports specification files of arbitrary complexity. Receive the correct, complete output fileset as a download from the LLM. (If I asked for instructions on how to use the files, those are included in the output fileset, too.)
Oh wait, this does not exist. Sorry! But it is what I want, so it is still my "ultimate AI-assisted coding setup".
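The spec half is at least writable today; a fully-defined requirements .md in this spirit might start like this (the project and every detail are invented for illustration):

```markdown
<!-- Hypothetical requirements.md; project and all details are invented -->
# Requirements: CSV de-duplication CLI

## Behavior
- Read a CSV file path as the first argument; write de-duplicated rows to stdout.
- Two rows are duplicates when the `email` column matches case-insensitively.
- Keep the first occurrence; preserve row order and the header.

## Constraints
- Python 3.11, standard library only, single file `dedupe.py`.
- Exit with code 2 and a message on stderr if the `email` column is missing.

## Deliverables
- `dedupe.py` plus a `README.md` describing usage.
```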
Combine it with Cursor and you don't have to drag and drop or download anything.
I just set up an architecture.md for it to navigate. With Cursor and something like Gemini Pro, you can even have the AI do the work and update the docs once it's done. I also keep a roadmap.md so I can remember where I left off, which helps a lot when you return to a project after a month.
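A minimal sketch of what those two files might contain (structure and contents are illustrative, not from the commenter's project):

```markdown
<!-- Illustrative skeletons; adapt to the actual project -->
# architecture.md
- `src/api/`: HTTP handlers; no business logic here.
- `src/core/`: domain logic; pure functions, no I/O.
- `src/store/`: persistence; the only layer allowed to touch the database.

# roadmap.md
- [x] User auth (done)
- [ ] Import pipeline (NEXT: wire the parser into `src/core/import.ts`)
- [ ] Billing (blocked on provider choice)
```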
In my experience, a lot of details always emerge during development that require a person's involvement. Agents also tend to choke on long instructions; they are usually better with short instructions and very well-defined context.
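To make the contrast concrete, here are two invented prompts for the same change:

```markdown
<!-- Invented prompts, for contrast only -->
Too long: "Refactor the auth module, add OAuth, update the docs, fix any
failing tests you notice, and modernize the styling while you're at it."

Better: "In `src/auth/session.ts`, change session expiry from 24h to 72h.
Update the one test that asserts the old value. Touch nothing else."
```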
Please, if anyone can name or link just one such tool that actually satisfies what I described, I would be very appreciative.
I think the closest I've seen is Aider or even ChatGPT, but I don't think they actually meet these requirements.
As far as I have seen, correctly building anything above a moderate threshold of complexity still requires a human in the loop, whether with a bare LLM or a larger LLM-based system.
I am willing to put in the work. If you really think what I described exists, please nudge me toward what to explore; I am very eager to dig in. Point me there. Thank you, really.
We're not there yet. "AI" is not intelligent. It's "artificially" generated content based on an out-of-date snapshot of a specific scope, which can be useful but is not intelligent.
> anything else?
Anonymous and local-only, please, i.e. Llama-3.2-3B-Instruct.Q6_K.llamafile or whatever the latest version is.
> And what kind of project are you building/maintaining with it (framework, language, deployment infra, approx. # of users)?
I use it as not much more than a "use it at your own risk" Stack Overflow / Google search. It is often wrong, but it is "good enough." I treat any generated code as "sample code": it requires review and is merely a sample.