Ask HN: Do you struggle with flow state when using AI assisted coding tools?
50 rasca 47 8/6/2025, 1:08:01 PM
It's been extremely difficult for me to achieve a flow state while using tools like `claude code` because I have to wait after every interaction. I get easily distracted, my mind wanders, and I find myself reading HN and browsing the internet.
I'm more productive at most of the tasks I need to do, but on some of these detours I lose long stretches of time without even noticing. I've tried keeping the console open and reading through the AI agent's process, but that makes me nervous after a few interactions.
I also don't enjoy it as much. I don't get the feeling of accomplishment after finishing a new feature and everything feels fragmented.
Even using multiple sessions doesn't do the trick for me because I need to change task context every time. Does this happen to anyone else? Any recommendations?
How do you think we can achieve flow state during this transition period, while AI coding still needs constant hand-holding and review?
Bootstrapping something like a new project? Yeah, there things can be too fast for me to stay in the flow.
But my day-to-day is about working with already existing code which I have to modify, and here AI is exactly the opposite: it helps me drill down into boring legacy code. It helps me stay in the flow because I can ask questions about the code. I can ask it where the code that does X lives, and how it currently works. I can describe my issue, let it analyze the code, and ask it to propose solutions. Then I discuss the options, let it implement the one "we" agreed upon, and finally review and discuss the solution.
In fact, I stay in flow because I don't feel alone with my issues and my questions. Maybe that says more about my work environment, where I cannot pair as much as I'd want to, but at least I have Cursor/Claude/whatever.
However, skipping that continuous interaction and hoping the AI will solve your Jira ticket in one prompt is going to be a disaster for your focus, and you won't trust the result enough.
Overall, it makes my job less miserable when I'm doing boring things.
I think OP is saying it's too slow to stay in the flow. That's what I feel personally. The AI thinks for a long time, so I go do something else.
AI is taking all the fun jobs, and we're left with the boring work!
My best "hack" for this has been to use Freedom[1] to create a blocklist of all my go-to time sucks (including, sadly, HN). This at least stops me from getting pulled in too deeply.
[1] http://freedom.to/
Most of us are doing relatively simple things with the largest/slowest models possible instead of downgrading to a smaller model. Reason: our tools lack the ability to escalate to more expensive models as needed, and switching models manually is tedious. Ideally, a cheap locally running model would be first in line and would respond quickly for quick things.
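A sketch of what that escalation could look like (everything here is hypothetical: the two model functions and the `looksUnsure` heuristic are my assumptions, since no current tool exposes exactly this):

```javascript
// Hypothetical model router: answer with the cheap/fast model first and
// escalate to the expensive one only when the reply looks unsure.
// Both model functions and the heuristic are stand-ins, not real APIs.

async function routedAnswer(prompt, cheapModel, expensiveModel, looksUnsure) {
  const quick = await cheapModel(prompt); // fast, possibly local, first in line
  if (!looksUnsure(quick)) {
    return { model: "cheap", text: quick };
  }
  // Escalate: pay the latency/cost only when the cheap answer isn't good enough.
  const slow = await expensiveModel(prompt);
  return { model: "expensive", text: slow };
}
```

The heuristic is the hard part in practice; hedging phrases, low logprobs, or a failing test run could all feed into it.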
And then there's the whole asynchronous vs. synchronous thing. Codex runs in the browser and lets you create a pull request whenever it's done. You can even work on multiple pull requests with it, and it may run them in parallel.
What's good about the Codex experience is that your input is only needed at the end. What's bad is that it takes ages, even for simple stuff. A simple follow-up question results in it boiling the ocean to start up the container again.
Slow AI is exactly like slow builds: frustrating and likely to distract you. If it's going to take a minute, you're going to do something else, and that might not be work related. So it breaks your flow: you're sitting on your hands and filling your short-term memory with garbage. Context switches are expensive (and break flow); our brains don't handle them well. And then you forget to switch back, so you lose time.
I don't use reasoning models that much for this reason. It's easier and faster for me to manually patch up my code with whatever the LLM says I should fix. And on larger repositories the chance of a good PR drops sharply. So, even if it takes more context micromanagement to feed it all the detail it needs, this can be faster and more effective. And I get an answer in a few seconds instead of in a few minutes.
This is Claude Sonnet 4
Agent mode just produces dogshit that takes me longer to clean than if I wrote it myself to begin with. I don't use that.
Contrary to popular belief, you won't be left behind if you don't use these "agents". If they ever get good enough that they don't need babysitting anymore and make you 10x more productive, you can simply adopt them then. Any current prompting skillset won't stay relevant.
Though I will caveat that I work on more niche subjects, not frontend, so my perception of their usefulness may be skewed.
I keep a notepad open with my ideas and thoughts while the AI is working and reading the code. I write down my thoughts and use them for my next prompt.
Unrelated, but I also noticed that using a lot of AI for coding often leads to over-engineered solutions. Instead of fixing the backend to return a proper audio format, it suggests bringing FFmpeg into the browser and transcoding there, for instance. It's important to be aware of this and keep asking the AI if there are better ways to do things.
Highly opinionated: Regarding using agents for a main project or feature, as some do, I don't think there is such a thing as a "flow state." If you adhere to that working strategy, your hypothesis is that the agent is so much faster than you that you can get more done even without a flow state, even if the quality is a little worse, and your job has fundamentally changed from programmer to code reviewer. That will have its own set of skills for you to develop, such as efficiently reading their changes, managing costs and context, writing good prompts, etc.
Feels wrong but it's the closest I got.
Then it costs me a lot of energy to get back to working on the problem and fix all the mistakes the AI made.
A hard bug used to be frustrating, but also a bit challenging.
Not anymore, really.
On the pleasure side, I can get close to the same feeling with a design-and-review cycle going while Claude checks in, but it's not as fun. I use an inline code assistant for the "interesting" parts while Claude does all the boring scaffolding.
I think of it like any industrialization process: being a cabinet maker was meditative; overseeing the IKEA flat-pack CNC is not.
https://www.tabulamag.com/p/too-fast-to-think-the-hidden-fat...
There's simply no "flow state" when you're not the one doing the work, not the one reasoning.
I have loved almost every minute of my 20+ years as a developer, and the biggest thing agentic coding has achieved is suck the fun out of it.
I am bored for the first time in my career.
When I have a project that has tons of forms that require complex validation, it's the little thrill of thinking of a way to do it that saves time and is reusable.
Maybe I'll create a js proxy object and a generic "change" event handler. In the "change" handler I'll assign a value to the proxy under the name of the form component. Maybe this proxy object is smart enough to issue batches of JSONPatch mutations to the server, which handles validation, biz logic, etc... so I end up with this neat, elegant little solution to a tedious problem. I get a thrill of satisfaction when it all works great, and the joy of problem solving to create and debug it.
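For what it's worth, a rough sketch of that idea (purely illustrative; the `sendPatch` transport and the per-tick batching policy are my own assumptions, not the actual solution described):

```javascript
// Illustrative sketch only: a Proxy-backed form state that queues RFC 6902
// JSON Patch ops on every assignment and flushes them in batches.
// `sendPatch` (how a batch reaches the server) is an assumed callback.

function createFormState(initial, sendPatch) {
  let batch = [];
  let flushScheduled = false;

  const flush = () => {
    if (batch.length > 0) {
      sendPatch(batch); // e.g. POST the batch to a form endpoint
      batch = [];
    }
    flushScheduled = false;
  };

  return new Proxy({ ...initial }, {
    set(target, prop, value) {
      target[prop] = value;
      // One "replace" op per field mutation; batched per microtask tick.
      batch.push({ op: "replace", path: "/" + String(prop), value });
      if (!flushScheduled) {
        flushScheduled = true;
        queueMicrotask(flush);
      }
      return true;
    },
  });
}

// The generic "change" handler then just assigns by field name:
//   form.addEventListener("change", e => {
//     state[e.target.name] = e.target.value;
//   });
```

Several fields changed in the same tick collapse into one batch, which is what makes the server-side validation round-trips cheap.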
Exactly zero out of a million runs would an AI come up with something like that. It's a stochastic parrot.
But it'll happily chug along and generate 10,000 lines of repetitive forms code that mostly work, and done in the most tedious way imaginable.
But I guess it's more "productive" than me.
Each time I run in to a tedious or boring problem, I take joy in pushing the boundaries of my understanding to come up with a better way of solving it.
AI coding kills that completely.
I've had a few instances where I've been able to go into flow state with 3 concurrent sessions on fully disjoint projects. Feels like context switching is easier when there's almost zero cognitive overlap between the projects in terms of actual features, but the stacks are all similar.
But for sure I haven't even come closer to these glorious flow sessions fully locked into coding some clever bit of logic.
So, here for the suggestions too.
I love building things, trying things, finding the best result that checks all the boxes: conventions, quality, etc. Not having to hand-write the code is a pure net positive for me. I can discuss architecture, interfaces, data flow, data models and so on, and I can let it quickly onboard me and tell me where we left off. It's great.
If everything feels fragmented after completing a feature, then you haven't driven the agent properly. You really want to yell at it from multiple angles until the output fits a narrow hole that you specify.
This is a challenge to conquer irrespective of whether you are using AI-assisted code editors.
But when vibe coding, this happens constantly, at short intervals.
https://xkcd.com/303/
If you stop and review what an agent did, it breaks flow.
Personally, my experience has been the best with doing this:
- Start coding
- On a second laptop, run an agent in yolo and tell it to periodically pull changes and if there are any `AGENT-TODO:` items in the code, do them.
- As you code, if you find something irritating or boring, `AGENT-TODO: ...` it.
- Periodically push your changes.
- Periodically pull any changes your agent has pushed down.
- Keep working; don't stop and check on the agent. Don't confirm every action. Just yolo.
If that's too scary, have it put up PRs instead of pushing to the live branch. /shrug
...but, tldr: if you're sitting there watching an agent do things, you're not in flow. If you're kicking multiple agents off and sitting waiting for them, you're more productive, but that is absolutely not flow state.
Anyone who thinks they are, doesn't know what flow state is.
The key to maintaining flow is having a second clone, or a second machine or something where you can keep doing work after you kick the agent off to do something.
(yeah, you don't need a second laptop, but it's nice; agents will often run things to check they work or steal ports or run tests that can screw with you if you're on the same machine)