Ask HN: Is AI 'context switching' exhausting?
6 points by interstice | 8 comments | 6/19/2025, 12:44:51 AM
I've always had this distinct struggle when switching in and out of being 'in charge'; the best example I can think of is the difference between a driver's and a passenger's awareness of the road.
Using AI for code has reminded me of this sensation: switching in and out of 'driving' feels more exhausting than being 100% one or the other. I have a theory that enforced reduced engagement has all sorts of side effects, whatever the format.
Wondering if anyone else has run into this feeling, and if so, have you tried anything successfully to address it?
I believe the issue isn't just "context switching" in the traditional OS sense. It's "Cognitive Mode Switching" – the mental gear-shifting between being a creator and a delegator. That's the draining part.
My theory is that this exhaustion stems from a fundamental design flaw in how most AI tools are currently implemented. They are designed as "wizards" or separate "destinations." You have to consciously:
Stop "driving" your primary task (coding, writing, designing). Get out of your car, so to speak. Go to the AI tool and ask for directions. Get back in the car and try to re-engage as the driver, holding the new directions in your head. This constant mode-switching breaks the state of flow. The passenger (the AI) isn't looking at the same road you are; you have to explain the road to them every single time.
At my startup, Markhub, we are obsessed with solving this exact problem. Our core principle is that AI shouldn't be a passenger you delegate to, but a co-pilot integrated into your cockpit.
Our approach is to design AI (we call ours MAKi) as an ambient, context-aware layer within the primary workspace. The goal is to eliminate the 'switch' altogether. For example, our AI listens to team conversations and proactively suggests turning a message into a task, right there, inline. You never stop driving; your "car" just gets smarter and surfaces the right controls at the right time.
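To make that concrete, here's roughly the shape of that inline hook (the names below are simplified illustrations for this comment, not our actual API):

    // Rough sketch only; simplified names, not the real Markhub/MAKi API.
    interface Message { id: string; author: string; text: string; }
    interface TaskSuggestion { title: string; assignee?: string; }

    // Stand-in for whatever model call decides a message is actionable.
    async function detectTask(msg: Message): Promise<TaskSuggestion | null> {
      const looksActionable = /\b(can you|please|by friday|todo)\b/i.test(msg.text);
      return looksActionable
        ? { title: msg.text.slice(0, 80), assignee: msg.author }
        : null;
    }

    // Key design point: the suggestion renders next to the message itself,
    // so the user never leaves the conversation to go ask the AI.
    async function onMessage(
      msg: Message,
      renderInline: (s: TaskSuggestion) => void,
    ): Promise<void> {
      const suggestion = await detectTask(msg);
      if (suggestion) renderInline(suggestion);
    }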
So, to answer your question: Yes, we've felt this deeply. And our solution is to stop thinking of AI as a tool to switch to, and start designing it as an integrated system that removes the need for the switch in the first place. Keep the user 100% in the driver's seat, just in a much, much better vehicle.
“Using AI for code has reminded me of this sensation… switching in and out of “driving” feels more exhausting than being 100% one or the other.”

You're not imagining it. There's a fair bit of cognitive-science literature on task-set inertia: every time you hand work off (human→AI or AI→human) you pay ~100–150 ms to reconstruct the mental model, plus an exponentially longer "resumption lag" if the state is ambiguous.¹ Do that dozens of times per hour and you've effectively added a stealth meeting to your day.

A few things that helped me when pairing with an LLM:

• Chunk bigger. Treat the AI like a junior dev on 30-minute sprint tickets, not a rubber duck you ping every two lines.

• Use "state headers." I prepend a tiny recap in comments — // you own: parse(), I own: validate() — so I can scan and re-hydrate context instantly (sketch at the end of this comment).

• Declare no-AI zones. Sounds counter-intuitive, but reserving, say, test-writing for uninterrupted solo focus keeps me in flow longer overall.

“…have you tried anything successfully to address it?”

We were annoyed enough to build something. At Recontext (YC S24) we sit between your editor and whatever LLM you're using; every AI request is automatically tagged with the diff, dependency graph, and TODO items, so when you jump back in you get a one-glance briefing instead of spelunking through scrollback. Early users report ~40% fewer context switches during a coding session. If anyone wants to kick the tires, we're handing out private beta invites — email is in profile.

⸻

¹ See Monsell, "Task switching," Trends in Cognitive Sciences, 2003 — the "switch cost" math is sobering.
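Edit: a made-up example of the "state header" convention from the second bullet, assuming a TypeScript codebase (the file and both functions are invented purely for illustration):

    // --- state header (convention only; this file and its functions are made up) ---
    // you own: parse()      <- last touched by the AI, review before trusting
    // I own:   validate()   <- hand-written, keep the agent out of it
    // next:    wire validate() into the submit handler
    // --------------------------------------------------------------------------------

    export function parse(raw: string): Record<string, string> {
      // e.g. "email=a@b.com&name=Ann" -> { email: "a@b.com", name: "Ann" }
      return Object.fromEntries(
        raw.split("&").map((kv) => kv.split("=") as [string, string]),
      );
    }

    export function validate(fields: Record<string, string>): boolean {
      return Boolean(fields.email && fields.email.includes("@"));
    }

The header costs a few seconds to update but means I can re-hydrate the ownership picture at a glance instead of rereading the diff.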
I can't stand the more complex "agents" like Junie that go off on a chain of thought, post an update every 30 seconds or so, and then 10 minutes later hand me something that's occasionally useful but more often somewhere between horribly wrong and not even wrong.
Definitely felt some burnout or dumbness after it, trying to get back into thinking for myself and actually writing code.
I think it's like gambling: you're sort of chasing an ideal result that feels close but never arrives. That's where the exhaustion comes from, imo, much more than switching from manager to IC, which I don't find tiring. I think it's more a dopamine withdrawal than context switching.
At the moment I'm in a bit of a tick-tock cycle: I'll vibe-code up to a point, get frustrated when the AI gets stuck on something I could fix myself in a minute or two, then switch off using AI entirely for a while until I get bored of the boilerplate, and repeat.