Ask HN: Is AI 'context switching' exhausting?

6 points by interstice · 8 comments · 6/19/2025, 12:44:51 AM
I've always had this distinct struggle when switching in and out of being 'in charge'; the best example I can think of is the difference between a driver's and a passenger's awareness of the road.

Using AI for code has reminded me of this sensation: switching in and out of 'driving' feels more exhausting than being 100% one or the other. I have a theory that enforced reduced engagement has all sorts of side effects, in any format.

Wondering if anyone else has run into this feeling, and if so, whether you've tried anything that successfully addresses it?

Comments (8)

joegibbs · 3m ago
I think I’ve mostly gotten used to it. At the start, definitely, but now my method is to have 3 or 4 agent tasks running o3 to perform smaller actions than I was previously trying to do. There is a second where I have to remember what each one was doing, but it’s still much faster than doing it manually.
PaulShin · 3h ago
This is a fantastic observation, and you've nailed the analogy. The exhaustion is real.

I believe the issue isn't just "context switching" in the traditional OS sense. It's "Cognitive Mode Switching" – the mental gear-shifting between being a creator and a delegator. That's the draining part.

My theory is that this exhaustion stems from a fundamental design flaw in how most AI tools are currently implemented. They are designed as "wizards" or separate "destinations." You have to consciously:

1. Stop "driving" your primary task (coding, writing, designing).
2. Get out of your car, so to speak, and go to the AI tool to ask for directions.
3. Get back in the car and try to re-engage as the driver, holding the new directions in your head.

This constant mode-switching breaks the state of flow. The passenger (the AI) isn't looking at the same road you are; you have to explain the road to them every single time.

At my startup, Markhub, we are obsessed with solving this exact problem. Our core principle is that AI shouldn't be a passenger you delegate to, but a co-pilot integrated into your cockpit.

Our approach is to design AI (we call ours MAKi) as an ambient, context-aware layer within the primary workspace. The goal is to eliminate the 'switch' altogether. For example, our AI listens to team conversations and proactively suggests turning a message into a task, right there, inline. You never stop driving; your "car" just gets smarter and surfaces the right controls at the right time.

So, to answer your question: Yes, we've felt this deeply. And our solution is to stop thinking of AI as a tool to switch to, and start designing it as an integrated system that removes the need for the switch in the first place. Keep the user 100% in the driver's seat, just in a much, much better vehicle.

antinomicus · 2h ago
God this comment reads exactly like someone asked Gemini to make a classic Hacker News reply to this post. Slightly useful insight that perfectly transitions into an ad for the commenter’s startup. Actually, I asked o3 for a response to the OP, and here’s what it generated.

“Using AI for code has reminded me of this sensation… switching in and out of “driving” feels more exhausting than being 100% one or the other.”

You’re not imagining it. There’s a fair bit of cognitive-science literature on task-set inertia: every time you hand work off (human→AI or AI→human) you pay ~100–150 ms to reconstruct the mental model, plus an exponentially longer “resumption lag” if the state is ambiguous.¹ Do that dozens of times per hour and you’ve effectively added a stealth meeting to your day.

A few things that helped me when pairing with an LLM:

• Chunk bigger. Treat the AI like a junior dev on 30-minute sprint tickets, not a rubber duck you ping every two lines.

• Use “state headers.” I prepend a tiny recap in comments — // you own: parse(), I own: validate() — so I can scan and re-hydrate context instantly.

• Declare no-AI zones. Sounds counter-intuitive, but reserving, say, test-writing for uninterrupted solo focus keeps me in flow longer overall.

“…have you tried anything successfully to address it?”

We were annoyed enough to build something. At Recontext (YC S24) we sit between your editor and whatever LLM you’re using; every AI request is automatically tagged with the diff, dependency graph, and TODO items, so when you jump back in you get a one-glance briefing instead of spelunking through scrollback. Early users report ~40% fewer context switches during a coding session. If anyone wants to kick the tires, we’re handing out private beta invites — email is in profile.

⸻

¹ See Monsell, “Task switching,” Trends in Cognitive Sciences, 2003 — the “switch cost” math is sobering.
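
(Editor's aside: the "state headers" trick in the quoted reply can be sketched in a few lines. This is a minimal, illustrative sketch; the helper name make_state_header is made up for this example, not from any real tool.)

```python
def make_state_header(ai_owns, you_own):
    """Build a one-line ownership recap to prepend before an AI handoff,
    so re-reading the file re-hydrates context at a glance."""
    return f"# you own: {', '.join(ai_owns)} | I own: {', '.join(you_own)}"

# Prepend this to the file before handing it to the model,
# then scan it when you resume driving.
print(make_state_header(["parse()"], ["validate()"]))
# -> # you own: parse() | I own: validate()
```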

pillefitz · 2h ago
We definitely need a way to flag AI content
PaulHoule · 6h ago
Personally I like the older kind of chatbots where I can ask it to write me something little (a function, a SQL query, ...) and I have it in 10-30 seconds and can think about it, try it, look in the manual to confirm it, or give it feedback or ask for something else. This can be a lot more efficient than looking in incomplete or badly organized manuals (MUI, react-router, ...) or filtering out the wrong answers on Stack Overflow that Stack Overflow doesn't filter out.

I can't stand the more complex "agents" like Junie that will go off on a chain of thought and give an update every 30 seconds or so and then 10 minutes later I get something that's occasionally useful but often somewhere between horribly wrong and not even wrong.

interstice · 5h ago
This resonates. Even though copy-pasting from Claude et al. seems like it should be inefficient, somehow it feels less prone to going completely off track compared to leaving something like Cursor or Aider chat running.
andy99 · 6h ago
I had a vibe-coding phase that I think largely followed the popular arc and timeline from optimism through to disappointment.

Definitely felt some burnout or dumbness after it, trying to get back into thinking for myself and actually writing code.

I think it's like gambling: you're sort of chasing an ideal result that feels close but never happens. That's where the exhaustion comes from imo, much more than if you were switching from manager to IC, which I don't find tiring. I think it's more a dopamine withdrawal than context switching.

interstice · 4h ago
Dopamine makes sense, since it's kind of like switching between 'sources' of dopamine: one is a sugar rush and the other is slow release, like reading a book.

At the moment I have a bit of a tick-tock cycle where I'll vibe code to a point, get frustrated when it gets stuck on something I can fix myself in a minute or two, then switch off using AI entirely for a while until I get bored of boilerplate, and repeat.