Show HN: Async – Claude Code and Linear and GitHub PRs in One Opinionated Tool

10 points by wjsekfghks | 7 comments | 8/25/2025, 1:21:19 PM | github.com
Hi, I’m Mikkel and I’m building Async, an open-source developer tool that combines AI coding with task management and code review.

What Async does:

  - Automatically researches coding tasks, asks clarifying questions, then executes code changes in the cloud
  - Breaks work into reviewable subtasks with stacked diffs for easier code review
  - Handles the full workflow from issue to merged PR without leaving the app
Demo here: https://youtu.be/98k42b8GF4s?si=Azf3FIWAbpsXxk3_

I’ve been working as a developer for over a decade now. I’ve tried all sorts of AI tools out there including Cline, Cursor, Claude Code, Kiro and more. All are pretty amazing for bootstrapping new projects. But most of my work is iterating on existing codebases where I can't break things, and that's where the magic breaks down. None of these tools work well on mature codebases.

The problems I kept running into:

  - I'm lazy. My Claude Code workflow became: throw a vague prompt like "turn issues into tasks in GitHub webhook," let it run and build the wrong thing, then iterate until I realize I could've just coded it myself. Claude Code's docs say to plan first, but it's not enforced and I can't force myself to do it.
  - Context switching hell. I started using Claude Code asynchronously: give it edit permissions, let it run, alt-tab to work on something else, then come back later to review. But when I return, I have to reconstruct what the task was about, context switch back, and iterate. The mental overhead kills any productivity gains.
  - Tracking sucks. I use Apple Notes with bullet points to track tasks, but it's messy. Just like many other developers, I hate PM tools but need some way to stay organized without the bloat.
  - Review bottleneck. I've never shipped Claude Code output without fixes, at minimum stylistic changes (why does it always add comments even when I tell it not to?). The review/test cycle caps me at maybe 3 concurrent tasks.
So I built Async:

  - Forces upfront planning, always asks clarifying questions and requires confirmation before executing
  - Simple task tracking that imports GitHub issues automatically (other integrations coming soon!)
  - Executes in the cloud, breaks work into subtasks, creates commits, opens PRs
  - Built-in code review with stacked diffs - comment and iterate without leaving the app
  - Works on desktop and mobile
It works by using a lightweight research agent to scope out tasks and come up with requirements and clarifying questions as needed (e.g., "fix the truncation issue" - "Would you like a tooltip on hover?"). After you confirm requirements, it executes the task by breaking it down into subtasks and then working commit by commit. It uses a mix of Gemini and Claude Code internally and runs all changes in the background in the cloud.
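
A rough sketch of that flow is below; the function names, stages, and return types are hypothetical illustrations of the described workflow, not Async's actual internals:

    # Hypothetical sketch of the research -> confirm -> execute flow described
    # above. Names are illustrative only, not Async's internals.
    from dataclasses import dataclass

    @dataclass
    class Subtask:
        description: str

    def research(task: str) -> tuple[list[str], list[str]]:
        """Lightweight research pass: derive requirements and clarifying questions."""
        return [f"requirement derived from: {task}"], ["Would you like a tooltip on hover?"]

    def plan_subtasks(task: str, requirements: list[str]) -> list[Subtask]:
        """Break the confirmed task into small, reviewable subtasks."""
        return [Subtask(f"step {i + 1} of: {task}") for i in range(2)]

    def execute(subtask: Subtask) -> str:
        """Run the coding agent on one subtask; return a commit reference."""
        return f"commit for {subtask.description}"

    def run_task(task: str, answer) -> list[str]:
        requirements, questions = research(task)
        # Nothing executes until the user has answered every clarifying question.
        requirements += [answer(q) for q in questions]
        # One commit per subtask, stacked for review.
        return [execute(st) for st in plan_subtasks(task, requirements)]

    print(run_task("fix the truncation issue", answer=input))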

You've probably seen tools that do pieces of this, but I think it makes sense as one integrated workflow.

This isn't for vibe coders. I'm building a tool that I can use in my day-to-day work. Async is for experienced developers who know their codebases and products deeply. The goal is to make Async the last tool developers need to build something great. Still early and I'm iterating quickly. Would love to know what you think.

P.S. My cofounder loves light mode, I only use dark mode. I won the argument so our tool only supports dark mode. Thumbs up if you agree with me.

Comments (7)

JoshPurtell · 1h ago
Something I'd consider a game-changer would be making it really easy to kick off multiple Claude instances to tackle a large researched task, and then to view the results and collect them into a final research document.

IME no matter how well I prompt, a single Claude/Codex will never get a successful implementation of a significant feature single-shot. However, what does work is having 5 Claudes try it, reading the code, and cherry-picking the diff segments I like into one franken-spec that I give to a final Claude instance with essentially just "please implement something like this".

It's super manual and annoying with git worktrees for me, but it sounds like your setup could make it slick.
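
A minimal sketch of that manual worktree fan-out, for reference; the paths, branch names, and the headless `claude -p` invocation are assumptions about one way to script it, not something from this thread:

    # Rough sketch of the manual fan-out described above: N git worktrees, each
    # running Claude Code headlessly on the same prompt. Paths, branch names, and
    # the `claude -p` call are assumptions; adapt to your own setup.
    import subprocess
    from pathlib import Path

    PROMPT = "Implement the feature described in SPEC.md"
    N = 5

    for i in range(N):
        tree = Path(f"../attempt-{i}")
        # One worktree per attempt so the candidates don't stomp on each other.
        subprocess.run(["git", "worktree", "add", "-b", f"attempt-{i}", str(tree)], check=True)
        # Headless run; assumes the Claude Code CLI is installed and authenticated.
        subprocess.run(["claude", "-p", PROMPT], cwd=tree, check=True)

    # Afterwards: diff each attempt-* branch against main, cherry-pick the hunks
    # you like into one spec, and hand that to a final instance.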

wjsekfghks · 1h ago
Interesting. So, do you just start multiple instances of Claude Code and give them all the same prompt? Manually cherry-picking from 5 different worktrees sounds complicated. Will see what I can do :)
k__ · 49m ago
I hope it works better than GitHub Copilot Agent.
frankfrank13 · 1h ago
Looks cool. Tbh I think I'd be more interested in just a lightweight local UI to track and monitor Claude Code; I could skip the Linear and GitHub piece.
wjsekfghks · 1h ago
Thanks for the feedback. Yeah, that's where we're heading, as mentioned in the demo video. We'll follow up shortly to release a local tool :)
mmargenot · 7h ago
I think this is a neat approach. When I interact with AI tooling, such as Claude Code, my general philosophy has been to maintain a strong opinion about what it is that I actually want to build. I usually have some system design done or some picture that I've drawn to make sure that I can keep it straight throughout a given session. Without that core conception of what needs to be done, it's a little too easy for an LLM to run off the rails.

This dialogue-based path is a cool way to interact with an existing codebase (and I'm a big proponent of writing and rewriting). At the very least you're made to actually think through the implications of what needs to be done and how it will play with the rest of the application.

How well do you find that this approach handles the long tail of little things that need to be corrected before finally merging? Does it fix the fiddly stylistic errors on its own, or is it more that the UI / PR review approach you've taken is more ergonomic for addressing them?

wjsekfghks · 5h ago
hey! that's awesome to hear, thanks for the feedback.

we've tried a lot of things to make the code more in line with our paradigms (initially tried a few agents to parse out "project rules" from existing code, then used that in the system prompt), but we've found that the agents tend to go off-track regardless. the highest leverage has just been changing the model (Claude writes code a certain way which we tend to prefer, vs GPT, etc.) and a few strong system prompts (NEVER WRITE COMMENTS, repeated twice).

so the questions here are less about that and more about overall functional / system requirements, plus acknowledging that for stylistic things the user will still have to review.