Show HN: Project management system for Claude Code
The problem was that context kept disappearing between tasks. With multiple Claude agents running in parallel, I’d lose track of specs, dependencies, and history. External PM tools didn’t help because syncing them with repos always created friction.
The solution was to treat GitHub Issues as the database. The "system" is ~50 bash scripts and markdown configs that:
- Brainstorm with you to create a markdown PRD, spin up an epic, decompose it into tasks, and sync them with GitHub Issues (a simplified sketch of the sync idea is below the list)
- Track progress across parallel streams
- Keep everything traceable back to the original spec
- Run fast from the CLI (commands finish in seconds)
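To make the "GitHub Issues as the database" idea concrete, here's a heavily simplified sketch of what one of those scripts does, using the `gh` CLI to turn a markdown task file into an issue and keep the mapping in the repo. This is not the actual script from the repo; the file path and label convention are just illustrative:

```bash
#!/usr/bin/env bash
# Simplified illustration: push one markdown task file to GitHub as an issue
# and keep the resulting issue URL next to the file for traceability.
set -euo pipefail

task_file="$1"                                      # e.g. epics/payments/checkout/01-cart-api.md
title="$(head -n 1 "$task_file" | sed 's/^#* *//')" # first heading becomes the issue title

# Assumes the label already exists in the repo; gh prints the new issue URL on stdout.
issue_url="$(gh issue create --title "$title" --body-file "$task_file" --label "epic:payments")"

echo "$issue_url" > "${task_file%.md}.issue"        # record the mapping in the repo
echo "Synced $task_file -> $issue_url"
```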
We’ve been using it internally for a few months and it’s cut our shipping time roughly in half. Repo: https://github.com/automazeio/ccpm
It’s still early and rough around the edges, but has worked well for us. I’d love feedback from others experimenting with GitHub-centric project management or AI-driven workflows.
How are people using auto-edits and these kinds of higher-level abstractions?
When using agents like this, you only see a speedup because you're offloading the time you'd spend thinking about / understanding the code. If you can review code faster than you can write it, you're cutting corners on your code reviews. That's normally fine with humans (this is why we pay them), but not with AI. Most people just code review for nitpicks anyway (rename a variable, add some whitespace, use map/reduce instead of forEach) instead of taking the time to understand the change (you'll be looking at lots of code and docs that aren't present in the diff).
That is, unless you type really slowly - which I've recently discovered is actually a bottleneck for some professionals (slow typing, syntax issues, constantly checking docs, etc.). I'll add that I experience this too when learning a new language, and AI is immensely helpful there.
I keep wondering why. Every project I've ever seen needed lines of code, nuts and bolts removed rather than added. My best libraries consist of a couple of thousand lines.
Of course, there are many, many other kinds of development. When developing novel low-level systems for complicated requirements, you're going to get much poorer results from an LLM, because the project won't fit as neatly into one of the "templates" it has memorized, and the LLM's reasoning capabilities are not yet sophisticated enough to handle arbitrary novelty.
You can. People do. It's not perfect at it yet, but there are success stories of this.
I mean, the parent even pointed out that it works for vibe coding and stuff you don't care about; ...but the 'You can't' refers to this question by the OP:
> I really need to approve every single edit and keep an eye on it at ALL TIMES, otherwise it goes haywire very very fast! How are people using auto-edits and these kinds of higher-level abstractions?
No one I've spoken to is just sitting back writing tickets while agents do all the work. If it was that easy to be that successful, everyone would be doing it. Everyone would be talking about it.
To be absolutely clear, I'm not saying that you can't use agents to modify existing code. You can. I do; lots of people do. ...but that's using it like you see in all the demos and videos; at a code level, in an editor, while editing and working on the code yourself.
I'm specifically addressing the OPs question:
Can you use unsupervised agents, where you don't interact at a 'code' level, only at a high level abstraction level?
...and, I don't think you can. I don't believe anyone is doing this. I don't believe I've seen any real stories of people doing this successfully.
It really depends on the area though. Some areas are simple for LLMs, others are quite difficult even if objectively simple.
Granted, at the moment I'm not a big believer in vibe coding in general, but IMO it requires quite a bit of knowledge to be hands-off and not have it fall into wells of confusion.
There is no magic way. It boils down to less strict inspection.
I try to maintain an overall direction and try to care less about the individual line of code.
Essentially, I'm treating Claude Code as a very fast junior developer who needs to be spoon-fed the architecture.
"We follow a strict 5-phase discipline" - So we're doing waterfall again? Does this seem appealing to anyone? The problem is you always get the requirements and spec wrong, and then AI slavishly delivers something that meets spec but doesn't meet the need.
What happens when you get to the end of your process and you are unhappy with the result? Do you throw it out and rewrite the requirements and start from scratch? Do you try to edit the requirements spec and implementation in a coordinated way? Do you throw out the spec and just vibe code? Do you just accept the bad output and try to build a new fix with a new set of requirements on top of it?
(Also, the LLM-authored README is hard for me to read. Everything is a bullet point or an emoji, and it isn't structured in a way that makes it clear what the thing actually is. I didn't even know what PRD meant until halfway through.)
I was impressed that someone took it up to this level until I saw the telltale signs of AI-generated content in the README. Now I have no faith that this is a system that was developed, iterated, and tested to actually work, rather than just a prompt to an AI to dress up a more down-to-earth workflow like mine.
Evidence of results improvement using this system is needed.
Kidding aside, of course we used AI to build this tool and get it ready for the "public". This includes the README.
I will post a video here and on the repository over the weekend with an end-to-end tutorial on how the system works.
A test-runner sub-agent knows exactly how to run tests, summarize failures, etc. It loads all the context specific to running tests and frees the main agent's context from all of that. And so on...
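As a rough illustration (my own sketch, not a prescribed setup), the sub-agent can own a wrapper script like this, so the main agent only ever sees a short failure summary instead of the full test output:

```bash
#!/usr/bin/env bash
# Hypothetical test-runner wrapper: keep the full log on disk, print only a summary.
set -uo pipefail   # no -e: we want to inspect the test command's exit status ourselves

log="$(mktemp)"
npm test >"$log" 2>&1      # swap in pytest, go test, cargo test, etc.
status=$?

if [ "$status" -eq 0 ]; then
  echo "All tests passed."
else
  echo "Tests failed (exit $status). First failing lines:"
  grep -inE "fail|error" "$log" | head -n 20   # crude summary; full output stays in $log
  echo "Full log: $log"
fi
exit "$status"
```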
I talked to an extremely strong engineer yesterday who is basically doing exactly this.
Would love to see a video/graphic of this in action.
With that being said, a video will be coming very soon.
Hopefully your GitHub tickets are large enough: for example, covering one vertical slice, one cross-cutting function, or some reactive work such as bug fixing or troubleshooting.
The reason is that coding agents are good at decomposing work into small tasks/TODO lists. IMO, too many tickets on GitHub will interfere with this.
When we break an epic down into tasks, we get CC to analyze what can run in parallel and use each issue as a conceptual grouping of smaller tasks, so multiple agents can work on the same issue in parallel.
The issues are relatively large; depending on the feature, every epic has between 5 and 15 issues. When it's time to work on an issue, your local Claude Code will break it down into minute tasks to carry out sequentially.
- Brainstorm a PRD via guided prompts (prds/[name].md).
- Transform PRD into epics (epics/[epic-name]/epic.md).
- Decompose epic into tasks (epics/[epic-name]/[feature-name]/[task].md).
- Sync: push epics & tasks to GitHub Issues.
- Execute: Analyze which tasks can be run in parallel (different files, etc). Launch specialized agents per issue.
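For the Execute step, here's a very rough approximation of the idea from the shell, assuming the `gh` CLI, Claude Code's non-interactive `claude -p` mode, and a made-up "epic:<name>" label convention. This is not how ccpm actually implements it; it's only meant to show the shape of "one agent per issue":

```bash
#!/usr/bin/env bash
# Rough sketch only: launch one background Claude Code run per open issue in an epic.
# The label convention and prompt wording are assumptions, not ccpm's actual schema.
set -euo pipefail

epic_label="epic:payments"

while IFS=$'\t' read -r number title; do
  echo "Starting agent for issue #$number: $title"
  claude -p "Work on GitHub issue #$number: $title. Post progress as issue comments." \
    > "agent-$number.log" 2>&1 &
done < <(gh issue list --label "$epic_label" --state open \
           --json number,title --jq '.[] | "\(.number)\t\(.title)"')

wait   # block until every agent has finished
```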
Are people really doing this? My brain gets overwhelmed if I have more than 2 or 3.