Show HN: Haystack – Review pull requests like you wrote them yourself

35 points by akshaysg · 13 comments · 9/10/2025, 6:21:12 PM · haystackeditor.com
Hi HN!

We’re Akshay and Jake. We put together a tool called Haystack to make pull requests straightforward to read.

What Haystack does:

-- Builds a clear narrative. Changes in Haystack aren’t just arranged as unordered diffs. Instead, they unfold in a logical order, each paired with an explanation in plain, precise language

-- Focuses attention where it counts. Routine plumbing and refactors are put into skimmable sections so you can spend your time on design and correctness

-- Provides full cross-file context. Every new or changed function/variable is traced across the codebase, showing how it’s used beyond the immediate diff

Here’s a quick demo: https://youtu.be/w5Lq5wBUS-I

If you’d like to give it a spin, head over to haystackeditor.com/review! We set up some demo PRs that you should be able to understand and review even if you’ve never seen the repos before!

We used to work at big companies, where reviewing non-trivial pull requests felt like reading a book with its pages out of order. We would jump and scroll between files, trying to piece together the author’s intent before we could even start reviewing. And, as authors, we would spend time restructuring our own commits just to make them readable. AI has made this even trickier. Today it’s not uncommon for a pull request to contain code the author doesn’t fully understand themselves!

So, we built Haystack to help reviewers spend less time untangling code and more time giving meaningful feedback. We would love to hear whether it gets the job done for you!

How we got here:

Haystack began as (yet another) VS Code fork where we experimented with visualizing code changes on a canvas. At first, it was a neat way to show how pieces of code worked together. But customers started laying out their entire codebase just to make sense of it. That’s when we realized the deeper problem: understanding a codebase is hard, and engineers need better ways to quickly understand unfamiliar code.

As we kept building, another insight emerged: with AI woven into workflows, engineers don’t always need to master every corner of a codebase to ship features. But in code review, deep and continuous context still matters, especially to separate what’s important to review from plumbing and follow-on changes.

So we pivoted. We took what we’d learned and worked closely with engineers to refine the idea. We started with simple code analysis (using language servers, tree-sitter, etc.) to show how changes relate. Then we added AI to explain and organize those changes and to trace how data moves through a pull request. Finally, we fused the two by empowering AI agents to use static analyses. Step by step, that became the Haystack we’re showing today.
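
For a flavor of that first step, here’s a rough sketch (illustrative only, not Haystack’s actual implementation) of the kind of pass that relates a diff to the rest of a repo: parse the changed file, work out which functions the diff touches, then grep for their call sites. The file path at the bottom is made up.

    # Illustrative sketch only -- not Haystack's code. Relates a diff to the
    # rest of a repo: which functions did the diff touch, and who calls them?
    import ast
    import re
    import subprocess
    from pathlib import Path

    def changed_lines(repo: Path, path: str) -> set[int]:
        """Line numbers added or modified by the working-tree diff for one file."""
        diff = subprocess.run(
            ["git", "-C", str(repo), "diff", "-U0", "--", path],
            capture_output=True, text=True, check=True,
        ).stdout
        lines: set[int] = set()
        for m in re.finditer(r"^@@ -\d+(?:,\d+)? \+(\d+)(?:,(\d+))? @@", diff, re.M):
            start, count = int(m.group(1)), int(m.group(2) or 1)
            lines.update(range(start, start + count))
        return lines

    def touched_functions(repo: Path, path: str) -> list[str]:
        """Functions whose bodies overlap the changed lines (Python files only)."""
        tree = ast.parse((repo / path).read_text())
        changed = changed_lines(repo, path)
        return [
            node.name
            for node in ast.walk(tree)
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
            and changed & set(range(node.lineno, (node.end_lineno or node.lineno) + 1))
        ]

    def call_sites(repo: Path, name: str) -> list[str]:
        """Plausible call sites of a function elsewhere in the repo."""
        out = subprocess.run(
            ["git", "-C", str(repo), "grep", "-n", f"{name}("],
            capture_output=True, text=True,  # no check=True: git grep exits 1 on no match
        ).stdout
        return out.splitlines()

    if __name__ == "__main__":
        repo = Path(".")
        for fn in touched_functions(repo, "app/models.py"):  # hypothetical path
            print(fn, call_sites(repo, fn))

In practice we lean on language servers and tree-sitter rather than a per-language parser like this, and the AI explanations are layered on top of that signal.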

We’d love to hear your thoughts, feedback, or suggestions!

Comments (13)

tkiolp4 · 1h ago
As I work more with AI, I’ve come to the conclusion that I have no patience to read AI-generated content, whether the content is right or wrong. I just feel like it’s time wasted. Countless examples: meeting summaries (nobody reads them), auto-generated code (we usually do it for prototypes and POCs; if it works, we ship it, no reviews. For serious stuff we take care of the code carefully), and a large etc.

I like AI on the producing side. Not so much on the consuming side.

akshaysg · 1h ago
That's fair! If there were a "minimal" mode where you could still access callers, data flows, and dependencies with no AI text, would it be helpful for your reviews?
ray__ · 57m ago
Not parent, but in my opinion the answer here is yes. I agree that there is a real need here and a potentially solid value proposition (which is not the case with a lot of vscode-fork+LLM-based startups), but the whole point should be to combat the verbosity and featurelessness of LLM-generated code and text. Using an LLM on the backend to discover meaningful connections in the codebase may sometimes be the right call, but the output of that analysis should be some simple visual indication of control flow or dependency like you mention. At first glance, the output in the editor looks more like an expansion rather than a distillation.

Unrelated, but I don't know why I expected the website and editor theme to be hay-yellow or hay-yellow and black instead of the classic purple on black :)

akshaysg · 42m ago
Thanks for the opinion! That makes a lot of sense, and I like the concept of being an extension of a user's own analysis vs. hosing them with information.

Yeah, originally I thought of using yellow/brown or yellow/black, but for some reason I didn't like the color. Plenty of time to go back though!

Ethee · 35m ago
Products like these make me realize we're solving for the wrong problems with a lot of these AI solutions. I don't want you to take this as a hit at you or your product; I actually think it's extremely cool and will likely find a use. But from my perspective, if this is a product you think you need, then you likely have a bigger organizational issue, as PRs are probably the last thing I would want an AI 'intern' to organize for me.
akshaysg · 16m ago
> you likely have a bigger organizational issue

Could you expound on this? In my experience as a software engineer, a pull request could fall into one of two buckets (assuming it's not trivial):

1. The PR is not organized by the author, so it's skimmed and not fully understood because it's so hard to follow

2. The PR author puts a lot of time into organizing the pull request (crafting each commit, trying to build a narrative, etc.) and the review is thorough, but still not easy

I think organization helps the 1st case and obviates the need for the author to spend so much time crafting the PR in the 2nd case (and eliminates messy updates that need to be carefully slotted in).

Curious to hear how y'all handle pull requests!

irrationalfab · 13m ago
This nails a real problem. Non-trivial PRs need two passes: first grok the entrypoints and touched files to grasp the conceptual change and review order, then dive into each block of changes with context.
mclanett · 1h ago
Did not load, sad.

Failed to load resource: net::ERR_BLOCKED_BY_CLIENT

^ I'm not exactly sure what this is about. I think it is https://static.cloudflareinsights.com/beacon.min.js/vcd15cbe... which I would imagine is probably not necessary.

Uncaught TypeError: Cannot convert undefined or null to object at Object.keys (<anonymous>) at review/?pr_identifier=xxx/xxx/1974:43:12

These URLs seem to be kind of revealing.

akshaysg · 1h ago
Trying to fix/find this! Could you email me the repo details? Very sorry for the error here.

In terms of auth: you should get an "unauthenticated" error if you're looking at a repo without authentication (or a non-existent repo).

mclanett · 1h ago
I love this idea, trying it out now. There is QUITE a delay doing the analysis, which is reasonable, so I assume as a productionized (non-demo) release this will be async?
akshaysg · 1h ago
There's a GitHub app that you can install on your repo.

If you install and subscribe to the product, we create a link for you every time you make a pull request. We're working (literally right now!) on making it create a link every time you're assigned a review as well.

We'll also speed up the time in the future (it's pretty slow)!
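
If it helps to picture the flow, here's a generic sketch of how a GitHub App webhook can turn a new pull request into a review link (not our actual code; the link format is guessed from the URLs mentioned above, and token handling is simplified):

    # Generic sketch of a GitHub App webhook flow -- not Haystack's actual code.
    # On each opened pull request, post a comment linking to a review page.
    import os
    import requests
    from flask import Flask, request

    app = Flask(__name__)
    GITHUB_TOKEN = os.environ["GITHUB_INSTALLATION_TOKEN"]  # hypothetical: an App installation token
    REVIEW_BASE = "https://haystackeditor.com/review"        # assumed link format

    @app.post("/webhook")
    def on_webhook():
        if request.headers.get("X-GitHub-Event") != "pull_request":
            return "", 204
        payload = request.get_json()
        if payload.get("action") not in ("opened", "ready_for_review"):
            return "", 204
        repo = payload["repository"]["full_name"]        # e.g. "owner/repo"
        number = payload["pull_request"]["number"]
        link = f"{REVIEW_BASE}/?pr_identifier={repo}/{number}"  # guessed query format
        requests.post(
            f"https://api.github.com/repos/{repo}/issues/{number}/comments",
            headers={"Authorization": f"Bearer {GITHUB_TOKEN}",
                     "Accept": "application/vnd.github+json"},
            json={"body": f"Haystack review ready: {link}"},
            timeout=10,
        )
        return "", 200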

sea-gold · 1h ago
Any ideas what pricing will look like?

What is your privacy policy around AI?

Any plans for a locally-runnable version of this?

akshaysg · 1h ago
1. Pricing would be $20 per person, and we'd spin up an analysis for every PR you create or are assigned to review.

2. We don't train on or retain anything related to your codebase. We do send all the diffs to OpenAI and/or Anthropic (and we have agentic grepping, so the model can see other parts of the codebase as well).

3. Do you mean the ability to run this on your own servers (or even on your own computer with local LLMs)? We do have plans for the former, but I don't know how big the lift on the latter will be, so I'm not sure!