Show HN: Wispbit – Keep codebase standards alive
With the help of AI coding tools, engineers are writing more code than ever. Output has gone up, but the tooling to manage it hasn't kept pace. Background agents still write bad code, and your IDE still produces slop without the right context.
So we built wispbit. It scans your codebase for patterns you already follow and derives rules from them. Rules are kept up to date as your standards change, and you can edit them at any time.
You can enforce these rules during code review, and because they live in one rules system, you can bring them into your coding IDE via MCP so your agents produce more accurate code. Think of it as a portable rules file you can take anywhere.
We put a lot of work into making a system that produces good rules and avoids slop. For repository crawling, we have a lead agent that dispatches subagents, similar to Anthropic's research agent. The subagents look for common patterns within modules and directories and report back to the lead agent, which synthesizes the results. We also run a historical scan over your pull request comments: we determine which ones were addressed, filter out comments that wouldn't make a good rule, and use the rest to create or update rules.
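For the curious, the PR-comment half of that pipeline can be sketched roughly like this. This is an illustrative toy, not wispbit's actual code: the names (`ReviewComment`, `comments_to_rules`) are hypothetical, and the keyword filter stands in for what would realistically be an LLM judge deciding whether a comment expresses a reusable standard.

```python
from dataclasses import dataclass

@dataclass
class ReviewComment:
    body: str
    was_addressed: bool  # did a later commit resolve this comment?

def is_rule_worthy(comment: ReviewComment) -> bool:
    """Keep only addressed comments that express a reusable standard.
    A keyword blocklist stands in for an LLM judge here."""
    if not comment.was_addressed:
        return False
    one_off_markers = ("typo", "nit:", "lgtm")
    return not any(m in comment.body.lower() for m in one_off_markers)

def comments_to_rules(comments: list[ReviewComment]) -> list[str]:
    """Filter and dedupe historical review comments into candidate rules."""
    rules: list[str] = []
    for c in comments:
        if is_rule_worthy(c) and c.body not in rules:
            rules.append(c.body)
    return rules

comments = [
    ReviewComment("Use the shared db client instead of opening a new connection", True),
    ReviewComment("typo in docstring", True),          # one-off, filtered out
    ReviewComment("Prefer explicit error types over bare exceptions", False),  # never addressed
]
print(comments_to_rules(comments))
# → ['Use the shared db client instead of opening a new connection']
```

The "was it addressed?" signal is what makes historical comments valuable: a comment the team actually acted on is much stronger evidence of a real standard than one that was ignored.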
Our early users are seeing 80%+ resolution rates, meaning that 80% of the comments wispbit leaves get resolved.
Long-term, we see ourselves being a validation layer for AI-written code. With tools like Devin and Cursor, we find ourselves having to re-prompt the same solution many times. We still don't know the long-term implications for AI-assisted codebases, so we want to get in front of that as soon as possible.
We've opened up signups for free to HN folks at https://wispbit.com. We're also around to chat and answer questions!