Show HN: Basin MCP – Stops code gen hallucinations
Hey HN — I’m Henry, co-founder of Basin, and I wanted to share something we’ve been working on called Basin MCP, a kind of QA infrastructure for AI coding copilots like Cursor, Windsurf, etc.
We built Basin because we kept noticing the same thing: agentic coding tools are great at generating code, but have no idea whether the thing they built actually works, or even matches the original intent, so they're prone to hallucinations. Basically, they have no feedback loop and no accountability, which leads to serious bugs when hallucinations slip through.
So we asked: what if they could call in help — the way a real dev would call in QA?
That’s what Basin MCP does. It lets copilots trigger full QA workflows on the code they generate, using real-world testing patterns — HTTP requests, flows, even email notifications. You can say things like “and test this with Basin MCP,” and it’ll handle the rest: validating output, simulating inputs, catching regressions, etc.
It’s still early, but here’s what it can already do:
- It works with any copilot, as long as MCP hooks are enabled
- It can run inter-service flow checks, including email and webhooks
- You can define what should and shouldn't be tested in natural language; your chat prompt is the main interface
There's no UI or plugin: you just talk to your copilot like normal and add "...and test with Basin MCP" to your prompt.
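If you haven't wired up an MCP server before, registering Basin in a client like Cursor looks roughly like the sketch below. Treat it as illustrative only: the server name, endpoint URL, and auth header are placeholders, and the exact fields vary by client, so follow the instructions at https://mcp.basin.ai for the real config.

    {
      "mcpServers": {
        "basin": {
          "url": "https://mcp.basin.ai",
          "headers": { "Authorization": "Bearer hackernews" }
        }
      }
    }

After that one-time setup, everything else happens in chat, e.g. "Build a signup endpoint that sends a confirmation email ...and test with Basin MCP."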
We’re not trying to build “AI QA for humans.” We’re building QA for AIs — so they can hold themselves to a higher standard. You can think of it as the copilot’s QA teammate.
We're in beta now. If you want to try it out, you can use the API key "hackernews"; instructions are at https://mcp.basin.ai. Would love to hear your thoughts, especially if you're a serious vibe coder and care a lot about the quality of your vibe-coded software.
Happy to answer any questions and go deep on any of the above!