Show HN: Free local security checks for AI coding in VSCode, Cursor and Windsurf
We just launched Codacy Guardrails, an IDE extension that pairs a code-analysis CLI with an MCP server to enforce security and quality rules on AI-generated code in real time. It hooks into AI coding assistants (like VS Code Agent Mode, Cursor, and Windsurf), silently scanning and fixing AI-suggested code that contains vulnerabilities or violates your coding standards, as the code is being generated.
We built this because coding agents can be a double-edged sword. They do boost productivity, but they can easily introduce insecure or non-compliant code. A research team at NYU found that roughly 40% of Copilot's outputs were buggy or exploitable [1], and other surveys report that developers are spending more time debugging AI-generated code [2].
That's why we created “guardrails” to catch security problems early.
Codacy Guardrails uses a collection of open-source static analyzers (like Semgrep and Trivy) to scan the AI’s output against 2,000+ rules. We currently support JavaScript/TypeScript, Python, and Java, focusing on things like OWASP Top 10 vulnerabilities, hardcoded secrets, dependency checks, code complexity, and style violations; you can also customize the rules to match your project’s needs. We’re not using any AI models here; it’s “classic” static code analysis working alongside your AI assistant.
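To give a concrete flavor, here’s a toy TypeScript snippet (illustrative, not taken from our rule set verbatim) with two things rules like these commonly catch: a hardcoded secret and SQL built by string concatenation:

    import { Client } from "pg";

    // Flagged: hardcoded secret committed to source
    const API_KEY = "sk_live_abc123";

    // Flagged: query built by concatenation (injection risk, OWASP A03)
    async function getUser(db: Client, id: string) {
      return db.query("SELECT * FROM users WHERE id = '" + id + "'");
    }

    // Passes: parameterized query
    async function getUserSafe(db: Client, id: string) {
      return db.query("SELECT * FROM users WHERE id = $1", [id]);
    }

When the agent proposes something like the first two patterns, the guardrails flag it and the agent rewrites it before the change lands.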
Here’s a quick demo: https://youtu.be/pB02u0ntQpM
The extension is free for all developers. (We do have paid plans for teams to apply rules centrally, but that’s not needed to use the extension and local code analysis with agents.)
Setup is pretty straightforward: Install the extension and enable Codacy’s CLI and MCP Server from the sidebar.
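Under the hood, enabling the MCP server just registers it in your editor’s MCP config (for Cursor, that’s .cursor/mcp.json). The extension writes the entry for you; the exact server command may differ from this sketch (the package name below is an assumption):

    {
      "mcpServers": {
        "codacy": {
          "command": "npx",
          "args": ["-y", "@codacy/codacy-mcp"]
        }
      }
    }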
We’re eager to hear what the HN community thinks! Does this approach sound useful in your AI coding workflow? Have you encountered security issues from AI-generated code?
We hope Codacy Guardrails can make AI-assisted development a bit safer and more trustworthy. Thanks for reading!
Get the extension: https://www.codacy.com/get-ide-extension
Docs: https://docs.codacy.com/codacy-guardrails/codacy-guardrails-...
Sources:
[1] NYU research: https://www.researchgate.net/publication/388193053_Asleep_at...
[2] https://devops.com/survey-ai-tools-are-increasing-amount-of-...
How do you avoid "context pollution" when the LLM inevitably cycles on an issue? I've specifically disabled Cursor's "fix linter errors" feature because it constantly clogs up context.
How would you compare this to the Qlty CLI (https://github.com/qltysh/qlty)?
Do you plan to support CLI-based workflows, e.g. for tools like Claude Code and for linting?
Also, a big fat raspberry for their use of tinyurl to obfuscate https://marketplace.visualstudio.com/items?itemName=codacy-a... -- just cruel
Can you explain how/when the "guardrails" are run in Cursor? I mean: how does the extension hook in so that the code in the diff view gets changed?
Does this also work with agents like Claude Code and Amp? I guess since there is an MCP server it can already work, even though it's not explicitly mentioned in the docs?
What are your thoughts on running something like guardrails during dev-time vs CI time?
The guardrails are run every time the agent generates code: we instruct the coding agent to run them on whatever code changed. It doesn't YET work with Claude Code or Amp, but because it leverages an MCP server we can add them easily; it's in our plans.
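For those asking how the hook-in works mechanically: the agent is an MCP client, and after each edit it's instructed to call the analysis tool and fix whatever comes back before finishing. A simplified sketch of that loop using the MCP TypeScript SDK (tool name and argument shape are simplified here, not our exact schema):

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    // Connect to a local MCP server over stdio (command is illustrative)
    const transport = new StdioClientTransport({
      command: "npx",
      args: ["-y", "@codacy/codacy-mcp"],
    });
    const client = new Client({ name: "agent", version: "1.0.0" });
    await client.connect(transport);

    // After editing a file, the agent analyzes the changed path and
    // keeps iterating until no findings remain.
    const result = await client.callTool({
      name: "analyze_file", // hypothetical tool name
      arguments: { path: "src/db.ts" },
    });
    console.log(result.content); // findings the agent then fixes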
I think dev-time is critical, because AI is producing large swaths of code as we speak. And regardless of what happens at dev time, we make sure you can always run our cloud checks at CI time. Thanks for your questions!
Thanks for testing. Please do share your feedback when you test further!