Show HN: Free local security checks for AI coding in VSCode, Cursor and Windsurf

29 points by jaimefjorge on 6/18/2025, 12:42:51 PM | 22 comments
Hi HN!

We just launched Codacy Guardrails, an IDE extension with a code-analysis CLI and an MCP server that enforces security & quality rules on AI-generated code in real time. It hooks into AI coding assistants (like VS Code Agent Mode, Cursor, Windsurf), silently scanning and fixing AI-suggested code that has vulnerabilities or violates your coding standards, while the code is being generated.

We built this because coding agents can be a double-edged sword. They do boost productivity, but can easily introduce insecure or non-compliant code. A research team at NYU recently found that 40% of Copilot’s outputs were buggy or exploitable [1]. Other surveys report that developers are spending more time debugging AI-generated code [2].

That's why we created “guardrails” to catch security problems early.

Codacy Guardrails uses a collection of open-source static analyzers (like Semgrep and Trivy) to scan the AI’s output against 2000+ rules. We currently support JavaScript/TypeScript, Python, and Java, focusing on things like OWASP Top 10 vulns, hardcoded secrets, dependency checks, code complexity, and styling violations. You can customize the rules to match your project’s needs. We're not using any AI models; it's “classic” static code analysis working alongside your AI assistant.
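
For a sense of what these rules catch, here’s a hypothetical Python snippet (an illustration, not taken from Codacy’s rule set) with two of the issue types mentioned above:

    import sqlite3

    API_KEY = "sk_live_abc123"  # hardcoded secret: the kind of thing secret-scanning rules flag

    def get_user(conn: sqlite3.Connection, name: str):
        # SQL built via string interpolation is injection-prone (OWASP A03);
        # a Semgrep-style rule would flag this and suggest a parameterized query:
        #   conn.execute("SELECT * FROM users WHERE name = ?", (name,))
        return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchone()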

Here’s a quick demo: https://youtu.be/pB02u0ntQpM

The extension is free for all developers. (We do have paid plans for teams to apply rules centrally, but that’s not needed to use the extension and local code analysis with agents.)

Setup is pretty straightforward: Install the extension and enable Codacy’s CLI and MCP Server from the sidebar.

We’re eager to hear what the HN community thinks! Does this approach sound useful in your AI coding workflow? Have you encountered security issues from AI-generated code?

We hope Codacy Guardrails can make AI-assisted development a bit safer and more trustworthy. Thanks for reading!

Get extension: https://www.codacy.com/get-ide-extension
Docs: https://docs.codacy.com/codacy-guardrails/codacy-guardrails-...

Sources:
[1] NYU Research: https://www.researchgate.net/publication/388193053_Asleep_at...
[2] https://devops.com/survey-ai-tools-are-increasing-amount-of-...

Comments (22)

brynary · 3h ago
@jaimefjorge — Congrats on the launch!

How would you compare this to the Qlty CLI (https://github.com/qltysh/qlty)?

Do you plan to support CLI-based workflows for tools like Claude Code and linting?

jaimefjorge · 40m ago
Hi Brian. Thanks!

At first glance, I’d say we try to establish a strong link between what you run in the IDE with our CLI (and its configs) and what you run in the cloud in Codacy. We spend a lot of time on coding standards, gates, and making all the tools we integrate (which seems pretty comparable to qlty; we do have our own tools right now, for example for secret scanning) run well with good standards for large teams. We also have an MCP server, and we found that tying code analysis to coding agents is not trivial, so I think that’s also something different. Beyond that, DAST + pen testing, etc.

We do and we’re looking into it. It really started for us when we launched an MCP server.

SkyPuncher · 1h ago
What's the use case for this compared to "standard" Codacy? What problem is solved by running this at code generation time vs the standard PR based feedback?

How do you avoid "context pollution" when the LLM inevitably cycles on an issue? I've specifically disabled Cursor's "fix linter errors" feature because it constantly clogs up context.

jaimefjorge · 45m ago
Hi there. Codacy runs in the cloud when PRs are created. We run a large number of tools, and we have gates, coding standards, etc. It’s a standardization use case. Codacy Guardrails is about local code analysis with a special focus on coding agents. The problem is that AI generates insecure code, and if you don’t have Codacy centrally analyzing things, you’ll introduce vulnerabilities into your repo.

On context pollution, unfortunately we rely a lot on the model actually being used. One thing we do is give clear instructions to only analyze the code being produced, and not act on ALL issues/problems identified. Still, we recommend starting with a good, small selection of tools and going from there: an SCA (mandatory, really), a secret scanner, and a good curated list of security rules. If we feed too many issues to the models, they... well... don’t work.
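
The “only the code being produced” idea looks roughly like this (a minimal sketch of the concept, not Codacy’s actual implementation; the finding and diff structures are assumed):

    # Keep only findings that touch lines the agent just wrote, so
    # pre-existing issues in the file don't flood the model's context.
    def findings_for_changed_code(findings, changed_lines):
        # findings: list of {"path": str, "line": int, ...} from the analyzers
        # changed_lines: {path: set of line numbers the agent modified}
        return [
            f for f in findings
            if f["line"] in changed_lines.get(f["path"], set())
        ]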

SpikedCola · 4h ago
On the https://www.codacy.com/get-ide-extension page, clicking the logo in the top-left corner goes to https://www.codacy.com/home?hsLang=en, which is a 404. The logo link on other pages works.
jaimefjorge · 40m ago
Thanks. We’ll look into it right now. EDIT: should be fixed. Thanks
samschooler · 3h ago
quick nit: clicking on your logo on https://www.codacy.com/get-ide-extension goes to: https://www.codacy.com/home which 404s
jaimefjorge · 39m ago
Sorry about that and thanks for flagging. EDIT: should be fixed. Thanks
godzillabrennus · 3h ago
Working on a 100% AI generated monolith so I plugged your web app into the repo and I installed the plugin in Windsurf. I'll see how it does and report back.
jaimefjorge · 35m ago
Thanks for testing. Please do share feedback. Windsurf is crucially important to us as we’re working with their team to make the experience good.
prophesi · 3h ago
Is it open source, and can the MCP server run locally in a sandboxed environment?
jaimefjorge · 37m ago
Hi there. Yes, the extension, the CLI, and the MCP server are open source.

The analysis can run locally in a sandboxed environment (provided you download the dependencies, tools, etc.).

Only if you want to use our cloud scans, or let your coding agent interact with data from Codacy, would you need the MCP server connecting to our API.

mdaniel · 2h ago
jaimefjorge · 23m ago
Honestly, I’ll take that big fat raspberry. Our website is made in HubSpot, and that poses all sorts of limitations. I deeply regret that. So yes... workarounds, unfortunately.
tosh · 9h ago
kudos @ shipping this jaime

Can you explain how/when the "guardrails" are run in Cursor? I mean: how does the extension hook in so that the code in the diff view gets changed?

Does this also work with agents like Claude Code and Amp? I guess since there is an MCP it can already work even though it's not explicitly mentioned in the docs?

What are your thoughts on running something like guardrails during dev-time vs CI time?

jaimefjorge · 9h ago
thanks tosh!

The guardrails are run every time code is generated by the agent. We give instructions to the coding agents to run the guardrails on the code that is changed. It doesn't YET work with Claude Code and Amp, but because it leverages an MCP server, we can easily do it. It's in the plans.

I think dev-time is critical, because AI is producing large swaths of code as we speak. We also make sure that regardless of what happens in dev time, we can always run our cloud checks in CI time. Thanks for your questions!

romain_batlle · 2h ago
Congrats on launch!
jaimefjorge · 36m ago
Thank you!
rdevzw · 8h ago
Just gave this a try. Pretty interesting how a simple Python script generated with two unnamed models uses a requests library version with CVEs. The scary part is, the script ran. This changes things in terms of leveraging AI. I will come back with more feedback soon, but for now, this is amazing.
jaimefjorge · 8h ago
Hey, thanks for testing! That's been my experience as well; it's very frequent to see libraries with vulnerable versions being introduced in code. What's also interesting is that, despite using incredible AI coding models like Sonnet 4, you still get CVEs in your code. Try this with Codacy Guardrails: "create a Java server using undertow".

Thanks for testing. Please do share your feedback when you test further!

im3w1l · 50m ago
I mean, it's almost inherent to LLMs, right? They only know about versions before their knowledge cutoff. I guess it's a big argument for not putting exact versions in files generated by LLMs, only major (+ minor?).
jaimefjorge · 36m ago
Yes. My point is that, because of training cutoffs, it should be mandatory to run SCA scans when dealing with AI code generation. Not putting exact versions would be a good idea, but that’s not what’s happening today.
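
To make the pinning point concrete in Python’s requirements format (the versions and CVE here are an illustrative example; check advisories for your own dependencies):

    # requirements.txt
    # An exact pin a stale model might emit -- requests 2.19.1 is affected
    # by CVE-2018-18074 (credentials leaked on redirect):
    #   requests==2.19.1
    # A range pin lets the resolver pick a current, patched release:
    requests>=2.28,<3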