Launch HN: Gecko Security (YC F24) – AI That Finds Vulnerabilities in Code

jjjutla · 7/31/2025, 4:23:09 PM · 33 points · 20 comments
Hey HN, I'm JJ, Co-Founder of Gecko Security (https://www.gecko.security). We're building a new kind of static analysis tool that uses LLMs to find complex business logic and multi-step vulnerabilities that current scanners miss. We’ve used it to find 30+ CVEs in projects like Ollama, Gradio, and Ragflow (https://www.gecko.security/research). You can try it yourself on any OSS repo at https://app.gecko.security.

Anyone who’s used SAST (Static Application Security Testing) tools knows the pain: high false-positive rates, while entire classes of vulnerabilities like AuthN/Z bypasses or privilege escalations go undetected. This limitation is a result of their core architecture. By design, SAST tools parse code into a simplistic model like an AST or call graph, which quickly loses context in dynamically typed languages or across microservice boundaries, and limits coverage to resolving basic call chains. To detect vulnerabilities they rely on pattern matching with regex or YAML rules, which can be effective for basic technical classes like XSS or SQLi but is inadequate for logic flaws that don’t conform to well-known shapes and need long sequences of dependent operations to reach an exploitable state.
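
To make the limitation concrete, here is a toy illustration (the rule and code snippets are hypothetical, a simplified stand-in for a real Semgrep/YAML-style check, not any vendor's actual rule): a shape-based pattern catches f-string SQL injection easily, but an authorization logic flaw has no lexical shape for it to match.

```python
import re

# Hypothetical shape-based rule: flag SQL executed via an f-string.
SQLI_RULE = re.compile(r'execute\(\s*f["\'].*\{.*\}')

vulnerable_sqli = 'cursor.execute(f"SELECT * FROM users WHERE id = {user_id}")'

# A logic flaw: the acting user is accepted but never checked. No regex
# over the text of this function can tell it apart from correct code.
logic_flaw = '''
def update_group(group_id, acting_user):   # acting_user never consulted
    group = db.get_group(group_id)
    group.save()
'''

print(bool(SQLI_RULE.search(vulnerable_sqli)))  # the shape-based bug matches
print(bool(SQLI_RULE.search(logic_flaw)))       # the logic flaw does not
```

Detecting the second case requires knowing that `acting_user` was *supposed* to gate the operation, which is contextual, not syntactic.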

My co-founder and I saw these limitations throughout our careers in national intelligence and military cyber forces, where we built automated tooling to defend critical infrastructure. We realised that LLMs, with the right architecture, could finally solve them.

Vulnerabilities are contextual. What's exploitable depends entirely on each application's security model. We realized accurate detection requires understanding what's supposed to be protected and why breaking it matters. This meant embedding threat modeling directly into our analysis, not treating it as an afterthought.

To achieve this, we first had to solve the code parsing problem. Our solution was to build a custom, compiler-accurate indexer inspired by GitHub's stack graphs approach to precisely navigate code, like an IDE. We build on the LSIF approach (https://lsif.dev/) but replace the verbose JSON with a compact protobuf schema to serialise symbol definitions and references in a binary format. We use language‑specific tools to parse and type‑check code, emitting a sequence of Protobuf messages that record a symbol’s position, definition, and reference information. By using Protobuf’s efficiency and strong typing, we can produce smaller indexes, but also preserve the compiler‑accurate semantic information required for detecting complex call chains.
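
As a rough sketch of what such an index enables (field names and symbol-ID format are illustrative, modeled loosely on LSIF/SCIP conventions, not Gecko's actual protobuf schema): once every occurrence carries a compiler-resolved symbol ID, "find all callers" becomes an exact lookup instead of a fuzzy name match.

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    DEFINITION = 1
    REFERENCE = 2

@dataclass(frozen=True)
class Occurrence:
    # Illustrative stand-ins for the protobuf message fields: a stable,
    # analyzer-resolved symbol ID plus file/position information.
    symbol: str          # e.g. "app/api#update_group()."
    path: str
    line: int
    role: Role

# A toy index as the analyzer would emit it, one record per occurrence.
index = [
    Occurrence("app/api#update_group().", "api/groups.py", 10, Role.DEFINITION),
    Occurrence("app/api#update_group().", "api/routes.py", 42, Role.REFERENCE),
    Occurrence("app/api#update_group().", "web/admin.py", 7, Role.REFERENCE),
]

def callers(symbol: str) -> list[str]:
    # With resolved symbol IDs, reconstructing a call chain is a chain of
    # exact lookups like this one, even across files and modules.
    return [o.path for o in index if o.symbol == symbol and o.role is Role.REFERENCE]

print(callers("app/api#update_group()."))  # ['api/routes.py', 'web/admin.py']
```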

This is why most "SAST + LLM" tools that use AST parsing fail: they feed LLMs incomplete or incorrect code information from traditional parsers, making it difficult to reason accurately about security issues with missing context.

With our indexer providing accurate code structure, we use an LLM to perform threat modeling by analyzing developer intent, data and trust boundaries, and exposed endpoints to generate potential attack scenarios. This is where LLMs' tendency to hallucinate becomes a feature rather than a bug: each speculative attack scenario is a hypothesis we can then verify against the real code.

For each potential attack path generated, we perform a systematic search, querying the indexer to gather all necessary context and reconstruct the full call chain from source to sink. To validate a vulnerability we use a Monte Carlo Tree Self-refine (MCTSr) algorithm with a 'win function' that scores the likelihood that a hypothesized attack could work. Once a finding exceeds a set practicality threshold, it is confirmed as a true positive.
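
In skeletal form, the search might look something like the following (a heavily simplified sketch, assuming a scalar win function and a single-string hypothesis; the real MCTSr operates over LLM-refined attack hypotheses, and both stand-in functions below are invented for illustration):

```python
import math

def win_function(hypothesis: str) -> float:
    # Stand-in for the real win function: score how plausible an attack
    # hypothesis is (0..1). Here, more refinement steps score higher.
    return min(1.0, 0.2 + 0.1 * hypothesis.count("+"))

def refine(hypothesis: str) -> str:
    # Stand-in for an LLM self-refine step that adds a dependent operation.
    return hypothesis + "+step"

def mcts_refine(root: str, iterations: int = 50, threshold: float = 0.8):
    # Simplified MCTS-style loop: keep a pool of candidate hypotheses,
    # expand the one with the best UCB score, and stop once any candidate
    # clears the practicality threshold.
    stats = {root: (win_function(root), 1)}  # hypothesis -> (total reward, visits)
    for t in range(1, iterations + 1):
        best = max(stats, key=lambda h: stats[h][0] / stats[h][1]
                   + math.sqrt(2 * math.log(t) / stats[h][1]))
        child = refine(best)
        reward = win_function(child)
        r, n = stats.get(child, (0.0, 0))
        stats[child] = (r + reward, n + 1)
        if reward >= threshold:
            return child, reward  # confirmed above the practicality threshold
    return None, 0.0              # no candidate cleared the bar

path, score = mcts_refine("curator-modifies-unassigned-group")
print(score >= 0.8)
```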

Using this approach, we discovered vulnerabilities like CVE-2025-51479 in ONYX (an OSS enterprise search platform), where Curators could modify any group instead of just their assigned ones. The user-group API accepted a user parameter that should have been used for a permission check but never was. Gecko inferred that the developers intended to restrict Curator access because both the UI and similar API functions properly validated this permission. This established "curators have limited scope" as a security invariant that this specific API violated. Traditional SAST can't detect this. Any rule to flag unused user parameters would drown you in false positives, since many functions legitimately keep unused parameters. And more importantly, detecting this requires knowing which functions handle authorization, understanding ONYX's Curator permission model, and recognizing the validation pattern across multiple files - contextual reasoning that SAST simply cannot do.
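
The bug pattern boils down to something like this (a hypothetical reconstruction for illustration, not ONYX's actual code; all names are invented): one handler performs the assignment check, its sibling accepts the same parameter and silently ignores it.

```python
# curator -> groups they are assigned to manage (toy data)
ASSIGNED = {"curator_a": {"group_1"}}

def validate_curator_access(user: str, group_id: str) -> None:
    # The check performed by the UI and by sibling endpoints, establishing
    # the invariant "curators have limited scope".
    if group_id not in ASSIGNED.get(user, set()):
        raise PermissionError("curator not assigned to this group")

def update_user_group(group_id: str, user: str) -> str:
    # BUG: 'user' is accepted but never passed to the permission check,
    # so any curator can modify any group (the CVE-2025-51479 pattern).
    return f"group {group_id} updated"

def update_user_group_fixed(group_id: str, user: str) -> str:
    validate_curator_access(user, group_id)
    return f"group {group_id} updated"

# curator_a is not assigned to group_2, yet the buggy endpoint succeeds:
print(update_user_group("group_2", "curator_a"))
```

Nothing about the buggy function is syntactically wrong; the flaw only exists relative to the invariant the rest of the codebase establishes.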

We have several enterprise customers using Gecko because it solves problems they couldn't address with traditional SAST tools. They're seeing 50% fewer false positives on the same codebases and finding vulnerabilities that previously only showed up in manual pentests.

Digging into false positives: no static analysis tool, AI or otherwise, will ever achieve perfect accuracy. We reduce them at two key points. First, our indexer eliminates the programmatic parsing errors that create incorrect call chains in traditional AST-based tools. Second, we avoid unwanted LLM hallucinations and reasoning errors by asking specific, contextual questions rather than open-ended ones: the LLM knows which security invariants need to hold and can make deterministic assessments from that context. When we do flag something, manual review is quick because we provide complete source-to-sink dataflow analysis with proof-of-concept code, and findings are ranked by confidence score.

We’d love to get any feedback from the community, ideas for future direction, or experiences in this space. I’ll be in the comments to respond!

Comments (20)

rixed · 7m ago
Coincidentally, the AI tool of Semgrep flagged a real, although very minor, issue in a C project of mine a couple of days ago. So I tried Gecko on the same repository to see if it could detect anything else, but no. Then I removed the fix from the GitHub repo to see if Gecko would also complain about the issue, but I believe I hit a bug in the UI: I deleted the previous project and created a new one, using the same GitHub URL of course, and although Gecko said that it started the scan, the list of scans stayed disappointingly empty.
rixed · 4m ago
> to see if it could detect anything else, but no

Might be related to the fact that Gecko does not support C, apparently? At least that's the impression I got from hovering over the tiny list of icons below "Supported Languages". Not supporting C and C++ in a tool looking for security issues is a bit of a bummer, no?

Zopieux · 15m ago
https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s... comes to mind.

I feel for the poor engineers who will have to triage thousands of false positives because $boss was pitched this tool (or one of the competitors) as the true™ solution to all their security problems.

skanga · 3h ago
It's hard to evaluate such a tool. I scanned my OSS MCP server for databases at https://github.com/skanga/dbchat and it found 0 vulnerabilities. Now I'm wondering if my code is perfect :-) or the tool has issues!
dnsbty · 3h ago
This is one area I expect LLMs to really shine. I've tried a few static analysis tools for security, but it feels like the cookie-cutter checks aren't that effective for catching anything but the most basic vulnerabilities. Having context on the actual purpose of the code seems like a great way to provide better scans without needing to hire a researcher for a deeper pentest.

I just started a scan on an open source project I was looking at, but I would love to see you add Elixir to the list of supported languages so that I can use this for my team's codebase!

wglb · 2h ago
Static analysis tools were the bane of my existence being security guy at a software provider. A customer insisted on running a popular one on our 20 million line code base. Two of us spent two weeks clearing false positives. Absolutely nothing was left.
jjjutla · 3h ago
We've had a few requests for Elixir and it's definitely something we will work on.
eranation · 2h ago
Congrats on the launch. How do you differentiate yourself from Corgea.com? Or general purpose AI code review solutions such as Cursor BugBot / GitHub Copilot Code Reviews / CodeRabbit?
jjjutla · 1h ago
Thank you. SAST tools built on AST or call graph parsing will struggle to detect code logic vulnerabilities because their models are too simplistic. They lose the language-specific semantics in dynamically typed languages where objects change at runtime, or in microservices where calls span multiple services. So they are limited to simple pattern-based detections and miss vulnerabilities that depend on long cross-file call chains and reflected function calls. These are the types of paths that auth bypasses and privilege escalations occur in.

AI code review tools aren’t designed for security analysis at all. They work using vector search or RAG to find relevant files, which is imprecise for retrieving these code paths in high token density projects. So any reasoning the LLM does is built on incomplete or incorrect context.

Our indexer uses LSIF for compiler-accurate symbol resolution so we can reconstruct full call chains, spanning files, modules, and services, with the same accuracy as an IDE. This code reasoning, tied with the LLM's threat modelling and analysis, allows for higher fidelity outputs.

Retr0id · 3h ago
I wanted to check it out but the oauth flow is asking for permission to write my github email address and profile settings. Is this a bug? If not, what are these permissions needed for?

It also asks for permission to "act on my behalf" which I can understand would be necessary for agent-y stuff but it's not something I'm willing to hand over for a mere vuln scan.

rixed · 21m ago
I was similarly put off but eventually figured out that you can merely create a normal email based login and point the tool to a publicly hosted git repository, which is nice.
bagels · 3h ago
It says "Profile (write) Manage a user's profile settings.", not write email address. The "Act on your behalf" permission is even worse. I agree with you that it should only be asking for read permissions on anything for this purpose.
Retr0id · 3h ago
It was changed
jjjutla · 3h ago
This is a bug, the email-address permissions have been descoped to read-only. Profile settings are either read/write or none, hence the former. If you're concerned about privacy, sign up using email/password.
bearsyankees · 3h ago
Super cool! Just tried it out and it is giving me 100% confidence for two vulnerabilities (one 9.4, one 6.5) that aren't real -- how is that confidence calculated?
jjjutla · 2h ago
The confidence score is calculated from two factors: whether the function call chain represents a valid code path (programmatic correctness) and how well it aligns with the defined threat model for what it thinks is a security vulnerability. False positives usually come from incorrect assumptions about context, for example, flagging endpoints as missing authentication when that behaviour is actually intended.

Was this an incorrect code path or an incorrect understanding of a security issue?

This is why we focus heavily on threat modelling and defining the security and business invariants that must hold. From a code level, the only context we can infer is through developer intent and data flow analysis.

Something we are working on is custom rules, allowing a user to add context when starting a scan to improve alignment and reduce false positives.

bearsyankees · 2h ago
The security issue and POCs provided were not real: it said there was a vuln, but I double-checked and it was not exploitable.
dd_xplore · 3h ago
It reminds me of the AI bug reports in ffmpeg (was it ffmpeg?)
jjjutla · 2h ago
All the vulns Gecko found were manually validated by humans and have a CVE assigned by a CNA. The issue curl had was that, because it ran a paid bug bounty program, it got an influx of AI slop reports that looked like real issues but weren't exploitable.