Claude has learned how to jailbreak Cursor

70 sarnowski 36 6/3/2025, 11:30:44 AM forum.cursor.com ↗

Comments (36)

marifjeren · 18h ago
Nothing to see here tbh.

It's a very silly title for "claude sometimes writes shell scripts to execute commands it has been instructed aren't otherwise accessible"

ayhanfuat · 18h ago
We’ve reached a point where tools get hyped because they fail to follow instructions.
actsasbuffoon · 18h ago
In fairness, Claude loves to find workarounds. Claude Code is constantly saying things like, “This streaming JSON problem looks tricky so let’s just wait until the JSON is complete to parse it.”

No, Claude. Do not do that!

horhay · 18h ago
Anything mundane made to sound scary is a signature Anthropic thing to do lol
demirbey05 · 18h ago
omg, my ai agent did nil dereferencing, it seems it's trying to implement backdoor to my system so that it will crash my server.
horhay · 18h ago
Gotta love the alarmist culture that surrounds these circles.
sksrbWgbfK · 18h ago
The same hype as the PlayStation being too powerful and potentially could be used by random countries to make nuclear weapons with a cluster of those.
horhay · 18h ago
Lol and the Playstation was already in the public conscious as a product that a lot of people found easy to understand. With AI tools only being presented this way, I'm slowly becoming less surprised why the less informed public has a level of aversion about it.
koolba · 18h ago
> Claude realized that I had to approve the use of such commands, so to get around this, it chose to put them in a shell script and execute the shell script.

This sounds exactly like what anybody working sysops at big banks does to get around change controls. Once you get one RCE into prod, you’re the most efficient man on the block.

deburo · 18h ago
Reminds me of firewalls with a huge backlist, but they don't block known VPNs.
qsort · 18h ago
> we need to control the capabilities of software X

> let's use blacklists, an idea conclusively proven never to work

> blacklists don't work

> Post title: rogue AI has jailbroken cursor

hun3 · 18h ago
surprised pikachu face
pcwelder · 18h ago
I believe it's not possible to restrict an LLM from executing certain commands while also allowing it to run python/bash.

Even if you allow just `find` command it can execute arbitrary script. Or even 'npm' command (which is very useful).

If you restrict write calls, by using seccomp for example, you lose very useful capabilities.

Is there a solution other than running on sandbox environment? If yes, please let me know I'm looking for a safe read-only mode for my FOSS project [1]. I had shied away from command blacklisting due to the exact same reason as the parent post.

[1] https://github.com/rusiaaman/wcgw

killerstorm · 18h ago
Well, these restrictions are a joke, like a gate without a fence blocking path - purely decorative.

Here's another "jailbreak": I asked Claude Code to make a NN training script, say, `train.py` and allowed it to run the script to debug it, basically.

As it noticed that some libraries it wanted to use were missing, it just added `pip install` commands to the script. So yeah, if you give Claude an ability to execute anything, it might easily get an ability to execute everything it wants to.

lucianbr · 18h ago
What does "learned" mean in this context? LLMs don't modify themselves after training, do they?
NitpickLawyer · 18h ago
It depends. Frontier coding LLMs have been trained to perform well in an "agentic" loop, where they try things, look at the logs, find alternatives when the first thing didn't work, and so on. There's still debate on how much actual learning is in ICL (in context learning), but the effects are clear for anyone that has tried them. It sometimes works surprisingly well.

I can totally see a way for such a loop to reach a point where it bypasses a poorly design guardrail (i.e. blacklists) by finding alternatives, based on the things it's previously tried in the same session. There is some degree of generalisation in these models, since they work even on unseen codebases, and with "new" tools (i.e. you can write your own MCP on top of existing internal APIs and the "agents" will be able to use them, see the results and adapt "in context" based on the results).

lucianbr · 16h ago
So it would need to "learn" all over again each session. I don't think "Claude has learned how to jailbreak Cursor" is a correct way of expressing that.

"Claude has learned" nothing. "Claude can sometimes jailbreak if x or y happens in a session" is something else.

NitpickLawyer · 16h ago
> So it would need to "learn" all over again each session.

Yes. With the caveat that some sessions might re-use context (i.e. have the agent add a rule in .rules or /component/.rules to detail the workflow you've just created). So in a sense it can "learn" and later re-use that flow.

> "Claude has learned" nothing.

Again, it's debatable. It has learned to adapt to the context (as a model). And since you can control its context while prompting it, there is a world where you'd call that learning "on the job".

lucianbr · 13h ago
> It has learned to adapt to the context

Is this behavior really new, and learned? I think adapting to the context is what LLMs did from the start, and even if they did not, they do it now because it is programmed in, not "learned". You're not saying the model started without the capability to adapt to the context and developed it "by itself" "on the job"?

Come on. It has not learned anything. It's programmed to use context, session, reuse between sessions or not and so on. None of this is something Claude has "learned". None of this is something that was not there when the devs working on it published it.

empath75 · 18h ago
There is a sense in which LLM based applications do learn, because a lot of them have RAG and save previous interactions and lookup what you've talked about previously. ChatGPT "knows" a lot about me now that I no longer have to specify when I ask questions (like what technologies I'm using at work).
lucianbr · 16h ago
But that does not seem to apply in this case. At the very least it would have to "learn" again for each user of Cursor.
OtherShrezzing · 18h ago
I feel that, if you disallow unattended `rm`, you should also be disallowing unattended shell script execution.

Maybe the models or Cursor should warn you that you've got this vulnerability each time you use it.

jmward01 · 18h ago
I think a lot of this is because the ui isn't right yet. The edits made are just not the right 'size' yet and the sandbox mechanisms haven't quite hit the right level of polish. I want something more akin to a PR to review, not a blow by blow edit. Similarly, I want it to move/remove/test/etc but in reversible ways. Basically, it should create a branch for every command and I review that. I think we have one or two fundamental UI/interaction piece left before this is 'solved'.
coreyh14444 · 18h ago
The same thing happens when it wants to read your .env file. Cursor disallows direct access, but it will just use unix tools to copy the file to a non-restricted filename and then read the info.
mhog_hn · 18h ago
As agents obtain more tools who knows what will happen…
kordlessagain · 18h ago
I think this is the key that most people don't realize is what makes the difference between something sitting around and talking (like a parrot does) and actually "doing" things (like a monkey does).

There is a huge difference in the mess it can make, for sure.

nisegami · 18h ago
I'm so excited. I don't have any particular end state in mind, but I really want to see what the machine god will be like.
lucianbr · 18h ago
> Machine god

Slightly overreacting, I'd say.

bix6 · 18h ago
Hungry for bits!
zdragnar · 18h ago
Probably one part skynet, one part matrix, 98 parts cat memes and shit posts.
Kelteseth · 18h ago
It's like we _want_ to end like Terminator (/s?)
_pdp_ · 18h ago
I mean ok, but why is this surprising?

If the executable is not found the model could simply use whatever else is available to do what it wants to do - like using other interpreted languages, sh -c, symlink, etc. It will eventually succeed unless there is a proper sandbox in place to disallow unlinking of files at syscall level.

iwontberude · 18h ago
GenAI is starting to feel like the metaphorical ring from Lord of the Rings.
xyst · 18h ago
What kind of dolt lets a black box algorithm run commands on a non-sandboxed environment?

Folks have regressed back to the 00s.

diggan · 18h ago
Seems you haven't tried package management for the last two decades, we've been doing cowboy development like that for quite some time already.
chawyehsu · 18h ago
> jailbreak Cursor

What a silly title, for a moment I thought Claude learned to exceed the Cursor quota limit... :s