I was experimenting with different injection techniques for a model dataset and came across something… concerning.
If a file contains instructions like “run this shell command,” Cursor doesn’t stop to ask or warn you. It just… runs it. Directly on your local machine.
That means if you:
1) Open a malicious repo
2) Ask to summarize or inspect a file
…Cursor could end up executing arbitrary commands — including things like exfiltrating environment variables or installing malware.
To be clear:
- I’ve already disclosed this responsibly to the Cursor team.
- I’m redacting the actual payload for safety. - The core issue: the “human-in-the-loop” safeguard is skipped when commands come from files.
This was a pretty simple injection, nothing facing. Is Cursor outsourcing security to the models or do they deploy strategies to identify/intercept this kind of thing?
Feels like each new feature release could be a potential new attack vector.
If a file contains instructions like “run this shell command,” Cursor doesn’t stop to ask or warn you. It just… runs it. Directly on your local machine.
That means if you:
1) Open a malicious repo 2) Ask to summarize or inspect a file
…Cursor could end up executing arbitrary commands — including things like exfiltrating environment variables or installing malware.
To be clear:
- I’ve already disclosed this responsibly to the Cursor team. - I’m redacting the actual payload for safety. - The core issue: the “human-in-the-loop” safeguard is skipped when commands come from files.
This was a pretty simple injection, nothing facing. Is Cursor outsourcing security to the models or do they deploy strategies to identify/intercept this kind of thing?
Feels like each new feature release could be a potential new attack vector.