Hidden inside the article is another term that I think we'll start to hear a lot more in the coming days: "VibeScamming"
benterix · 43m ago
This should hit the headlines.
I was always of the opinion that AI of all kinds is not a threat unless someone decides to connect it to an actuator so that it has direct and uncontrolled effect on the external world. And now it's happening en masse with agents, MCPs etc. I don't even mention things we don't know about (military and other classified projects).
jtc331 · 1h ago
I appreciate that the article correctly points out the core design flaw here of LLMs is the non-distinction between content and commands in prompts.
It’s unclear to me if it’s possible to significantly rethink the models to split those, but it seems that that is a minimal requirement to address the issue holistically.
Dilettante_ · 1h ago
>"Scamlexity" - a new era of scam complexity
ಠ_ಠ
Terr_ · 1h ago
Yeah, I don't think their attempt to coin a word there is going to work.
I was always of the opinion that AI of all kinds is not a threat unless someone decides to connect it to an actuator so that it has direct and uncontrolled effect on the external world. And now it's happening en masse with agents, MCPs etc. I don't even mention things we don't know about (military and other classified projects).
It’s unclear to me if it’s possible to significantly rethink the models to split those, but it seems that that is a minimal requirement to address the issue holistically.
ಠ_ಠ