We're seeing more companies deploy AI agents that generate and execute code in production environments, and I'm curious about the security and governance challenges this creates.
For those running AI-generated code in production:
What's your biggest security concern? (prompt injection, unintended data access, compliance issues?)
How do you currently handle security scanning and approval workflows?
Have you had any close calls or incidents?
What tools/processes do you wish existed?
I'm researching this space and would love to hear about real-world experiences. There seems to be a gap between traditional SAST tools and the unique risks of AI-generated code.
If you're interested in sharing more detailed thoughts, I put together a quick survey: https://buildpad.io/research/EGt1KzK