Over the past week (July 3–10, 2025), I discovered and reproduced a serious vulnerability in a widely used LLM platform: a cross-user context leak. The flaw exposed other users' data, including source code, Excel spreadsheets, and personally identifiable information (PII).
I have acted in good faith to report this issue through all appropriate channels:
Initial report via the official bug bounty program.
Escalation through internal security channels.
Direct outreach to the company’s security team, executives, and even investors.
Despite these efforts, I was met with silence, gaslighting, and stonewalling. As of now, there has been no acknowledgment, remediation, or accountability. Given the severity of the data involved, it is hard to imagine this is simply being ignored; my hope is that a war room is in session and legal teams are already preparing a response.
I am withholding specific names and technical details for now, but if no responsible action is taken, I intend to report this to the appropriate regulatory authorities within the next 72 hours.
This is a serious issue, and users deserve transparency.