Ask HN: Privacy concerns when using AI assistants for coding?

Kholin · 5/9/2025, 1:36:13 AM · 6 points · 4 comments
I've recently seen some teams claim to use third-party AI assistants like Claude or ChatGPT for coding. Don't they consider it a problem to feed their proprietary commercial code into the services of these third-party companies?

If you feed the most critical parts of your project to an AI, wouldn't that create a security risk? The AI would then have an in-depth understanding of your project's core architecture. Couldn't other users of the same AI then extract those underlying details and use them to breach your security defenses?

Furthermore, couldn't other users then easily copy your code without any attribution, making it seem no different from open-source software?

Comments (4)

apothegm · 18h ago
In theory, these companies all claim they don’t use data from API calls for training. Whether or not they adhere to that is… TBD, I guess.

So far I’ve decided to trust Anthropic and OpenAI with my code, but not Deepseek, for instance.

baobun · 18h ago
Especially under the current US administration and geopolitical climate?

Yeah, we're not doing that.

Also moved our private git repos and CIs to self-managed.

bhaney · 18h ago
> The AI would then have an in-depth understanding of your project's core architecture

God how I wish this were true

rvz · 18h ago
Don't forget that your env API keys are getting read and sent to Cursor, Anthropic, OpenAI and Gemini as well.
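Before pointing an assistant at a repo, one way to see what might leak is a quick scan for credential-looking lines. A minimal sketch (the patterns here are illustrative only; real secret scanners such as gitleaks use far more rules):

```python
import re
from pathlib import Path

# Illustrative patterns only -- a real scanner would cover many more formats.
SECRET_PATTERNS = [
    re.compile(r"(?i)\b(api[_-]?key|secret|token|password)\b\s*[=:]\s*\S+"),
]

def scan_for_secrets(root: str) -> list[tuple[str, int, str]]:
    """Return (file path, line number, line) for lines that look like credentials."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip it
        for lineno, line in enumerate(text.splitlines(), 1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

Anything it flags (typically `.env` files) is a candidate for the tool's ignore mechanism, e.g. Cursor's `.cursorignore`, so it never gets uploaded in the first place.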