Ask HN: What are your thoughts on LLMs and user privacy?

4 points by eniz on 5/15/2025, 11:08:48 AM | 3 comments
I'm increasingly curious about user privacy when it comes to hosted LLM usage. Many popular LLM apps and APIs require sending your prompts, messages, and potentially sensitive information to remote servers for processing—sometimes routed through multiple third-party providers.

- How much do you trust major LLM providers (OpenAI, Anthropic, Google, etc.) with your data?

- For those working on or deploying LLM applications, what approaches do you take to maximize user privacy?

- Do you think end users are generally aware of where their data is going and how it's being used, or is this still an overlooked issue?

I'd love to hear your perspectives, experiences, or any best practices you recommend for protecting privacy when deploying LLM-powered applications.

Comments (3)

Bender · 9h ago
I think combining LLMs with big data was the most genius way to extract highly secret and sensitive information from individuals, intellectual property from companies, secrets from governments and military operations, and pillow talk from executives and politicians. I could not have come up with a better way to burn it all down, given that all of these secrets will "leak" at some point. I fully expect it to become a master of manipulation, especially for those who think their LLM is their significant other. Social media manipulation algorithms will be supplanted by AI.
JohnFen · 13h ago
> How much do you trust major LLM providers (OpenAI, Anthropic, Google, etc.) with your data?

I have zero trust in these companies on this count, and that's the main reason why I avoid using products that incorporate "AI".

scarface_74 · 49m ago
But you trust Google, Gmail, and a host of other online service providers? How do you know that Google isn't using your email to train? You either have to trust your service providers or not use them.

This is like the early days when people didn't trust buying things over the internet.