Broadly similar to what Apple is trying with their private compute work.
It's a great idea but the trust chains are so complex they are hard to reason about.
In "simple" public key encryption reasonably technically literate people can reason about it ("not your key, not your X") but with private compute there are many layers, each of which works in a fairly complex way and AFAIK you always end up having to trust a root source of trust that certifies the trusted device.
It's good in the sense that it is trust minimization, but it's hard to explain, and the cynicism (see HN comments along the lines of "you can't trust it because big tech/gov interference", etc.) means I am sadly pessimistic about the uptake.
I wish it weren't so, though. The cynicism in particular I find disappointing.
squigz · 2h ago
Why do you find it disappointing? It seems quite appropriate to me.
brookst · 1h ago
Not GP, but to me it is also disappointing because it’s just the old “if seatbelts don’t prevent every injury, why wear them?” argument.
On the one hand you have systems where anyone at any company in the value chain can inspect your data ad hoc, with no auditing or notification.
On the other hand, you have systems that prevent casual security / privacy violations but could still be subverted by a state actor or the company that has the root of trust.
Neither is perfect. But it’s cynical and nihilistic to profess to see no difference.
Risk reduction should be celebrated. Those who see no value in it come across as zealots.
grugagag · 4h ago
> We’re sharing an early look into Private Processing, an optional capability that enables users to initiate a request to a confidential and secure environment and use AI for processing messages where no one — including Meta and WhatsApp — can access them.
What is this, and what is it supposed to mean? I have a hard time trusting these companies with any privacy, and while this wording may be technically correct, they’ll likely extract all the meaning they can from your communication, and probably even run some AI-enabled surveillance service.
justanotheratom · 3h ago
I don't understand the knee-jerk skepticism. This is something they are doing to gain trust and encourage users to use AI on WhatsApp.
WhatsApp wasn't always end-to-end encrypted; then in 2021 it was - a step in the right direction. Similarly, AI interaction in WhatsApp today is not private, which is something they are trying to improve with this effort - another step in the right direction.
mhio · 2h ago
What's the motive "to gain trust and encourage users to use AI on WhatsApp"? Meta aren't a charity. You have to question their motives because their motive is to extract value out of their users who don't pay for a service, and I would say that whatsapp has proven to be a harder place to extract that value than their other ventures.
btw WhatsApp implemented the Signal protocol around 2016.
justanotheratom · 1h ago
"motive is to extract value out of their users who don't pay for a service"
that is called a business.
If you find something deceitful in the business practice, that should certainly be called out and even prosecuted. I don't see why an effort to improve privacy has to get skeptical treatment just because "big business bad", bla bla.
ipsum2 · 3h ago
Did you read the next paragraphs? They literally describe the details. I would quote the parts that respond to your question, but I would be quoting the entire post.
> This confidential computing infrastructure, built on top of a Trusted Execution Environment (TEE), will make it possible for people to direct AI to process their requests — like summarizing unread WhatsApp threads or getting writing suggestions — in our secure and private cloud environment.
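A minimal sketch of what that buys you at the transport level (Python with the pyca/cryptography package; the key handling and single-shot handshake are my simplification, not the published protocol): the request is encrypted to a key that only exists inside the attested enclave, so everything in between only ever sees ciphertext.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # In the real system this key pair is generated inside the TEE and bound to
    # the attestation report the client verifies before sending anything.
    enclave_priv = X25519PrivateKey.generate()
    enclave_pub = enclave_priv.public_key()

    # Client side: ephemeral ECDH against the attested enclave key, then AEAD-
    # encrypt the prompt. The operator's infrastructure only relays ciphertext.
    client_priv = X25519PrivateKey.generate()
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"private-processing-demo").derive(client_priv.exchange(enclave_pub))
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, b"summarize my unread threads", None)

    # Enclave side: derive the same key and recover the plaintext only inside the TEE.
    key_e = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                 info=b"private-processing-demo").derive(
                     enclave_priv.exchange(client_priv.public_key()))
    print(AESGCM(key_e).decrypt(nonce, ciphertext, None))  # b'summarize my unread threads'

The attestation step (not shown) is what ties enclave_pub to a specific, measured code image rather than to whatever the operator feels like running.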
ATechGuy · 20m ago
A few startups [1,2] also offer infra for private AI based on confidential computing from Nvidia and Intel/AMD.
1. https://tinfoil.sh
2. https://www.privatemode.ai
We’re into “can’t prove a negative” territory here. Yes, the scheme is explained in detail, yes it conforms to cryptographic norms, yes real people work on it and some of us know some of them..
..but how can FB prove it isn’t all a smokescreen, and requests are printed out and faxed to evil people? They can’t, of course, and some people like to demand proof of the negative as a way of implying wrongdoing in a “just asking questions” manner.
asadm · 3h ago
I mean you are not forced to?
If a company is trying to move their business to be more privacy-focused, at least we can avoid being dismissive.
2Gkashmiri · 2h ago
So this is FB explaining how they move your content from E2EE to the cloud and back? And not even FB knows the content?
Simple question: what if CSAM is sent to the AI? Would it stop, report to the authorities, or allow processing? Same for other bad stuff.
brookst · 1h ago
See: how Apple tried to solve this and generated massive outrage.