This type of thing is exactly why I don't go installing apps from every company that wants me to put something on my phone to get "reward points" or the like. Security is never perfect, and they clearly want additional data; why else bother with the phone app?
bigbadfeline · 1d ago
Forget apps, your major browser allows anybody with JS on the page you're visiting to connect to ports on your local network. That capability was used by these trackers, but it's a horrendous security breach on its own.
qingcharles · 1d ago
A neighbor needed me to fix something on his phone the other day, and I noticed he was getting full-screen ads every minute that would appear over everything. You couldn't hang up a call, call 911, send a text, nothing.
I tracked it down to an app he had called "7 Min Workouts" which was spamming the ads. Wild.
radicalbyte · 1d ago
Facebook installed a rootkit on many Androids which had the ability to send all of your debug-level system logging to Facebook, even if you never installed their shit. That data can be used to track everything you do on your phone.
We found this out (I was the first to recreate / prove it) when testing the COVID contact tracing apps in NL; at the time Google were logging the seeds to the main system log. That allowed anyone with access to said logs to build a real-time map of every Android user in the world who had the GAEN framework installed.
EDIT:
Here's the press release in English covering the app shutdown:
https://nltimes.nl/2021/04/29/coronamelder-app-taken-offline...
Here's a paper detailing Facebook's access on systems with no Facebook installed:
https://arxiv.org/pdf/1905.02713
I suppose your proof just didn't fit in the margin?
radicalbyte · 1d ago
Here's proof that we shut the app down whilst the issue was being fixed, in English:
https://nltimes.nl/2021/04/29/coronamelder-app-taken-offline...
Here's the paper showing that Facebook had access to the logging and a whole range of very suspect permissions:
https://arxiv.org/pdf/1905.02713
There are more details in Dutch, including the letters to Parliament we had to write covering the details. If you're Dutch, use Google.
Is this still in place? I've skimmed the paper: is the assertion that major hardware vendors are shipping Android devices with pre-installed (sometimes non-public) libraries (from Meta, for example) which have wide data-access permissions, including potentially the ability to decrypt sensitive wire/at-rest data (via certs)? And they are still doing this???
kfkdjajgjic · 1d ago
if (cookies.accepted) {
trackUser();
} else {
trackUserAnyway();
}
thinkingemote · 1d ago
I recently looked at how websites handled these simple "accept cookies" buttons. Turns out lots of them did look like your example, except both branches called trackUser(); the only effect of the button was to close the popup.
Even some (most?) off-the-shelf cookie consent libraries don't actually handle turning the cookies off.
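For contrast, a handler that actually honors the choice has to gate the tracking call and clean up anything dropped before consent. A minimal Python sketch; the names (`track_user`, `on_consent_choice`, `_tracker_id`) are illustrative, not any real consent library's API:

```python
# Illustrative sketch, not a real consent library's API.
cookies: dict[str, str] = {}  # stands in for the browser cookie jar

def track_user() -> None:
    cookies["_tracker_id"] = "xyz"  # stands in for the real tracking call

def on_consent_choice(accepted: bool) -> None:
    if accepted:
        track_user()
    else:
        # Declining must also remove anything dropped before the choice,
        # which is the part the broken libraries above skip.
        cookies.pop("_tracker_id", None)

on_consent_choice(False)
assert "_tracker_id" not in cookies   # decline really means no cookie
on_consent_choice(True)
assert "_tracker_id" in cookies       # accept enables tracking
```

The point of the sketch is that the decline branch does real work; a button whose only effect is closing the popup leaves `track_user()` running either way.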
If you must use these services, use their websites. The UX is diminished but, at least for me, that's a benefit because it results in me using them less.
qingcharles · 1d ago
The problem with a lot of services these days is that the website might offer only a tiny fraction of the functionality of the mobile app.
Does anyone know a sane way to monitor the API calls from an Android app so I can see the endpoints that aren't available on the web apps?
qnleigh · 1d ago
Do we know how these apps were able to track browser activity? The only clues I see in the article are that it was on a per-website basis, and that it worked in incognito mode.
I'm especially curious if Google shares any of the blame. Was this a known issue and they assumed no one would actually exploit it, or a subtle bug that only just got caught? Either way it's a huge security vulnerability.
gusfoo · 1d ago
> Do we know how these apps were able to track browser activity? The only clues I see in the article are that it was on a per-website basis, and that it worked in incognito mode.
The app listens on localhost:xxyyzz when backgrounded. You open your browser and visit onesite.com, then differentsite.com. The JS each site embeds for its Facebook functionality/ads runs in your browser and requests an asset from your localhost, with <your ID on that website> passed as arguments. The app receives the args and sends them off to HQ. That ties your signed-in account on the app to your activity on all the websites using this. And to be clear, FB Pixel calls are tagged with the 'event' you're performing, like "checkout", "just looking", "donate", etc. While I don't know for sure, I'd assume the fact that you're in Incognito Mode is just another field in the data report; nothing would stop it.
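The relay described above can be sketched end-to-end in a few lines. This is a hypothetical Python reconstruction, not Meta's actual code: the account name, the query parameters (`site`, `uid`), and the asset path are invented, the port is picked at random, and a plain HTTP request plays the role of the page's JS. The researchers found the real implementation also abused WebRTC tricks, which this sketch doesn't attempt.

```python
import threading
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

APP_ACCOUNT = "alice@example.com"  # hypothetical: identity the native app knows
collected = []                     # pairings the app would forward "to HQ"

class LocalTracker(BaseHTTPRequestHandler):
    """Stands in for the app's localhost listener."""
    def do_GET(self):
        # The page's JS smuggles the first-party ID in the query string
        # of an innocuous-looking asset request.
        qs = urllib.parse.parse_qs(urllib.parse.urlparse(self.path).query)
        collected.append((APP_ACCOUNT, qs["site"][0], qs["uid"][0]))
        self.send_response(204)  # no body needed; the data already arrived
        self.end_headers()
    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), LocalTracker)  # 0 = any free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# What the embedded JS on a partner site effectively does:
urllib.request.urlopen(
    f"http://127.0.0.1:{port}/asset.gif?site=onesite.com&uid=abc123")
server.shutdown()
print(collected)  # -> [('alice@example.com', 'onesite.com', 'abc123')]
```

Note that nothing in this path touches cookies or the network beyond the loopback interface, which is why Incognito mode and cookie clearing don't interfere with it.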
paxys · 1d ago
Using analytics scripts that website owners injected into the page. It's just that rather than uploading the collected data to facebook.com, the script sent it to localhost:5678, and the Facebook app was listening on that port.
john01dav · 1d ago
Is this a violation of the computer fraud and abuse act? It seems to me like it might be because they're literally breaking out of a sandbox and viewing data from other apps. Other cases of that (like breaking out of VMs on a cloud provider) are clear violations. Sometimes people see violations of the law by a big corporation against people as less bad than when a single person does the same thing, but that's unreasonable -- if anything the former is more harmful due to potential for scale.
senkora · 1d ago
This question sent me down a Wikipedia tangent, where I learned a new term that I plan to start using, “Organi-cultural deviance”:
> Organi-cultural deviance is a recent philosophical model used in academia and corporate criminology that views corporate crime as a body of social, behavioral, and environmental processes leading to deviant acts.
> This reflects the view that corporate cultures may encourage or accept deviant behaviors that differ from what is normal or accepted in the broader society.
https://en.m.wikipedia.org/wiki/Corporate_crime
In other words, whether or not what Facebook is doing is a crime, they are doing it because their corporate culture fundamentally believes that it is okay and acceptable for them to do it, even though wider society doesn’t agree.
And that’s the justification for filing criminal charges against organizations. It is the culture of the organization that encourages criminal acts, so the organization itself should be charged, and dissolved (the corporate death penalty) if necessary.
paxys · 1d ago
I don't see how. Everything they are doing is explicitly allowed by the OS. If there is a violation it is under privacy and data collection laws.
ImPostingOnHN · 1d ago
Is it explicitly allowed by the user?
Every working system vulnerability is explicitly allowed by the code, or else it wouldn't work.
freejazz · 1d ago
> Everything they are doing is explicitly allowed by the OS
Do you mean that the OS provides for this possibility in a technical sense?
Or do you mean that the OS explicitly gave them permission to do this exact thing (and that permission was from the user?).
That something is technically possible isn't what makes it not a violation of the computer fraud and abuse act. Similarly, leaving my front door unlocked doesn't mean you aren't trespassing.
spencerflem · 1d ago
I'm with you fwiw. But I wouldn't hold my breath waiting for a conviction.
gleenn · 1d ago
Why is this de facto the case for so many things? Is it a matter of finding a lawyer? Do these enforcement agencies just not care, or not have enough resources to battle Meta? It's exhausting that this gets repeated so often for no apparent reason. I understand that the fine might not be large enough to stop them, but it feels like all these things just get swept under the rug, and gross violations of privacy which are against the law never get actioned. (Edit: forgot a word)
input_sh · 1d ago
FTC's yearly budget is under half a billion, companies can simply throw more lawyers at a problem until the next election. Rinse and repeat.
Case in point: the FTC did start an antitrust lawsuit against Facebook in December 2020, and the trial finally started last month. Hence Zuckerberg being at the inauguration, getting rid of fact-checkers, etc.
rgreek42 · 1d ago
There's only one law in America: capital accumulation shall not be impeded.
MangoToupe · 1d ago
Nobody wants to kill the golden goose.
EDIT: well, that's probably not true, there are probably many people working for e.g. the FTC that would love to go after big companies. This is certainly true for the CFPB. But collectively, there's very little will from either party to do so. I mean, a large portion of congress holds Meta stock; some got rich off it.
jrs235 · 1d ago
This administration is more interested in using executive agencies to advance a political power-consolidating agenda than in protecting consumers and citizens from corporate abuse and violations of privacy. Mega-corporations and government (the US Executive branch) are getting in bed and partnering to exploit data to maintain and expand power and control. Where and when the US Constitution protects citizens and prevents the government from doing something, the government will lean on mega-corporations to enforce its policy in the private sector. Social credit, like communist China's, is coming to the US. While the GOP has been resurrecting McCarthyism, they have been stoking the fears to make it a reality. Projection at its finest. The hypocrisy is mind-blowing, and the cognitive dissonance in those that lap up the FUD is extremely strong and empowering for the malicious "elites" using it.
sneak · 1d ago
Prosecutorial discretion is tremendous. This won't be enforced against Meta because Meta is huge and powerful. If you become huge and powerful, many of the laws stop applying to you, because prosecutors know it's frowned upon to attack the ownership class.
01HNNWZ0MV43FF · 1d ago
A Five Whys might be appropriate.
Why won't Meta be punished for this? Because they won't be convicted.
Why won't they be convicted? Because the court system is corrupt, trials are expensive, and Meta has more money to throw behind lawyers than anyone who cares to sue them.
Why is the court system corrupt? Because corruption is spreading through every aspect of the US government, as corruption, like black mold in a kitchen sink, tends to do when not actively fought.
Why are trials expensive? Because the law is complicated and there isn't much support for privacy, so this isn't open-and-shut like a murder case, it would be a long drag-out fight between the few people with little money who care about privacy, and a huge corporation powered by the fear and apathy and helplessness of billions of people.
Why does Meta have so much money? Because nobody stopped them when it would have been easy. Because corruption of democracy and erosion of privacy have been ongoing for decades, maybe forever.
Why isn't corruption being actively fought? The working class (net worth under, say, 5 million) is on the back foot against it. Nobody has time to vote, protest, organize, unionize, run for office, fix things, help the homeless, when you can't afford a damned thing, when a pregnancy is a career-ending problem, when you can't even get an abortion safely in many states, when you're one ambulance bill from a "not a debtor's prison" constructed debtor's prison.
Why isn't there much support for privacy? Same reasons, plus public education lags behind on this issue. Tech moved very fast the last few decades, and there's no public education for adults, so people don't really get sat down and told what the risk is of mass surveillance. The stories get written, sometimes, but not listened to. Nothing very serious gets done when you can always flip channels and see something funny, which is what Meta and TikTok sell to people. You can always flip channels and tune out, and it might take years to say to yourself "I think I have a crippling Internet addiction".
What's the actionable?
1. Always fight anyway. Get stuff off of Meta. Publish Own Site, Syndicate Elsewhere. Help friends get away. Do what you can.
2. Do things in physical reality.
3. Vote in every god-damned election you can vote in. Vote blue even if it's a shit sandwich, because most Americans are stuck under FPTP voting, and life is better under Democrats, do I need to explain the word "gradient" to a website full of startup hackers?
threecheese · 1d ago
It appears that a Meta app is communicating with a Meta client library and shipping Meta-sourced cookies, which from a cookie-consent perspective should apply only to the first party hosting the JavaScript and using the cookie, and which is still nominally true from the Meta Ads client's side. So the bad-actor argument is that Meta is collecting this data widely across many first parties (Meta customers), but wouldn't they have this data anyway? If they aren't allowing two first parties to know about each other's user data, then aren't they aligning with the legal frameworks?
sneak · 1d ago
No, those sorts of laws don't apply to the ownership class, generally speaking.
ajross · 1d ago
I think it's sort of unclear if "breaking out of a sandbox" is what's happening. Obviously no one expects this to be happening. But the features are known, and mostly desired, things:
* Apps can open local TCP/UDP ports on the device to talk to each other. That's good, often. It's a valuable capability, just like access to shared storage is. And it doesn't leak any info the app doesn't want to leak.
* Apps can get a reasonably unique fingerprint for the device they're running on. Again, that's desired: backends want to know who they're talking to, and that it's not a MitM or hijacked account.
* Browsers can talk to those local servers too. Again, this seems useful to me, as long as native apps have capabilities that web apps lack, they should be able to offer them as extensions to the web API.
So basically we have a situation where Facebook's native app tells Facebook's web app (or rather Facebook's web code being distributed by Facebook's partner sites) what machine it's operating on (and maybe some more details about the history, like user account cookies, etc...), in contravention of the user's expectations about what they told the partner site to store about them.
It's that last bit that's the bad part. But it's not really about device security as I see it. Facebook broke the spirit of the cookie tracking rule, but not the letter.
abeyer · 1d ago
> Browsers can talk to those local servers too. Again, this seems useful to me, as long as native apps have capabilities that web apps lack, they should be able to offer them as extensions to the web API.
Not sure if I agree on that one... at a minimum I think that should be behind a user consent prompt.
ajross · 1d ago
Maybe, but it's worth pointing out that even that wouldn't prevent the exploit in question. Facebook didn't have to transfer this cookie locally; the app and web site could just as easily have exchanged it via some RESTful API server somewhere (well, on the same origin as the Facebook JS in question).
The reason they didn't do that is pure obscurity: if they blasted a tracking cookie over the internet they would have been caught faster. Trying to design "security" features by pushing the "obscurity" boundary around is usually wasted effort.
swiftcoder · 1d ago
I'm not sure I agree about any of these being desirable as specified.
* App-to-app network communication really only seems sane between apps from the same vendor.
* Unique fingerprinting of the device is explicitly not desirable. They should have access to a unique anonymous identifier for the device, which cannot be compared between apps from multiple vendors.
* Browsers talking to native apps again seems like something that should only be possible if both the web domain and the app belong to the same vendor. I don't have a huge problem with Facebook.com talking to the Facebook app - I do have a problem with random 3rd-party sites talking to the Facebook app.
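The second bullet above (a per-vendor identifier that can't be joined across vendors) can be sketched concretely. One way an OS could derive such IDs is an HMAC over the vendor name, keyed by a per-device secret that never leaves the OS; the secret and package names below are invented for illustration:

```python
import hashlib
import hmac

# Invented for illustration: in a real OS this would be generated per
# device and never exposed to apps.
DEVICE_SECRET = b"per-device secret held by the OS"

def vendor_scoped_id(vendor_package: str) -> str:
    """Stable for one (device, vendor) pair; useless for cross-vendor joins."""
    return hmac.new(DEVICE_SECRET, vendor_package.encode(),
                    hashlib.sha256).hexdigest()

# The same vendor always sees the same ID on this device...
assert vendor_scoped_id("com.facebook.katana") == \
       vendor_scoped_id("com.facebook.katana")
# ...but two vendors cannot correlate their IDs without the device secret.
assert vendor_scoped_id("com.facebook.katana") != \
       vendor_scoped_id("com.yandex.browser")
```

Android's App Set ID works roughly along these lines, scoping a stable identifier to a developer account on a given device instead of handing every app the same device fingerprint.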
Nextgrid · 1d ago
Malware also takes advantage of known and useful features. Being able to read and write your files (to steal or encrypt them), being able to open sockets (to connect to the C&C server) and so on. Which is why we generally look at the intention rather than how it was achieved technically.
tortilla · 1d ago
How would you explain this to the layperson or normie?
It’s like Meta giving every AirBnB host a free toaster as a gift — but secretly, the toaster has a hidden microphone and internet connection that listens in on every guest’s conversation, then beams that info back to Meta.
yencabulator · 14h ago
Your phone's web browser lets any website talk to the apps on your phone. The app knows who you are, and Facebook tracking is on practically all websites, so Zuck tracks you on all websites, even in Incognito mode.
m463 · 1d ago
this kind of stuff is everywhere.
iOS does the same thing. When you install an app, it's allowed to deep-link its URLs.
For example, if you install the Amazon app, any Amazon link opened on your phone can be intercepted by it (in Messages, Mail, the browser, etc.).
I think the same kinds of things can be done with location services. A store's app can do fine-grained Bluetooth location with iBeacons in their store.
I don't know the state of the art in cross-application tracking. I'm pretty sure SDKs added to multiple apps can do the same sort of thing.
At some point a number of years ago, I just stopped installing apps.
iammrpayments · 1d ago
I have no idea how Meta is so successful; managing ads in their business dashboard is such a painful experience that I gave up testing new ads after a while. They also keep trying to push features designed to make you spend more money once your ad is running, and their "representatives" will keep calling you with "strategies", but 99% of the time they have zero idea of how it works. If your account gets banned, good luck finding a real human being who can solve your issue.
pier25 · 1d ago
A couple of months ago I spent like $25 in a campaign for a small product I launched. It translated into literally zero real traffic let alone sales.
I got likes from what looked like bot accounts from random countries (Kazakhstan etc.) even when I was quite certain I had limited the ads to certain countries. Then I was spammed with scams on the ads dashboard that appeared out of nowhere.
Absolute waste of money.
GiorgioG · 1d ago
I deleted Facebook and Instagram from my iPhone two weeks ago. I'm done getting ads about something my wife and I were discussing just a little while ago (verbal conversation.) That's not targeting, that's an invasion of privacy. Fuck you Meta.
ohlookcake · 1d ago
"I'm getting Facebook ads for something we were just talking about verbally, so it must be listening in" has been debunked time and again. Any combination of other browsing-based inference, Baader-Meinhof, and pure coincidence would be at play.
>Google, which owns the Android operating system, confirmed the covert activity to Sky News.
>It said Meta and Yandex used Android's capabilities "in unintended ways that blatantly violate our security and privacy principles".
Did Google immediately remove these apps with blatant security and privacy violations from their app store?
I wish there was a way to prevent an app from running in the background.
SoftTalker · 1d ago
> I wish there was a way to prevent an app from running in the background.
Uninstall the app.
Johnny555 · 1d ago
That only works for apps I don't need or want. But there are some apps on my phone that I want; I just don't need them to be active in the background, only when I'm using them.
sherdil2022 · 1d ago
Why do companies still risk doing these kinds of things anymore?
Externally we have some amazing security researchers who look out and dig these things out - and try to hold the companies responsible.
And what is the internal process? Wouldn't these intrusive and privacy violating features (to track users for ex) be captured in design docs, emails, chats, code changes/PRs - and up for discovery? Aren't employees saying anything against these features? What culture are they building? What leadership are they demonstrating? It can't all be about money by any cost damn the users and their privacy/rights, right?
i80and · 1d ago
> Aren't employees saying anything against these features?
Likely, but it's just demoralized workplace grousing. Tech employees are statistically sycophantic, and the exceptions get burned out or tossed out by executives who say things like "why don't they just shut up and work"
burningChrome · 1d ago
Companies also have their executives do things like tell you that they only use data ethically and responsibly, then turn around and sell it to the highest bidder.
A lot of times unless you're on a team or in a department that's doing this stuff for nefarious purposes, you wouldn't know.
Anecdotal evidence:
I worked in RPA (robotic process automation) for a large software company. We were tasked with automating a decision process. The decision process was completely benign. Something like a payments process. Pretty easy. We finished it in a few months start to finish.
We hand it off, it goes into production. A lot of back-slapping and high-fives happen. I go on with my work and move on from the company a few months later. About a year later, a guy I worked with there emails me a news story about a company using the RPA program we built to auto-deny insurance claims for their clients. Massive class-action lawsuit.
The insurance company had taken our script and just re-purposed it to auto-deny a certain percentage of claims. I was shocked and dismayed that someone would use the stuff we built for their own shady business practices. It was a huge wakeup call that even when you're building something completely benign, a company can pay millions for it and reuse it to do bad things to people, and you'll never know until it's too late.
SoftTalker · 1d ago
And people have done way worse than this, at the behest of their leaders, without pushback. Most people are followers.
djaychela · 1d ago
> Most people are followers.
And the tech industry leaders are mostly spoiled, entitled mill/billionaires who eject anyone who crosses them. A team of moral imbeciles.
Society is screwed until Zuckerberg, Bezos, Musk et al are stripped of their power and wealth.
Which will never happen.
neepi · 1d ago
It’s because they are assholes and think they can get away with it. There’s no other reason.
Now imagine what would happen without any regulation at all…
SoftTalker · 1d ago
Businesses move faster than regulation. Not that we give up on regulation, but the problem is deeper. As John Adams wrote: We have no Government armed with Power capable of contending with human Passions unbridled by morality and Religion. Avarice, Ambition, Revenge or Gallantry, would break the strongest Cords of our Constitution as a Whale goes through a Net. Our Constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other.
yencabulator · 14h ago
Nah that's just poor regulation. Try telling a Nordic judge that what you did was technically not illegal, you'll find you're still liable. It's all about intent and spirit of the law vs letter of the law.
yua_mikami · 1d ago
Sadly they do get away with it.
dylan604 · 1d ago
There is no thinking. They do get away with it. Why are you trying to tap dance around this?
neepi · 1d ago
Well I have some hope in time that they stop getting away with it and are punished for getting away with it for a long time.
dylan604 · 1d ago
either I'm a jaded summabitch or you must be new. there is no evidence your hope is warranted in the vast historical evidence of evilCorp getting away with it
wyldberry · 1d ago
The more well-compensated someone is, the less likely they are to speak out. The employees are more than comfortable.
Also, the incentive structure for reward (stock price etc.) is predicated on squeezing every last bit of monetization out. They already know ahead of time how much money a new feature could bring in, and the potential litigation cost beforehand. If revenue > litigation/fine expense, they are going to do it.
joshstrange · 1d ago
> Why do companies still risk doing these kinds of things anymore?
Money
> Externally we have some amazing security researchers who look out and dig these things out - and try to hold the companies responsible.
At worst they get some bad PR but most people aren't listening to security researchers (or don't care). Companies aren't being held "responsible" for anything, maybe a small fine but that's just the cost of doing business.
> And what is the internal process? Wouldn't these intrusive and privacy violating features (to track users for ex) be captured in design docs, emails, chats, code changes/PRs - and up for discovery?
Yes, probably all these things exist. See also Apple talking about how to make linking-out sound as scary as possible. There are chats with employees brainstorming how to make it scarier to end-users to drive them back into the IAP flow.
> Aren't employees saying anything against these features?
Some are afraid of losing their job or getting branded as a non-team-player. Some don't care. It can be easy to get lost in "implement the spec" instead of "is what I'm building morally ok?"
> What culture are they building?
A bad one.
> What leadership are they demonstrating?
That they only care about money and what they can get away with. Even if they get caught the profits outstrip the fine by a large margin. "It's just numbers/business".
> It can't all be about money by any cost damn the users and their privacy/rights, right?
It is.
amelius · 1d ago
At this point I want to see some physical wrist slapping. Select some wronged user and invite them to slap Zuck's wrists with a wooden stick and put it on YouTube. This will be more satisfactory than the fines which are just the cost of doing business.
neepi · 1d ago
I'd rather see him lose all his money and protection and be as vulnerable as his users are.
amelius · 1d ago
Force him to add a rotten tomato to his brand logos so users can see what they are dealing with. Or take away his trademarks altogether.
Brands are about trust, so it makes sense.
Hilift · 1d ago
$70 billion per year net profit. Basically printing money at this point.
bobek · 1d ago
Well, it is the same as all those examples of people "just doing their jobs" in concentration camps: being compliant, focused on execution of their ~orders~ process. Interestingly, many people are pre-emptively harsher than required/requested. Check out "Those Who Said No" (DOI 10.2307/1429971).
svachalek · 1d ago
It's been clear for a long time what Meta is about. Anyone working there has long ago compromised any feelings they have for it in exchange for a paycheck. And it seems sucking up to whatever political powers are winning today has served to keep them out of governmental interference.
add-sub-mul-div · 1d ago
Can we not treat it as axiomatic by now that there is little to no risk of accountability from either the law or from undiscerning end users?
altcognito · 1d ago
Frankly, because if we had real consequences, Mark Zuckerberg would have been in jail for "accidentally" letting Cambridge Analytica scrape their data "unknowingly". Or he would have faced consequences at Harvard for scraping user data. He's obviously not the only one, but absolutely one of the higher-profile cases.
Covert web-to-app tracking via localhost on Android - https://news.ycombinator.com/item?id=44169115 - June 2025 (308 comments)
Meta pauses mobile port tracking tech on Android after researchers cry foul - https://news.ycombinator.com/item?id=44175940 - June 2025 (26 comments)
Does anyone know a sane way to monitor the API calls from an Android app so I can see the endpoints that aren't available on the web apps?
I'm especially curious if Google shares any of the blame. Was this a known issue and they assumed no one would actually exploit it, or a subtle bug that only just got caught? Either way it's a huge security vulnerability.
The App listens on localhost:xxyyzz when backgrounded. You open your browser and go to onesite.com and then differentsite.com the ID you are known as on those two sites is transmitted by having the JS on each site that supports Facebook functionality / ads etc for that site, and runs in your browser, make a request for an asset on your localhost with args <your ID on that website>. The app gets the args, and sends it off to HQ. That ties your signed-in account on the app to your activity on all the websites that was using this. And to be clear, FB Pixel calls are tagged with the 'event' that you're doing like "checkout" "just looking" "donate" etc. While I don't know for sure, I'd assume that the fact you're in Incognito Mode is just an aspect of the data report, I would say. Nothing would stop it.
> Organi-cultural deviance is a recent philosophical model used in academia and corporate criminology that views corporate crime as a body of social, behavioral, and environmental processes leading to deviant acts.
> This reflects the view that corporate cultures may encourage or accept deviant behaviors that differ from what is normal or accepted in the broader society.
https://en.m.wikipedia.org/wiki/Corporate_crime
In other words, whether or not what Facebook is doing is a crime, they are doing it because their corporate culture fundamentally believes that it is okay and acceptable for them to do it, even though wider society doesn’t agree.
And that’s the justification for filing criminal charges against organizations. It is the culture of the organization that encourages criminal acts, so the organization itself should be charged, and dissolved (the corporate death penalty) if necessary.
Every working system vulnerability is explicitly allowed by the code, or else it wouldn't work.
Do you mean that the OS provides for this possibility in a technical sense?
Or do you mean that the OS explicitly gave them permission to do this exact thing (and that permission was from the user?).
That something is technical possible isn't what makes something not a violation of the computer fraud and abuse act. Similarly, leaving my front door unlocked doesn't mean you aren't trespassing.
Case in point, FTC did start an antitrust lawsuit against Facebook in December 2020 and the trial finally started last month. Hence, Zuckerberg being at the inauguration, getting rid of fact-checkers etc.
EDIT: well, that's probably not true, there are probably many people working for e.g. the FTC that would love to go after big companies. This is certainly true for the CFPB. But collectively, there's very little will from either party to do so. I mean, a large portion of congress holds Meta stock; some got rich off it.
Why won't Meta be punished for this? Because they won't be convicted.
Why won't they be convicted? Because the court system is corrupt, trials are expensive, and Meta has more money to throw behind lawyers than anyone who cares to sue them.
Why is the court system corrupt? Because corruption is spreading through every aspect of the US government, as corruption, like black mold in a kitchen sink, tends to do when not actively fought.
Why are trials expensive? Because the law is complicated and there isn't much support for privacy, so this isn't open-and-shut like a murder case, it would be a long drag-out fight between the few people with little money who care about privacy, and a huge corporation powered by the fear and apathy and helplessness of billions of people.
Why does Meta have so much money? Because nobody stopped them when it would have been easy. Because corruption of democracy and erosion of privacy have been ongoing for decades, maybe forever.
Why isn't corruption being actively fought? The working class (net worth under, say, 5 million) is on the back foot against it. Nobody has time to vote, protest, organize, unionize, run for office, fix things, or help the homeless when you can't afford a damned thing, when a pregnancy is a career-ending problem, when you can't even get an abortion safely in many states, when you're one ambulance bill away from a debtor's prison in all but name.
Why isn't there much support for privacy? Same reasons, plus public education lags behind on this issue. Tech moved very fast the last few decades, and there's no public education for adults, so people don't really get sat down and told what the risk is of mass surveillance. The stories get written, sometimes, but not listened to. Nothing very serious gets done when you can always flip channels and see something funny, which is what Meta and TikTok sell to people. You can always flip channels and tune out, and it might take years to say to yourself "I think I have a crippling Internet addiction".
What's the actionable?
1. Always fight anyway. Get stuff off of Meta. Publish Own Site, Syndicate Elsewhere. Help friends get away. Do what you can.
2. Do things in physical reality.
3. Vote in every god-damned election you can vote in. Vote blue even if it's a shit sandwich, because most Americans are stuck under FPTP voting, and life is better under Democrats, do I need to explain the word "gradient" to a website full of startup hackers?
* Apps can open local TCP/UDP ports on the device to talk to each other. That's good, often. It's a valuable capability, just like access to shared storage is. And it doesn't leak any info the app doesn't want to leak.
* Apps can get a reasonably unique fingerprint for the device they're running on. Again, that's desired: backends want to know who they're talking to, and that it's not a MitM or hijacked account.
* Browsers can talk to those local servers too. Again, this seems useful to me, as long as native apps have capabilities that web apps lack, they should be able to offer them as extensions to the web API.
So basically we have a situation where Facebook's native app tells Facebook's web app (or rather Facebook's web code being distributed by Facebook's partner sites) what machine it's operating on (and maybe some more details about the history, like user account cookies, etc...), in contravention of the user's expectations about what they told the partner site to store about them.
It's that last bit that's the bad part. But it's not really about device security as I see it. Facebook broke the spirit of the cookie tracking rule, but not the letter.
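A minimal sketch of the relay mechanism described above (not Meta's actual code): the "native app" listens on a localhost port, and the "browser side" — here just a plain socket client standing in for page JavaScript — hands it a first-party web cookie. The port number and cookie value are made up for illustration; the real technique reportedly used WebRTC/STUN tricks rather than a bare TCP connect.

```python
import socket
import threading

PORT = 12387                      # hypothetical fixed port the app claims
received = []                     # what the "native app" collected

# The installed app binds localhost; no special permission is needed.
server = socket.create_server(("127.0.0.1", PORT))

def accept_once():
    conn, _ = server.accept()
    with conn:
        received.append(conn.recv(1024).decode())
    server.close()

t = threading.Thread(target=accept_once)
t.start()

# Stand-in for JS on a third-party page: it knows the user's
# first-party cookie and relays it to whatever is listening locally.
with socket.create_connection(("127.0.0.1", PORT)) as c:
    c.sendall(b"_fbp=fb.1.1700000000.123456789")

t.join()
print(received)   # the app can now link the web identity to the device
```

The point of the sketch is that nothing on this path asks the user anything: any page-origin code that can reach 127.0.0.1 can hand data to any app that got there first.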
Not sure if I agree on that one... at a minimum I think that should be behind a user consent prompt.
The reason they didn't do that is pure obscurity: if they blasted a tracking cookie over the internet they would have been caught faster. Trying to design "security" features by pushing the "obscurity" boundary around is usually wasted effort.
* App-to-app network communication really only seems sane between apps from the same vendor.
* Unique fingerprinting of the device is explicitly not desirable. They should have access to a unique anonymous identifier for the device, which cannot be compared between apps from multiple vendors.
* Browsers talking to native apps again seems like something that should only be possible if both the web domain and the app belong to the same vendor. I don't have a huge problem with Facebook.com talking to the Facebook app - I do have a problem with random 3rd-party sites talking to the Facebook app.
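The per-vendor identifier idea in the second bullet can be sketched like this: the OS derives each app-facing ID by keying a device secret with the app's vendor, so two vendors can never join their records on a common key. The names and derivation scheme here are illustrative, not a real OS API.

```python
import hashlib
import hmac

DEVICE_SECRET = b"random-per-device-secret"  # held by the OS only

def app_scoped_id(vendor: str) -> str:
    """Stable per-vendor ID; useless for cross-vendor correlation."""
    return hmac.new(DEVICE_SECRET, vendor.encode(), hashlib.sha256).hexdigest()

fb = app_scoped_id("com.facebook")
am = app_scoped_id("com.amazon")
assert fb != am                              # vendors can't join on it
assert fb == app_scoped_id("com.facebook")   # but it's stable per vendor
```

Each vendor still gets the stable identifier its backend legitimately needs, without it doubling as a global device fingerprint.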
It’s like Meta giving every AirBnB host a free toaster as a gift — but secretly, the toaster has a hidden microphone and internet connection that listens in on every guest’s conversation, then beams that info back to Meta.
iOS does the same thing: when you install an app, it can register to intercept deep links for its URLs.
For example, if you install the Amazon app, any Amazon link opened on your phone (from Messages, Mail, the browser, etc.) can be intercepted by it.
I think the same kinds of things can be done with location services. A store app can do fine-grained Bluetooth location with iBeacons in their store.
I don't know the state-of-the-art in cross-application tracking. I'm pretty sure sdks added to multiple apps can do the same sort of thing.
At some point a number of years ago, i just stopped installing apps.
I got likes from what looked like bot accounts from random countries (Kazakhstan etc.) even when I was quite certain I had limited the ads to certain countries. Then I was spammed with scams that appeared out of nowhere on the ads dashboard.
Absolute waste of money.
https://english.elpais.com/technology/2025-06-03/the-covert-...
https://localmess.github.io/
https://news.ycombinator.com/item?id=44175940
>It said Meta and Yandex used Android's capabilities "in unintended ways that blatantly violate our security and privacy principles".
Did Google immediately remove these apps with blatant security and privacy violations from their app store?
I wish there was a way to prevent an app from running in the background.
Uninstall the app.
Externally we have some amazing security researchers who look out and dig these things out - and try to hold the companies responsible.
And what is the internal process? Wouldn't these intrusive and privacy-violating features (to track users, for example) be captured in design docs, emails, chats, code changes/PRs - and up for discovery? Aren't employees saying anything against these features? What culture are they building? What leadership are they demonstrating? It can't all be about money at any cost, damn the users and their privacy/rights, right?
Likely, but it's just demoralized workplace grousing. Tech employees are statistically sycophantic, and the exceptions get burned out or tossed out by executives who say things like "why don't they just shut up and work"
A lot of times unless you're on a team or in a department that's doing this stuff for nefarious purposes, you wouldn't know.
Anecdotal evidence:
I worked in RPA (robotic process automation) for a large software company. We were tasked with automating a decision process. The decision process was completely benign. Something like a payments process. Pretty easy. We finished it in a few months start to finish.
We hand it off, it goes into production. A lot of back slapping and high fives happen. I go on with my work, and move on from the company a few months later. About a year later, a guy I worked with at the company emails me a news story about a company using the RPA program we built to auto-deny insurance claims for their clients. Massive class action lawsuit.
The insurance company had taken our script and re-purposed it to auto-deny a certain percentage of claims. I was shocked and dismayed that someone would use the stuff we built for their own shady business practices. It was a huge wakeup call: even when you're building something completely benign, a company can pay millions for it and reuse it to do bad things to people, and you'll never know until it's too late.
And the tech industry leaders are mostly spoiled, entitled mill/billionaires who eject anyone who crosses them. A team of moral imbeciles.
Society is screwed until Zuckerberg, Bezos, Musk et al are stripped of their power and wealth.
Which will never happen.
Now imagine what would happen without any regulation at all…
Also, the incentive structure for reward (stock price etc.) is predicated on squeezing every last bit of monetization out. They already know ahead of time how much money a new feature could bring in, and the potential litigation cost beforehand. If revenue > litigation/fine expense, they are going to do it.
Money
> Externally we have some amazing security researchers who look out and dig these things out - and try to hold the companies responsible.
At worst they get some bad PR but most people aren't listening to security researchers (or don't care). Companies aren't being held "responsible" for anything, maybe a small fine but that's just the cost of doing business.
> And what is the internal process? Wouldn't these intrusive and privacy violating features (to track users for ex) be captured in design docs, emails, chats, code changes/PRs - and up for discovery?
Yes, probably all these things exist. See also Apple talking about how to make linking-out sound as scary as possible. There are chats with employees brainstorming how to make it scarier to end-users to drive them back into the IAP flow.
> Aren't employees saying anything against these features?
Some are afraid of losing their job or getting branded as a non-team-player. Some don't care. It can be easy to get lost in "implement the spec" instead of "is what I'm building morally ok?"
> What culture are they building?
A bad one.
> What leadership are they demonstrating?
That they only care about money and what they can get away with. Even if they get caught the profits outstrip the fine by a large margin. "It's just numbers/business".
> It can't all be about money at any cost, damn the users and their privacy/rights, right?
It is.
Brands are about trust, so it makes sense.
https://en.wikipedia.org/wiki/Obedience_to_Authority:_An_Exp...
I recommend reading the book, it has a ton of useful stuff in it beyond what everyone knows about the Milgram experiment.
As far as I can tell, that's the mandate.
people are usually motivated by more and faster
this means whatever opportunities present themselves for short-term more and faster will have no shortage of takers
TL;DR myopic greed
Companies are something else beyond people, which has a mind of its own. And leadership selects for sociopathy.
It's really not that complicated, if I squint hard enough it looks like an obvious tragedy of the commons variant.
https://en.wikipedia.org/wiki/Facebook%E2%80%93Cambridge_Ana...