Ask HN: Why isn't capability-based security more common?
Capability-based security might offer an alternative: software should not have access to anything it hasn't been explicitly granted. I.e. "classic" desktop security is essentially a blacklist model (everything is possible unless explicitly restricted, e.g. via a sandbox), while capability-based security is more like a whitelist.
At the programming-language level it's usually known as the object-capability model, and there are a number of programming languages that implement it: https://en.m.wikipedia.org/wiki/Object-capability_model
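To make that concrete, here's a minimal Python sketch of the object-capability style (all names are made up, and note that real Python still has ambient authority since any code can call open() itself; this only illustrates the discipline an ocap language would actually enforce):

    class ReadOnlyFile:
        """A capability: holding a reference to this object *is* the permission to read."""
        def __init__(self, path):
            self._path = path  # chosen by whoever created the capability, not by its users

        def read(self) -> bytes:
            with open(self._path, "rb") as f:
                return f.read()

    def word_count(doc: ReadOnlyFile) -> int:
        # No ambient authority here: this function can only touch the one file
        # whose capability it was explicitly handed, nothing else.
        return len(doc.read().split())

    # The caller decides exactly which authority to delegate:
    report = ReadOnlyFile("report.txt")  # stand-in for a file the caller owns
    print(word_count(report))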
The question: why isn't it more popular? It doesn't even seem to be widely known, let alone used. (Aside from isolated examples.)
Is there any chance it would be widely adopted?
I guess one objection is that people don't want to manually configure security. But perhaps it can be integrated into the normal UX if we really think about it: e.g. if you select a file using a system-provided file picker, that selection would automatically grant access to that file, since the user has explicitly authorized it.
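A rough sketch of that picker idea (the "powerbox" pattern in the ocap literature), reusing the ReadOnlyFile capability from the sketch above; system_file_picker() is hypothetical and just stands in for an OS-provided dialog:

    def system_file_picker() -> ReadOnlyFile:
        """Hypothetical OS-provided dialog (a 'powerbox').

        It runs with the user's full authority, shows the usual file dialog,
        and hands back a capability for the one file the user picked.
        """
        chosen_path = "tax-return.pdf"  # stand-in for whatever the user selects
        return ReadOnlyFile(chosen_path)

    # The app asks for "a file"; the user's act of choosing one is also the act
    # of granting access to it - no separate permission prompt, and no raw path
    # the app could reuse to open other files.
    doc = system_file_picker()
    data = doc.read()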
https://www.spritely.institute/
https://files.spritely.institute/papers/spritely-core.html
Let’s use Apple as an example, as they tend to do major transitions on a regular basis.
So, let's say that the top tier has already approved the new security model.
Now, how to do it?
My understanding is that most if not all APIs would have to be changed or replaced. So that's pretty much a new OS, one that needs new apps (if the APIs change, you cannot simply recompile the apps).
Now, if you expose the existing APIs to the new OS/apps, then what's the gain?
And if you don't expose them, then you basically need a VM. I mean, I don’t know Darwin syscalls, but I suspect you might need new syscalls as well.
And so you end up with a brand new OS that lives in a VM and has no apps. So it's likely order(s?) of magnitude more profitable to just harden the existing platforms.
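To make the API point above concrete, here's a before/after sketch (app_dir_cap and its open() method are made up for illustration): once basic operations stop taking absolute paths and start requiring capabilities, every call site changes shape, which is why a simple recompile wouldn't get you there.

    # Classic, ambient-authority style: any code in the process can name any path.
    def load_config_classic():
        with open("/etc/myapp/config.ini") as f:  # hypothetical path
            return f.read()

    # Capability style: the process is handed a directory capability at launch and
    # can only reach files relative to it. Every path-taking API becomes a method
    # on (or a parameter of) some capability the process actually holds.
    def load_config_capability(app_dir_cap):
        with app_dir_cap.open("config.ini") as f:
            return f.read()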
It also requires rewriting all your apps.
It also might require hardware support to not be significantly slower.
"Just sandbox each app" has much fewer barriers to entry, so people have been doing that instead.
And systems like Android have been working with discrete permissions / capabilities, because they were able to start from scratch in a lot of ways, and didn't need to be compatible with 50 years of applications.
A default-accept philosophy makes it easy for millions of holes to open up at first, and you end up spending the entire IT budget locking down things you don't need while missing the holes you can't see that actually need closing.
Default-deny is a one-time IT expenditure. Then you start poking holes to let things through, and if a hole turns out to be dirty, you can plainly see it and plug it.
All of that applies equally to CPU designers.
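The same default-deny idea as a trivially small sketch (the rule set here is made up): the allowlist is the whole security configuration, so every hole that was ever poked is visible in one place.

    # Default-deny: anything not explicitly allowed is refused.
    ALLOWED = {
        ("web-frontend", "api-server", 443),
        ("api-server", "database", 5432),
    }

    def permit(src: str, dst: str, port: int) -> bool:
        # Auditing means reading this one set, not hunting across the whole
        # system for holes nobody remembers opening.
        return (src, dst, port) in ALLOWED

    assert permit("api-server", "database", 5432)
    assert not permit("web-frontend", "database", 5432)  # never granted, so denied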
Or the Linux model of prefixing every command with a "sudo".
It doesn't work.
Back in the 70s and 80s, computers didn't contain valuable information worth caring about, and there was no Internet to transmit such information over. So adding serious security mechanisms to operating systems made little sense. Those are the years in which today's mainstream operating systems were first developed - Unix, DOS, Windows. Since then, many architectural decisions in these operating systems have never been revisited, in order to avoid breaking backward compatibility. Even where breaking it would achieve better security, no one is ready to make that sacrifice.
There are operating-system projects focused on security that aren't just Unix-like systems or Windows clones. But they can't displace the existing operating systems because of network effects (it's impractical to use a system nobody else uses).