A privacy VPN you can verify
104 points by MagicalTux on 8/15/2025, 10:34:17 PM | 94 comments | vp.net
You also have to trust that SGX isn't compromised.
But even without that, you can log what goes into SGX and what comes out of SGX. That seems pretty important, given that the packets flowing in and out need to be internet-routable and necessarily have IP headers. Their ISP could log the traffic, even if they don't.
> Packet Buffering and Timing Protection: A 10ms flush interval batches packets together for temporal obfuscation
That's something, I guess. I don't think 10ms worth of timing obfuscation gets you very much though.
> This temporal obfuscation prevents timing correlation attacks
This is a false statement. It makes correlation harder but correlation is a statistical process. The correlation is still there.
(latter quotes are from their github readme https://github.com/vpdotnet/vpnetd-sgx )
All that said, it is better to use SGX than to not use SGX, and it is better to use timing obfuscation than to not. Just don't let the marketing hype get ahead of the security properties!
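To make the "correlation is statistical" point concrete, here's a toy simulation (all rates, periods, and patterns invented for illustration): a bursty candidate flow is mixed with background traffic, every output packet is delayed to the next 10ms flush tick, and plain bin-count correlation still tells you whether the candidate flow is in the mix.

```python
import math
import random

random.seed(7)
FLUSH = 0.010          # the 10 ms flush interval from the README
DURATION = 60.0

def bursty_flow(rate, period, duration=DURATION):
    """On/off flow: sends at `rate` pkt/s during the first half of each period."""
    t, times = 0.0, []
    while t < duration:
        if (t % period) < period / 2:
            times.append(t)
        t += random.expovariate(rate)
    return times

def poisson_flow(rate, duration=DURATION):
    """Memoryless cover traffic at a constant average rate."""
    t, times = 0.0, []
    while t < duration:
        t += random.expovariate(rate)
        times.append(t)
    return times

def bin_counts(times, width=0.1, duration=DURATION):
    """Packets per 100 ms bin -- far coarser than the 10 ms flush."""
    counts = [0] * int(duration / width)
    for t in times:
        i = int(t / width)
        if i < len(counts):
            counts[i] += 1
    return counts

def pearson(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

target = bursty_flow(500, period=2.0)   # the flow the adversary is tracing
decoy = bursty_flow(500, period=0.9)    # a flow that is NOT on this server
cover = poisson_flow(200)               # other users' mixed traffic

# What the server emits: target + cover, each packet held until its flush tick.
output = [FLUSH * math.ceil(t / FLUSH) for t in target + cover]

out_bins = bin_counts(output)
r_target = pearson(bin_counts(target), out_bins)
r_decoy = pearson(bin_counts(decoy), out_bins)
# r_target stays near 1 despite the batching; r_decoy hovers near 0.
```

Real attacks use fancier statistics than Pearson on fixed bins, but the gist holds: a 10ms batch only blurs timing below the 10ms scale, while a flow's on/off signature lives at much larger scales.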
Yeah, I took one look at that and laughed. The CEO of Mt. Gox teaming up with the guy who sold his last VPN to an Israeli spyware company sounds like the start of a joke.
Also, the README is full of AI slop buzzwords, which isn’t confidence-inspiring.
The post says build the repo and get the fingerprint, which is fine. Then it says compare it to the fingerprint that vp.net reports.
My question is: how do I verify that the server is reporting the fingerprint of the actual running code, and not just returning the (publicly available) fingerprint that results from building the code in the first place?
In order for a malicious instance to use the same public key as an attested one, they’d have to share the private key (for decryption to work). If you can verify that the SGX code never leaks the private key that was generated inside the enclave, then you can be reasonably sure that the private key can’t be shared to other servers or WG instances.
Since this was answered already, I'll just say that I think the bigger problem is that we can't know if the machine that replied with the fingerprint from this code is even related to the one currently serving your requests.
Personally I like the direction Mullvad went instead. I get that it means we really can't verify Mullvad's claims, but even in the event they're lying, at least we got some cool Coreboot ports out of it.
If you're really paranoid, neither this service nor Mullvad offers that much assurance. I like the idea of verifiability, but I believe the type of people who want it are looking to satisfy deeper paranoia than can be answered with just trusting Intel... Still, more VPN options that try to take privacy claims seriously is nothing to complain about.
A lot of people have been attempting to attack SGX, and while there have been some successful attacks, these have been addressed by Intel and resolved. Intel will not attest any insecure configuration, and neither will other TEE vendors (AMD SEV, ARM TrustZone, etc).
That's already a lot of trust. Intel has much to lose and would have no problem covering up SGX bugs for a government, or certifying government malware.
And Intel has had a LOT of successful attacks against its CPUs, and is known to prefer speed over security.
This allows generating a self-signed TLS certificate that includes the attestation (under OID 1.3.6.1.4.1.311.105.1). A connecting client then verifies the TLS certificate not via the standard chain of trust, but by reading the attestation, verifying that the attestation itself is valid (properly signed, matching measured values, etc), and verifying that the containing TLS certificate is indeed signed with the attested key.
Intel includes a number of details inside the attestation, the most important being Intel's own signature of the attestation and the chain of trust up to their CA.
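As a toy model of those client-side checks (all names invented; a real SGX quote is signed with ECDSA and chained to Intel's CA, not HMAC'd): the client accepts the connection only if the attestation is validly signed, the measured hash matches the expected build, and the attested key is the one in the TLS certificate.

```python
import hashlib
import hmac
import json

# Toy stand-ins -- NOT real SGX structures or keys.
INTEL_KEY = b"intel-root-key"  # models Intel's signing authority
TRUSTED_MRENCLAVE = hashlib.sha256(b"vpnetd-sgx build").hexdigest()

def make_attestation(mrenclave, enclave_pubkey):
    """What the 'CPU' emits: measured hash + attested key, signed by 'Intel'."""
    body = json.dumps({"mrenclave": mrenclave, "pubkey": enclave_pubkey})
    sig = hmac.new(INTEL_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify(attestation, tls_pubkey):
    """Client side: check signature, expected build, and TLS key binding."""
    body = attestation["body"]
    good_sig = hmac.compare_digest(
        attestation["sig"],
        hmac.new(INTEL_KEY, body.encode(), hashlib.sha256).hexdigest())
    fields = json.loads(body)
    return (good_sig
            and fields["mrenclave"] == TRUSTED_MRENCLAVE  # expected build
            and fields["pubkey"] == tls_pubkey)           # cert key is attested

att = make_attestation(TRUSTED_MRENCLAVE, "enclave-pubkey")
assert verify(att, "enclave-pubkey")       # matching TLS key: accepted
assert not verify(att, "attacker-pubkey")  # different TLS key: rejected
```

The last check is the important one: it's what stops a non-enclave server from replaying someone else's valid attestation, since it doesn't hold the attested private key.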
As far as I'm aware, no. Any network protocol can be spoofed, with varying degrees of difficulty.
I would love to be wrong.
The final signature comes in the form of an X.509 certificate signed with ECDSA.
What's more important to me is that SGX still has a lot of security researchers attempting (and currently failing) to break it further.
Again, I would love to know if I'm wrong.
The fact that no publicly disclosed threat actor has been identified says nothing.
Are you suggesting a solution for this situation?
Because of the cryptographic verifications, the communication cannot be spoofed.
The trust chain ends with you trusting Intel to only make CPUs that do what they say they do, so that if the code doesn't say to clone a private key, it won't.
(You also have to trust the owners to not correlate your traffic from outside the enclave, which is the same as every VPN, so this adds nothing)
I wanted to give your product a try, but the gap between the 1-month and 2-year plans is so big that a single month feels like a rip-off, while I’m not ready to commit to 2 years either.
On payments: for a privacy-focused product, Monero isn’t just a luxury, it’s a must (at least for me). A VPN that doesn’t accept Monero forces users into surveillance finance, since processors are legally required to retain card and bank payment records. That means even if the VPN “keeps no logs,” the payment trail still ties your real identity to the service.
On today's internet you just cannot have an exit IP that is not tied to your identity, payment information, or physical location. And don't even mention Tor, please.
There are cryptocurrencies like ZCash, Monero, Zano, Freedom Dollar, etc. that are sufficiently private.
There's also an option to just mail them cash, but some countries may seize all mailed cash if discovered.
What's your issue with tor?
I imagine those websites block IP ranges of popular VPN providers.
Am I right in thinking that hosting my own VPN would resolve this issue?
I pay approximately 50¢/month for such a setup, and you can probably do it for free forever if you decide to be slightly abusive about it. However, be aware that you don’t really gain any real privacy since you’re effectively just changing your IP address; a real VPN provides privacy by mixing your traffic with that of a bunch of other clients.
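For the curious, such a setup can be as small as a single WireGuard config on a cheap VPS; everything below is a placeholder sketch (generate real keys with `wg genkey | tee privatekey | wg pubkey`):

```ini
# /etc/wireguard/wg0.conf on the VPS (placeholder keys and addresses)
[Interface]
Address    = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]                          # your laptop/phone
PublicKey  = <client-public-key>
AllowedIPs = 10.0.0.2/32
```

Bring it up with `wg-quick up wg0` and add a NAT/forwarding rule, and the mirror-image config on the client with the VPS as its peer. But as said above, this only changes your IP; it doesn't mix your traffic with anyone else's.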
Some services will also block cloud ranges to prevent e.g. geoblock evasion, although you’ll see a lot less blocking compared to a real VPN service or Tor.
Honestly, I feel more comfortable using Mullvad. This team has some folks with questionable backgrounds, and I wouldn't trust Intel. Also, VPN providers are usually in non-US countries due to things like the Lavabit and Yahoo incidents and the Snowden revelations.
Relying on "trust" in a security/privacy architecture isn't the right way to do things, which is what this solves. It removes the need to trust a person or group of people (as in most VPN companies, which have many employees) and moves the trust into code.
> Also VPN providers are usually in non-us countries due to things like the Lavabit, Yahoo incidents and the Snowden revelations.
The system is designed so that any server-side change will be immediately noticed by clients/users. As a result, these issues are sufficiently mitigated, and people can instead take advantage of strong consumer and personal protection laws in the US.
Better yet, it is harder for the American government to eavesdrop on US soil than it is outside America.
Of course, if a national spying apparatus is after you, regardless of the nation, pretty good chance jurisdiction doesn’t matter.
The GP mentioned Snowden and yet you say this. What material and significant changes have happened since 2013 to make this claim?
Old copy? Might need an update.
The idea itself is sound: if there are no SGX bypasses (hardware keys dumped, enclaves violated, CPU bugs exploited, etc.), and the SGX code is sound (doesn't leak the private keys by writing them to any non-confidential storage, isn't vulnerable to timing-based attacks, etc.), and you get a valid, up-to-date attestation containing the public key that you're encrypting your traffic with plus a hash of a trustworthy version of the SGX code, then you can trust that your traffic is indeed being decrypted inside an SGX enclave which has exclusive access to the private key.
Obviously, that's a lot of conditions. Happily, you can largely verify those conditions given what's provided here; you can check that the attestation points to a CPU and configuration new enough to not have any (known) SGX breaks; you can check that the SGX code is sound and builds to the provided hash (exercise left to the reader); and you can check the attestation itself as it is signed with hardware keys that chain up to an Intel root-of-trust.
However! An SGX enclave cannot interface with the system beyond simple shared memory input/output. In particular, an SGX enclave is not (and cannot be) responsible for socket communication; that must be handled by an OS that lies outside the SGX TCB (Trusted Computing Base). For typical SGX use-cases, this is OK; the data is what is secret, and the socket destinations are not.
For a VPN, this is not true! The OS can happily log anything it wants! There's nothing stopping it from logging all the data going into and out of the SGX enclave and performing traffic correlation. Even with traffic mixing, there's nothing stopping the operators from sticking a single user onto their own, dedicated SGX enclave which is closely monitored; traffic mixing means nothing if it's just a single user's traffic being mixed.
So, while the use of SGX here is a nice nod to privacy, at the end of the day, you still have to decide whether to trust the operators, and you still cannot verify in an end-to-end way whether the service is truly private.
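A sketch of that gap, with all names invented: the untrusted host code that shuttles buffers in and out of the enclave never sees plaintext, yet it can freely record who sent what, when, and how big it was, which is all a correlation attack needs.

```python
import time

def enclave_process(ciphertext):
    """Stand-in for the SGX enclave: the host only hands it opaque bytes."""
    return b"fwd:" + ciphertext  # decryption/re-encryption happens "inside"

log = []  # the untrusted OS's side channel: pure metadata, no plaintext

def host_io(client_id, ciphertext):
    # The OS is the one moving buffers across the enclave boundary, and
    # nothing in SGX stops it from recording metadata on both sides.
    log.append((client_id, "in", len(ciphertext), time.time()))
    out = enclave_process(ciphertext)
    log.append((client_id, "out", len(out), time.time()))
    return out

host_io("user-42", b"\x01" * 1200)
assert [(c, d, n) for c, d, n, _ in log] == [
    ("user-42", "in", 1200), ("user-42", "out", 1204)]
```

The attestation proves what runs inside the box; it says nothing about what the code around the box writes down.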
The US government also likely learned a lesson from early backdoor attempts (RSA, etc): these kinds of things do not stay hidden and do not reflect well.
We've thought about this long and hard and are planning to mitigate it as much as possible. Meanwhile, we still offer something that is a huge step forward compared to what is standard in the industry.
Servers that don't log, and can't: no hard drives, and ports physically glued shut.
https://www.ovpn.com/en/security
Answer: no, you can't; you still have to trust them. At the end of the day, you always have to trust the provider somewhere.
This guarantees that your traffic isn't being linked to you and is mixed up with others' in a way that makes it difficult for someone to attribute it to you, as long as you also protect yourself on the application side (clear cookies, no tracking browser extensions, etc).
What would prevent you (or someone who has gained access to your infrastructure) from routing each connection to a unique instance of the server software and tracking what traffic goes in/out of each instance?
A packet goes in to your server and a packet goes out of your server: the code managing the enclave can just track this (and someone not even on the same server can figure this out almost perfectly just by timing analysis). What are you, thereby, actually mixing up in the middle?
You can add some kind of probably-small (as otherwise TCP will start to collapse) delay, but that doesn't really help as people are sending a lot of packets from their one source to the same destination, so the delay you add is going to be over some distribution that I can statistics out.
You can add a ton of cover traffic to the server, but each interesting output packet is still going to be able to be correlated with one input packet, and the extra input packets aren't really going to change that. I'd want to see lots of statistics showing you actually obfuscated something real.
The only thing you can trivially do is prove that you don't know which valid paying user is sending you the packets (which is also something that one could think might be of value even if you did have a separate copy of the server running for every user that connected, as it hides something from you)...
...but, SGX is, frankly, a dumb way to do that, as we have ways to do that that are actually cryptographically secure -- aka, blinded tokens (the mechanism used in Privacy Pass for IP reputation and Brave for its ad rewards) -- instead of relying on SGX (which not only is, at best, something we have to trust Intel on, but something which is routinely broken).
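For comparison, here's textbook RSA blinding with toy-sized primes (illustration only, not production crypto; deployed systems like Privacy Pass use different constructions): the operator signs a connection token without ever seeing it, so redeeming the token later cannot be linked back to the account that paid for it.

```python
import hashlib
import secrets
from math import gcd

# Toy RSA keypair for the signer (the VPN operator). Tiny primes for clarity.
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def H(msg):
    """Hash-to-integer mod n (toy; real schemes use proper padding)."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# Client: pick a token and blind it with a random factor r.
token = b"one-connection-credential"
m = H(token)
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n        # m * r^e mod n

# Signer: signs the blinded value; learns nothing about `token`.
blind_sig = pow(blinded, d, n)          # (m * r^e)^d = m^d * r mod n

# Client: unblind to get a valid RSA signature on the original token.
sig = (blind_sig * pow(r, -1, n)) % n   # m^d mod n
assert pow(sig, e, n) == m              # anyone can verify; nobody can link
```

The verification at the end needs no trusted hardware at all: unlinkability comes from the math (the signer's view of `blinded` is statistically independent of `token`), not from trusting Intel.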
This whole process is already widely used, and well documented, in the financial and automotive sectors to ensure servers are indeed running what they claim to be running.
In all seriousness, I don’t even trust intel to start with.
They could run one secure enclave running the legit version of the code, and one insecure machine running insecure software.
Then they put a load balancer in front of both.
When people ask for the attestation, the LB sends traffic to the secure enclave, so you get the attestation back and all seems good.
When people send VPN traffic, the load balancer sends them to the insecure hardware with the insecure software.
So SGX proves nothing.
They are proving that they are the ones hosting the VPN server (not some server that stole their software and is running a honeypot), and that the hosting company has not tampered with it.
So in the end you still have to trust the company that they are not sharing the certificates with 3rd parties.
lol
SGX has been broken time and again
SGX has 0-day exploits live in the wild as we speak
so... valiant attempt in terms of your product... but utterly unsuitable foundation
What this tells me, however, is that there are still a lot of people trying to attack SGX today, and that Intel has improved its response a lot.
The main issue with SGX was that its originally intended use, client-side DRM, was flawed: you can't expect normal people to update their BIOS (downloading the update, putting it on a storage device, rebooting, entering the BIOS, updating, etc) each time an update is pushed, and adoption wasn't good enough for that. It is, however, seeing a lot of server-side use in finance, the auto industry, and elsewhere.
We are also planning to support other TEEs in the future. SGX is the most well-known and battle-tested today, with a lot of support from software like openenclave, making it a good initial target.
If you do know of any 0-day exploit currently live on SGX, please give me more details, and if it's something not yet published please contact us directly at security@vp.net
Once they are leaked, there is no going back for that secret seed - i.e. that physical CPU. And this attack is entirely offline, so Intel doesn't know which CPUs have had their seeds leaked.
In other words, every time there is a vulnerability like this, no CPU affected can ever be trusted again for attestation purposes. That is rather impractical - so I'd consider even if you trust Intel (unlikely if you consider a government that can coerce Intel to be part of your threat model), SGX provides rather a weak guarantee against well-resourced adversaries (such as the US government).
https://en.wikipedia.org/wiki/Software_Guard_Extensions
The way this works is that the enclave on launch generates an ECDSA key (which only exists inside the enclave and is never stored or transmitted outside). It then passes it to SGX for attestation. SGX generates a payload (the attestation) which contains the enclave's measured hash (MRENCLAVE) and other details about the CPU (microcode, BIOS, etc). The whole thing carries a signature and a stamped certificate issued by Intel to the CPU (the CPU and Intel have an exchange at system startup and from time to time, where Intel verifies the CPU's security, ensures everything is up to date, and gives the CPU a certificate).
Upon connection we extract the attestation from the TLS certificate and verify it (does MRENCLAVE match, is it signed by Intel, is the certificate expired, etc) and also, of course, verify that the TLS certificate itself matches the attested public key.
Unless TLS itself is broken or someone manages to exfiltrate the private key from the enclave (which should be impossible unless SGX is broken, but then Intel is supposed to not certify the CPU anymore) the connection is guaranteed to be with a host running the software in question.
a host... not necessarily the one actually serving your request at the moment, and doesn't prove that it's the only machine touching that data. And afaik this only proves the data in the enclave matches a key, and has nothing to do with "connections".
Served by an enclave, but there's no guarantee it's the one actually handling your VPN requests at that moment, right?
And even if it was, my understanding is this still wouldn't prevent other network-level devices from monitoring/logging traffic before/after it hits the VPN server.
Saying "we don't log" doesn't mean someone else isn't logging at the network level.
I think SGX also wouldn't protect against kernel-level request logging such as via eBPF.
Okay, maybe I'm being thick, but... when I get a response from your server, how do I know it's actually running inside the enclave, and not an ordinary process sending a hardcoded expected fingerprint?
When the connection is established we verify the whole certificate chain up to Intel, and we verify the TLS connection itself is part of the attestation (public key is attested).
It's signed by Intel and thus, guaranteed to come from the enclave!
I always found confidential compute to be good only for isolating the hosting company from risk - not the customer!