>Verifiable Zero Access: Start! – We’re building an internal roadmap to create a transparent and verifiable infrastructure, in which no one, including ourselves, is permitted access to the servers through which VPN traffic flows. We will keep you abreast of all progress, and moreover, this will be a community-led effort. Verifiable Zero Access proves that we cannot log or monitor your traffic.
Is this going to be "nobody can access it because we locked ourselves out (trust us)", or some sort of trusted computing solution that's cryptographically verifiable?
I've built a system like this recently for a payments platform. Access _is_ possible but requires rebuilding the environment (and thus blowing everything away) as well as admin access.
Is it possible to verify that you cannot access said system, though? How would that even be done? In most scenarios I can imagine, you're still relying on the server telling you something about itself... which it can lie about.
I set this up to force myself to stop SSHing to boxes all the time and trust in my own automation. It took some effort and setup is frustrating but it ended up being a net positive.
I baked everything with Ansible and did last touch setup with user-data, and deployed it all with Terraform.
That depends upon the key management. Even with the default encryption and key-management facilities available for EBS, S3, and RDS, Amazon can be locked out; the key resides with the owner.
EC2 sends the DEK (data encryption key) from the volume metadata to KMS (the Key Management Service), KMS decrypts the DEK with the CMK (customer master key), and EC2 stores the decrypted DEK in hypervisor memory to decrypt the volume.
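That flow is standard envelope encryption. Here's a toy Python sketch of the structure; the `MockKMS` class and repeating-key XOR are illustrative stand-ins (AWS really uses AES-256 inside HSMs), not the AWS API:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher; this illustrates the flow, not the crypto.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class MockKMS:
    """Illustrative stand-in for KMS: the CMK never leaves this object."""
    def __init__(self):
        self._cmk = secrets.token_bytes(32)  # plaintext CMK stays inside the "HSM"

    def generate_data_key(self):
        dek = secrets.token_bytes(32)
        return dek, xor_cipher(dek, self._cmk)   # (plaintext DEK, wrapped DEK)

    def decrypt_data_key(self, wrapped_dek: bytes) -> bytes:
        return xor_cipher(wrapped_dek, self._cmk)

kms = MockKMS()
dek, wrapped = kms.generate_data_key()

# Only the wrapped DEK is stored in the volume metadata...
volume_metadata = {"wrapped_dek": wrapped}
ciphertext = xor_cipher(b"block device contents", dek)

# ...and EC2 asks KMS to unwrap it, holding the plaintext DEK only in
# hypervisor memory for as long as the volume is attached.
dek_in_memory = kms.decrypt_data_key(volume_metadata["wrapped_dek"])
assert xor_cipher(ciphertext, dek_in_memory) == b"block device contents"
```

The point of the structure: whoever holds the data (the wrapped DEK and ciphertext) still can't read it without asking the key owner's KMS to unwrap.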
"AWS KMS is designed so that no one, including AWS employees, can retrieve your plaintext CMKs from the service. The service uses hardware security modules (HSMs) that have been validated under FIPS 140-2, or are in the process of being validated, to protect the confidentiality and integrity of your keys regardless of whether you use AWS KMS or AWS CloudHSM to create your keys or you import them into the service yourself. Your plaintext CMKs never leave the HSMs, are never written to disk and are only ever used in the volatile memory of the HSMs for the time needed to perform your requested cryptographic operation. AWS KMS keys are never transmitted outside of the AWS regions in which they were created. Updates to software on the service hosts and to the AWS KMS HSM firmware is controlled by multi-party access control that is audited and reviewed by an independent group within Amazon as well as a NIST-certified lab in compliance with FIPS 140-2."[1]
> It’s almost certainly “trust us.” The only way to access the internet without relying on trust is through TOR.
At the end of the day, you really don't know who is monitoring Tor exit nodes or what is running on them, or whether you're routing through a series of nodes controlled by the same anonymous operator.
Only if you have proof that a server is not being tampered with, proof of what is running on it, and proof that what is running verifiably locks our own access out of the system and does not log, will you have proof that you are truly private.
It requires all of the above, and it's a hard problem to solve, but we're committed to solving it at Private Internet Access, and that's where we are headed.
Only when this is deployed will people have continuous and verifiable privacy, for the first time since the birth of the internet.
We were called the 'verified' no-log VPN provider because we were the only legally proven no-log VPN, but we're going to take it a step further and become verifiable, so that you can verify at any time.
Okay, so how are you irrefutably proving a server is clean without physical monitoring?
Numerous proofs of concept have shown that general physical proximity, not even direct access to the machine, can be enough for a fruitful attack. Likewise, is every package your server is running audited and signed? I hope your updates are manually certified. I hope your platform is trusted too, and that you're auditing/approving every bit sent out by the server and sanitizing anything sent to it.
> Okay, so how are you irrefutably proving a server is clean without physical monitoring?
I imagine the idea is to have something like an Intel SGX enclave attest a hash of the filesystem image that was booted, and then publish that filesystem. The filesystem should not allow any kind of modification or login. If the machine is hosted somewhere like AWS, that quickly gets you towards the point where you could believe it's not plausible for PIA to alter the machine once it's booted, and you can see for yourself that it does not store or transmit logs.
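The verification step that scheme implies can be sketched in a few lines. This is a bare-hash illustration only; real SGX remote attestation involves a quote signed by the enclave's key, and all the names here are hypothetical:

```python
import hashlib

def measure(image: bytes) -> str:
    # The "measurement": a hash of the filesystem image that was booted.
    return hashlib.sha256(image).hexdigest()

# The provider publishes the image; the enclave attests the measurement of
# what actually booted (in reality, as a signed quote, not a bare hash).
published_image = b"read-only rootfs: no login shell, no log daemon"
attested_measurement = measure(published_image)

def verify(image: bytes, attestation: str) -> bool:
    # Anyone can rebuild or download the published image and check it matches.
    return measure(image) == attestation

assert verify(published_image, attested_measurement)
# Any modification, such as patching in a logger, changes the measurement:
assert not verify(published_image + b" + logging patch", attested_measurement)
```

The hard part, of course, is trusting that the attested measurement really came from the hardware and covers everything that can touch traffic.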
If it's not hosted on an independent cloud provider like AWS, I don't think it's possible. A belief that the physical hoster isn't going to collude with the group deploying the machine to take advantage of physical access seems like a requirement. I might not trust Amazon, and I might not trust PIA, but I can probably trust that Amazon isn't willing to throw away its reputation by backdooring its security offerings in collusion with PIA.
(Although note that SGX claims to be resistant even against physical access -- the private key never leaves the enclave and will only sign statements in a tamperproof way.)
FWIW, I don't think this particular use case is well suited to a Secure Boot scheme, although I admire the goal. The logging could simply be happening on a machine that your packets reach before the provably clean machine, unless the very first PIA-owned machine you hit is one of these transparent end nodes, I guess?
I am curious what your verification solution is. But please don't reinforce misconceptions about Tor to promote your product.
> To really de-anonymize someone this way, you need to at least have the entry node and exit node of a Tor user... entry nodes are chosen once and are kept for 2-3 months... if the government wants to become your entry node, it has an N% chance of being picked by you out of 6000+ nodes. If I am lucky and pick a non-government node, the government will have to keep all their nodes running (costing lots of money) for at least two months before they get another chance of becoming my entry. Also, becoming a guard node takes time: at least 8 days, and up to 68 days before it gets to full speed. As you can see, this is a slow, expensive, and generally very unattractive way of finding a Tor user. While yes, they COULD do it, it wouldn't make sense for them to, as there are a lot of attacks out there that are much cheaper to execute. In the 'Tor Stinks' slides leaked in the Snowden documents, it was stated that they could de-anonymize a very small fraction of people, but that this could not be used to target specific people on demand, which makes this expensive attack not worth it in a real-life scenario.
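The arithmetic in that quote is easy to make concrete. A rough model with illustrative numbers (uniform selection; real Tor guard selection is bandwidth-weighted, so this understates a well-funded adversary running fast relays):

```python
def compromise_probability(m_adversary: int, n_guards: int, rotations: int) -> float:
    """P(at least one chosen guard is adversarial), uniform pick per rotation."""
    p_single = m_adversary / n_guards
    return 1 - (1 - p_single) ** rotations

# Say 100 adversary relays among ~6000 guard-eligible relays, with one guard
# kept per 2-3 month rotation: the per-pick odds are under 2%, and the
# cumulative odds grow only slowly across rotations.
for rotations in (1, 4, 24):
    p = compromise_probability(100, 6000, rotations)
    print(f"{rotations:>2} rotations: {p:.3f}")
```

Which is the quoted argument in a formula: each rotation is a fresh, individually unlikely draw, so the adversary has to keep paying for relays for a long time before the odds of ever becoming your guard get interesting.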
VPNs are good for hiding your traffic from your ISP, but it's trivial for the government to issue a warrant and gag order on your VPN traffic. So I'm curious what your solution is.
As much as I love Tor -- or at least, find it useful -- that's a horrible misconception.
You're trusting a bunch of stuff, in using Tor. There's no way to know what share of relays are malicious. Or how many undisclosed vulnerabilities are in active use. Or whether at least some Tor Project staff are failing to disclose malicious relays and vulnerabilities.
You just don't know.
That doesn't mean you avoid using Tor. Because, in theory, there are no better options. But it does mean that you use it carefully.
For example, always hit an entry guard through at least a VPN. Better, through nested VPN chains. And use firewall rules to prevent leaks. Not just Tor browser in Windows.
I suppose you are correct: Tor is not 100% trust-free. But with a VPN, all trust is placed in a single party. With Tor, trust is divided between nodes, and connecting to a single malicious node won't hurt you. You don't have to trust the software either, since you can read the source code to ensure trust is divided properly. But even with an open-source VPN client, you have to trust the server.
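The "trust is divided" point can be put in numbers. A deliberately simplified model (independent uniform relay choice, ignoring bandwidth weighting and guard persistence; the figures are illustrative, not measurements):

```python
def vpn_risk(p_provider: float) -> float:
    # With a VPN, one malicious party suffices to see who you are and where you go.
    return p_provider

def tor_risk(p_malicious_relay: float) -> float:
    # Simplified: deanonymization needs a malicious entry AND a malicious exit,
    # chosen independently. A single bad relay in the middle learns little.
    return p_malicious_relay ** 2

# Even if 10% of relays were malicious, a given circuit would only have about a
# 1% chance of a compromised entry+exit pair, versus betting it all on one party.
print(vpn_risk(0.10), tor_risk(0.10))
```

The model is generous to Tor (real adversaries concentrate on high-bandwidth relays), but it captures why dividing trust changes the failure mode from "one party defects" to "two specific positions collude".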
> Always hit an entry guard through at least a VPN.
That's a horrible misconception. There is no added benefit to connecting to Tor through a VPN. It only worsens the risk: you're essentially creating a permanent entry node with a money trail.
Well, there are normally seven hops for that, not just three. So malicious relays are less problematic.
But then there's the risk from undisclosed vulnerabilities. In early 2014, CMU researchers deanonymized an unknown number of Tor users and onion sites using the "relay early" bug. The bug allowed relays to communicate covertly in the process of circuit establishment. So malicious relays run by CMU could identify each other, and cooperate to deanonymize circuits.
And then the FBI subpoenaed all their data. It took over at least one onion site (Playpen) and then pushed its NIT malware to perhaps hundreds of users, who were then arrested and prosecuted.
So when did the Tor Project learn about the "relay early" bug? They claim that they didn't know, and didn't notice the suspicious relay activity, until after the CMU people went public. But how do we know? Indeed, from what I've seen of the FOIA production about the Tor Project, I'm not so confident that they don't cooperate with the FBI etc.
As far as I know, even when you use hidden services, it is enough for the first and third relays to be malicious in order to de-anonymise a user. Tor's security is barely enough (if it is enough at all); there is no reason to believe that the FBI/FSB/etc. don't have enough relays up to de-anonymise most users. I2P is much better in that regard.
Heck, until recently the Tor team used 80-bit truncated SHA-1, 1024-bit RSA, and 128-bit AES for their traffic, not to mention that the Tor Browser ships with JavaScript enabled by default.
In theory, I don't see how entry and exit are enough to deanonymize. They don't even know that they're in the same circuit unless they have a covert channel (like relay early) or manage traffic correlation during the ~10 minute circuit lifetime.
And then, using onion services, there are two three-relay circuits that meet at a rendezvous point. One picked by the onion, and the other by the user. So even deanonymizing one of those circuits would be insufficient.
But that's all theoretical. In practice, there are likely undisclosed vulnerabilities. Perhaps lots of them.
I do agree that the Tor browser standalone is rather a joke. Especially if it's in Windows. You at least want to be using Whonix. And if you really care, Whonix in Qubes.
> They don't even know that they're in the same circuit unless they have a covert channel (like relay early) or manage traffic correlation during the ~10 minute circuit lifetime.
No reason to think that they would not do that.
> And then, using onion services, there are two three-relay circuits that meet at a rendezvous point. One picked by the onion, and the other by the user. So even deanonymizing one of those circuits would be insufficient.
It would be sufficient to de-anonymise one of the parties.
> In practice, there are likely undisclosed vulnerabilities. Perhaps lots of them.
My point is that you do not need an undisclosed vulnerability to break Tor if you have enough resources.
One major threat that Tor doesn't have a bulletproof solution to, and likely never will, is correlation attacks. It's been shown to be plausible that observing the timing and size of packets, even without knowing their contents, can be enough to determine that two relays are part of the same circuit.
Well, any system that relies on tunneling is vulnerable to correlation attacks, and to drilling down by looking at traffic between autonomous systems, unless it uses chaff to maintain constant throughput.
But will doing that be worth it to find someone like me? I doubt it. I'm just a hobbyist and writer.