Hacker Times
PIA open-sources and announces audit plans (privateinternetaccess.com)
281 points by rasengan on Dec 17, 2019 | hide | past | favorite | 157 comments


> We encourage everyone NOT to trust, but instead, to verify.

I love this. I also don't know how you could possibly make it work.

> We’re building an internal roadmap...

Keyword: Roadmap

Translation: We haven't started on this, but it's something we think we want to do so we're going to start talking about it.

> ...to create a transparent and verifiable infrastructure...

How is a user supposed to verify the infrastructure? Do you mean you're going to hire a third-party auditor and ask users to trust their verification?

I mean, that's better than nothing but it doesn't exactly enable everyone "not to trust, but instead, verify." In this scenario, the auditor is the only one doing the verification.

> ...in which no one, including ourselves, is permitted access to the servers through which VPN traffic flows.

Then how do you deploy the servers in the first place? How do you identify and handle hardware and software failures? How are you defining "access" such that this idea is possible?


Regarding verifying that the application running server-side matches the expected OSS version: Signal has an innovative approach as part of their zero-knowledge contact discovery system.

> Run a contact discovery service in a secure SGX enclave.

> Clients that wish to perform contact discovery negotiate a secure connection over the network all the way through the remote OS to the enclave.

> Clients perform remote attestation to ensure that the code which is running in the enclave is the same as the expected published open source code.

> Clients transmit the encrypted identifiers from their address book to the enclave.

> The enclave looks up a client’s contacts in the set of all registered users and encrypts the results back to the client.

> Since the enclave attests to the software that’s running remotely, and since the remote server and OS have no visibility into the enclave, the service learns nothing about the contents of the client request. It’s almost as if the client is executing the query locally on the client device.

It's not perfect, but it is a huge improvement over blindly trusting the software running on a third-party server. There's a lot more detail, including pitfalls, workarounds, limitations, and of course source code, in their blog post on the subject:

https://signal.org/blog/private-contact-discovery/
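Very roughly, the attestation check reduces to comparing a measurement of the running code against one you compute yourself from the published source. A toy sketch (the real flow involves an SGX quote signed by a hardware-fused key and checked against Intel's attestation service; the plain hash here is just a stand-in for MRENCLAVE):

```python
import hashlib

def measure(enclave_code: bytes) -> str:
    """Hash of the code loaded into the enclave (stand-in for MRENCLAVE)."""
    return hashlib.sha256(enclave_code).hexdigest()

def client_attests(reported_measurement: str, published_source: bytes) -> bool:
    """Client rebuilds the expected measurement from the published open
    source code and compares it to what the enclave reports."""
    return reported_measurement == measure(published_source)

published = b"open-source contact discovery service v1.0"
assert client_attests(measure(published), published)             # genuine enclave
assert not client_attests(measure(b"logging build"), published)  # tampered build
```

The whole trick is that the reported measurement is produced by the hardware, not by the (untrusted) host software, so the server can't simply claim the known-good value.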


The same SGX that was on the front page a couple of days ago? https://qht.co/item?id=21759683


I don't suppose there's any more information on how remote attestation works? (Ideally something an idiot like me could comprehend).

I can't understand how, if the company has control of the code running there, they can't just modify it to report as the known good code. It seems slightly different from the DRM example, where the end user can't access the code running in the enclave in the first place and doesn't know what it would be reporting back.


I can't edit my comment, but after reading up on Intel's secure enclave last night, I still don't get how this would work in practice. As the end user (me) needs to know what the server is going to return in order to verify it, I don't understand what's stopping the server from returning that anyway. Even if it's using public-key crypto to sign a challenge I send it, I still don't understand how I can have any assurance that this key only exists inside the enclave and isn't just held by software on the server.


> How is a user supposed to verify the infrastructure?

This is a bit outside of my league and thus might be completely irrelevant, but Signal does something interesting in this area: https://signal.org/blog/private-contact-discovery/

> Modern Intel chips support a feature called Software Guard Extensions (SGX). SGX allows applications to provision a “secure enclave” that is isolated from the host operating system and kernel, similar to technologies like ARM’s TrustZone. SGX enclaves also support a feature called remote attestation. Remote attestation provides a cryptographic guarantee of the code that is running in a remote enclave over a network.

> Originally designed for DRM applications, most SGX examples imagine an SGX enclave running on a client. This would allow a server to stream media content to a client enclave with the assurance that the client software requesting the media is the “authentic” software that will play the media only once, instead of custom software that reverse engineered the network API call and will publish the media as a torrent instead.

> However, we can invert the traditional SGX relationship to run a secure enclave on the server. An SGX enclave on the server-side would enable a service to perform computations on encrypted client data without learning the content of the data or the result of the computation.


AWS announced something similar this past week for EC2: https://qht.co/item?id=21717114


Regarding deploying servers that you don’t have access to: you could deploy them with a boot loader that netboots signed images, and require a certain number of developers to sign each image. I’ve considered something similar for deploying k8s workers - just a ramdisk that encrypts local storage with a random key, then requests admission to the cluster.

Troubleshooting is harder, but I can imagine a fault-reporting component that anonymises error reports and uploads them to a publicly visible tracker. If something looks like a hardware failure, take it out of the infra and boot a traditional OS to do some diagnostics.
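The m-of-n signing requirement is simple to sketch. Here HMAC stands in for real detached signatures (in practice you'd use ed25519 or similar), and the developer names and 2-of-3 threshold are made up:

```python
import hashlib
import hmac

DEV_KEYS = {"alice": b"k1", "bob": b"k2", "carol": b"k3"}  # hypothetical signers
THRESHOLD = 2  # require 2-of-3 developers to sign each image

def sign(image: bytes, key: bytes) -> str:
    """Stand-in for a real detached signature over the boot image."""
    return hmac.new(key, image, hashlib.sha256).hexdigest()

def boot_allowed(image: bytes, signatures: dict) -> bool:
    """Bootloader check: count valid developer signatures and refuse to
    boot unless at least THRESHOLD distinct developers signed."""
    valid = sum(
        1 for dev, sig in signatures.items()
        if dev in DEV_KEYS and hmac.compare_digest(sig, sign(image, DEV_KEYS[dev]))
    )
    return valid >= THRESHOLD

image = b"vpn-gateway-image-v42"
sigs = {"alice": sign(image, b"k1"), "bob": sign(image, b"k2")}
assert boot_allowed(image, sigs)
assert not boot_allowed(b"tampered image", sigs)
```

This only moves the trust around, of course: you still have to trust whoever burned the bootloader and its trusted keys into the machine.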


> netboots signed images

There is no safe way to do this. Netboot picks up whichever dhcp/tftp server replies fastest, which means there is nothing to stop someone booting a hypervisor first and then booting your image inside it.


I think you missed a few words there: "with a boot loader that netboots signed images".

They won't be relying on the BIOS netboot that picks up the first DHCP server, but rather a custom bootloader that will check the signature on the image before running it.


Which you can't really trust for the same reason.


Why not? If you're writing the bootloader (or using one that's trustworthy and open source) and deploying it by yourselves then it should be relatively safe. An attacker would have to have physical access to the hardware, or some method of modifying the bootloader on disk.


Unfortunately, for this use case the attackers that users are worried about are also the ones installing the bootloader on the server in the first place. I don't see, from a computer science point of view, a verification process that can't be emulated by the system that runs it.


If they hypothetically solved this problem, wouldn’t that be amazing for the security and privacy world, to have a model they could copy?


>How is a user supposed to verify the infrastructure?

Maybe opening up the logs from the intrusion detection systems on their servers could help users be aware of external attacks. And maybe internal access logs too?


Trusting trust. What if the logs lie about what they're logging?


Well, one needs to make the leap of faith somewhere.

Though if someone was faking their logs, I'm pretty sure it's possible to tell that there's something fishy going on.

At the end of the day, only you can make the judgement of what constitutes secure enough relating to your own infosec.


Agree completely. I'm choosing to not make my leap of faith trusting a scammy VPN provider after they publish plain text files that claim they're really good people at heart.

Maybe I'll make my leap of faith trusting someone with a little more at stake, and a reputation for being trustworthy.


"Maybe I'll make my leap of faith trusting someone with a little more at stake, and a reputation for being trustworthy."

Which VPN provider, in your opinion, would you consider "trustworthy"?

My understanding is that PIA was a great option until the most recent buyout... now I don't know who to throw my $$$ at... just asking.


Right now I use Mullvad, because they give off trustworthy vibes and let me use the default wireguard apps instead of custom apps that could give them more access to my computer. They generally appear the least sketchy, but I don't have the knowledge to recommend them per se.


It's not at all a new idea.

Cryptohippie has been claiming something like this for over a decade.[0]

> Cryptohippie USA, Inc. provides access to the Cryptohippie network, which is anonymous, thoroughly protected and globally distributed. This network creates anonymity and gives you very private access to standard Internet destinations.

> Cryptohippie is unique: we don't require you to trust a single entity for your privacy. Our sales and network are run by separate companies in different jurisdictions. The sales company never sees network traffic and the network company never sees sales data.

The network provider being Cryptohippie Inc., Panama.[1] And one of their services is "Anonymous Admin".[2]

> System administrators are prime targets for blackmail, corruption and outside force, and are in a central position to undermine your communication and data security.

> Cryptohippie Inc. operates an Anonymous Administrator pool that can be utilized to oversee critical enterprise infrastructure or administer highly sensitive information resources.

> Anonymous Administrators can not be targeted by third parties. Not even Cryptohippie Inc. can give away their personal information or identity which could put them at risk of under force.

> Using auditing and concurrent peer review processes, Anonymous Administrators are under your control and the risk of them becoming insider attackers is greatly reduced.

> Only carefully tested, hand-picked specialists with a long standing relationship with Cryptohippie Inc. are available for client work.

> Please be advised that Cryptohippie Inc. only facilitates the contact to the pool of administrators and helps with setting up processes for communication, auditing and payment escrow. We will not know what specific administrator works for your organization or what tasks you delegate to them. However, in case of conflict, we will mediate/arbitrate upon request.

The argument, I gather, is that substantive acts require consensus of multiple anonymous admins. So rogue admins are not so much an issue.

But still, as you say, how can users trust any of that?

So yes, there is no "verify".

That's why I recommend nested VPN chains. You just need to trust that most of the VPN services in your chain are not working together for an adversary.

0) https://secure.cryptohippie.com/products.php

1) http://www.cryptohippie.net/

2) http://www.cryptohippie.net/AnonAdmin.html

Edit: About that "over a decade" comment, here's a snapshot from 2008: https://web.archive.org/web/20081006023454/http://cryptohipp...


> That's why I recommend nested VPN chains. You just need to trust that most of the VPN services in your chain are not working together for an adversary.

In general in cryptography, it's possible to combine secure things and get a less-secure thing. I don't know if this is the case for VPNs. Have you analysed the issue in depth?


This isn't about cryptography. It's just that any VPN server must know the IP addresses it's communicating with. So with one VPN, someone can just look at the traffic logs.

But with two VPNs chained, neither one knows both your IP and the IPs of the sites that you access. And the longer the chain, the more collusion an adversary needs.

But yes, you're also encapsulating multiple levels of encryption. I do appreciate that this potentially increases the risk of breaking the encryption. However, I'm willing to accept that in exchange for not being so trivially findable.
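The visibility argument can be sketched as a toy model (hop names are made up, and this ignores traffic-correlation attacks entirely):

```python
def hop_views(chain):
    """For each intermediate hop, return the (previous, next) pair it sees.
    No single hop ever sees both endpoints of the chain."""
    return {chain[i]: (chain[i - 1], chain[i + 1])
            for i in range(1, len(chain) - 1)}

views = hop_views(["your_ip", "vpn1", "vpn2", "vpn3", "website"])
assert views["vpn1"] == ("your_ip", "vpn2")  # knows you, not the site
assert views["vpn3"] == ("vpn2", "website")  # knows the site, not you
# Only collusion across vpn1..vpn3 links "your_ip" to "website".
```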


I didn't mean that you'll necessarily break the crypto primitives, more that these kinds of systems can interact in weird ways if you don't go through everything very carefully.

How do you set up the chains? Do you mean:

    You ---[vpn_1]---> your server        
    server ---[vpn_2]---> ISP
Or do you do

        ===[vpn_1]=====
   You  ----[proxy_server_1]--- ISP 
        ===============
Or differently? As far as I know there's no COTS way to tunnel a VPN inside a VPN.

Off the top of my (nonexpert) head: some adversaries can try a bunch of different correlation attacks, and if they can associate your traffic with any of your VPNs they can work back to the transactions you used to pay for it. So having more VPNs gives them a greater chance of being able to access the data of one, and a greater chance of you screwing up an anonymous purchase.

Also I'm pretty sure you mostly have to trust the last VPN in your chain anyway. So why not just use only that one?


It's simple.

I connect to a server of one VPN service. Then, through that VPN tunnel, I connect to a server of a second VPN service. Then, through that VPN tunnel, I connect to a server of a third VPN service. That's what I mean by nested. I don't use custom VPN clients. Just OpenVPN.

In the setup that I'm currently using, one VPN client runs in a VirtualBox host machine. And the others run in pfSense VMs.[0] Each pfSense instance uses ~500MB RAM, however.

But it also works in a single machine, using routing and iptables rules.[1] For a chain of three VPNs, these OUTPUT rules prevent leaks, with internet traffic restricted to the third VPN tunnel (tun2):

    # Only the first VPN server is reachable on the physical NIC
    -A OUTPUT -o enp0s3 -d VPN0 -j ACCEPT
    # Each tunnel may only carry traffic to the next VPN server
    -A OUTPUT -o tun0 -d VPN1 -j ACCEPT
    -A OUTPUT -o tun1 -d VPN2 -j ACCEPT
    # Everything else must exit through the innermost tunnel
    -A OUTPUT -o tun2 -j ACCEPT
    # Anything that doesn't match is dropped, so nothing leaks
    -A OUTPUT -j DROP
VPN0, VPN1 and VPN2 are the IPv4 addresses of the VPN servers that I'm using. From different VPN services. The script that sets up the chain alters the routing so that each VPN connects through the previous one. There's no isolation between the different VPN clients, but that also makes it easy to switch chains periodically. Sort of like Tor.
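Since the rule pattern is mechanical, a hypothetical helper can emit the OUTPUT rules for a chain of any length (the interface name and server IPs below are placeholders, not my actual setup):

```python
def chain_rules(nic, vpn_servers):
    """Emit leak-blocking OUTPUT rules for a nested VPN chain:
    each interface may only reach the next VPN server, all other
    traffic is confined to the innermost tunnel, and the rest drops."""
    ifaces = [nic] + [f"tun{i}" for i in range(len(vpn_servers))]
    rules = [f"-A OUTPUT -o {iface} -d {server} -j ACCEPT"
             for iface, server in zip(ifaces, vpn_servers)]
    rules.append(f"-A OUTPUT -o tun{len(vpn_servers) - 1} -j ACCEPT")
    rules.append("-A OUTPUT -j DROP")
    return rules

rules = chain_rules("enp0s3", ["10.0.0.1", "10.0.1.1", "10.0.2.1"])
assert rules[0] == "-A OUTPUT -o enp0s3 -d 10.0.0.1 -j ACCEPT"
assert rules[-2] == "-A OUTPUT -o tun2 -j ACCEPT"
assert rules[-1] == "-A OUTPUT -j DROP"
```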

I pay for the first VPN with a credit card. But the rest I pay with Bitcoin, which has been mixed multiple times through Tor, using a different mixing service at each step. I use Bitcoin that's been mixed more times for VPNs that I connect to less directly. And I never lease different VPNs with the same batch of mixed Bitcoin.

The last VPN knows only what I connect to, and what VPN I connected from.

0) https://www.ivpn.net/privacy-guides/advanced-privacy-and-ano...

1) https://github.com/mirimir/vpnchains


Thanks for the reply. Guess SE was wrong about the configs. If you don't mind me asking, why cc for the first hop?

Also, vpnchains explicitly says it can't protect against adversaries with a global view of the network. That was the main weakness I assumed this would probably have. I'd buy that it protects from, for instance, VPN providers selling logs.


I use a card for the first one because it's gotta know my IP, and my ISP can link me to my IP.

Nothing can protect from adversaries with a global view of the internet. At least, for low-latency activity. If the latency is high enough, and there's lots of chaff, there's maybe a chance. Or at least, there are too many false positives to deal with.

Otherwise, you just make it harder. Once you have more than a couple VPNs in a chain, and especially if you switch chains frequently, latency varies a lot. So that likely helps too.

Edit: There is nothing in OpenVPN about routing one VPN through another. That's all done in the host networking stack, or using multiple routers with NAT forwarding. There is, however, a SOCKS5 proxy option, which you can use to route a VPN through Tor. Or you can just run a VPN server as an onion service.
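For reference, the SOCKS5 option looks roughly like this in an OpenVPN client config (the hostname and port here are placeholders; 9050 is Tor's default SOCKS port, and OpenVPN has to run in TCP mode to go through a SOCKS proxy):

```
# client.conf fragment: route this OpenVPN connection through a local
# Tor SOCKS5 proxy; requires TCP mode
client
proto tcp-client
remote vpn.example.com 443
socks-proxy 127.0.0.1 9050
```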


You might want to look into wireguard.

It has much lower overhead than OpenVPN, and in your case the inter-connection latency/processing would be improved.


But why go through so much trouble instead of just using Tor, which AFAIK gives you 3 hops right out of the box?


I do use Tor. Which gives me three hops for clearnet sites, and seven hops for onion services. But I don't trust Tor enough to connect directly to it. Given the risk of malicious relays, and of 0days like "relay early" that increase their effectiveness. And then there's the likelihood that TLAs track all connections to the Tor network.

So the nested VPN chains isolate me from Tor entry guards. If Tor circuits get deanonymized, an adversary just learns a VPN exit. Or at least, an adversary that isn't doing global traffic analysis.

But as Mirimir, I don't need all that. I really don't need anonymity at all, except to protect my professional reputation. And being retired, that doesn't matter any more.

So anyway, nested VPN chains are arguably enough for people who aren't juicy targets. They almost certainly aren't as good as Tor, unless Tor is more compromised than it seems.

But what they are is lots faster. There's somewhat less latency than Tor. And bandwidth is much higher.


Switched to Mullvad [0] after PIA was bought by a malware distributor [1][2] and hired Mark Karpeles as their CTO [3].

The network is faster. Wirecutter recommends them [4]. If you want it, the Mac app is stable and just works. And they support both OpenVPN and WireGuard.

[0] https://mullvad.net/en/

[1] https://torrentfreak.com/private-internet-access-to-be-acqui...

[2] https://restoreprivacy.com/cyberghost/

[3] https://www.engadget.com/2018/04/22/mt-gox-chief-returns-as-...

[4] https://thewirecutter.com/reviews/best-vpn-service/


> after PIA was bought by a malware distributor

This is a false statement [1]

> and hired Mark Karpeles as their CTO

This is also a false statement [2]

[1] https://qht.co/item?id=21681711

[2] https://qht.co/item?id=21821832


> This is a false statement

My comment cites a number of third-party criticisms. Yours cites a PIA PR statement.

> This is also a false statement

Mark Karpeles being hired as the CTO of PIA's parent company versus PIA per se is a legal distinction, not a practical one.

His having no "operational role" is vague, unverifiable and misleading with respect to his title. PIA sells trust. Its parent company hired a convicted liar to its C suite. These are good reasons to decamp.


> Yours cites a PIA PR statement.

This is false.

The rest of your opinions are, as you said, unverifiable.

Regardless, I appreciate your opinions and we are going to learn from them to become an even more solid company, to ensure that we can continue to deliver on the promise that has been verified multiple times: that we are the only proven no-log VPN provider.

Good luck with your choice and hope it works out for you. You can continue to choose VPNs on trust -- I suggest that everyone else choose VPNs on verification.

Cheers,

Andrew


> The rest of your opinions are, as you said, unverifiable

Calling a Tokyo court's criminal judgement [1] an unverifiable opinion is problematic. (It explains the hire, I suppose.)

> we are going to learn from them

For what it's worth, I believe you're well meaning. But you've given away the keys to the kingdom. Your intentions are less relevant. (Best case, I imagine, is you pull a Jan Koum in a couple years.)

If a VPN user wants a vetted, verified, open-source client, they should use OpenVPN or WireGuard's. (If they want a convenient client, they use the VPN provider's.) The unavoidable trust in a VPN relationship is in what's running on the servers. That's a people problem. And PIA has serious people problems.

[1] https://www.wsj.com/articles/former-mt-gox-bitcoin-bigwig-fo...


[flagged]


> assumptions without verification

With all due respect, if you live in a world where criminal convictions are an unverified assumption, that’s problematic.

You’ve linked to zero external sources. Others have. PIA et al hired a convicted liar. Competitors haven’t. PIA et al sold to a company that distributes malware, gutted another VPN with dark patterns and then lied about all of that; others haven’t.

These are substantiated points. Linking to comments from your colleagues is as disingenuous as wrapping a Kape VPN as trustworthy.


It is not an assumption or false statement that I had bitcoins in Mt.Gox and now want nothing to do with Mark Karpeles or anything under him :)


It would be nice if you added the disclaimer that you are the founder of PIA, not some unbiased grassroots guy.


Mullvad does not have split tunneling, so it's a no-go for me. Also their client looks like ass and is a RAM-hungry Electron app.


Their client? Just use OpenVPN or wireguard?


Any reason not to recommend TunnelBear, which is Wirecutter's #1 recommendation? I don't use either of them so I don't have a horse in this race, just curious why you picked the #2 recommendation and not the #1.


TB was acquired by McAfee, if you trust them then good luck.


No strong reason. Both good candidates, and both are better than PIA.

I liked the e-mail-free sign-up and WireGuard support. It also seems marketed to moderately-technical users (setting up WireGuard on iOS is trivial). That makes it less likely to get acquired by scammers.


If you are into torrenting then they won't be good match either.


Country of origin maybe?


Torguard is another great option. They also support OpenVPN and WireGuard (along with PPTP), they're half the price, and in my testing they were slightly (~10%) faster than Mullvad.

If you got the money I'd recommend Mullvad though.


Not sure I understand this. Torguard seems to be $10/month, and even the multi-month plans seem to cost at least as much as Mullvad.


They have a lifetime 50% discount code that's been pretty much permanently active since 2015 or something. I'm not sure if they'd be happy if I posted it here, but it's extremely easy to find.


I'll have to check them out. I was going by thatoneprivacysite and saw BlackVPN had good reviews. Trying them out now.


Wasn't PIA bought by an offshoot of a shady malware-producing company called Kape? Isn't PIA's new CTO Mark Karpeles, one of the ringleaders of the Mt. Gox debacle, who stole millions of dollars and was convicted for it in Japan? Nothing says safety like malware and felons in your VPN provider.

Reddit thread with a decent timeline of events: https://www.reddit.com/r/PrivateInternetAccess/comments/e9fo...


As an r/buttcoin regular I couldn't help but chuckle loudly in the office here reading this. Had to double check and laughed even more. The utter absurdity of it all.

He actually seems like a decent guy who got in well over his head and did prison time for it in Japan, but as the saying goes "all publicity is good publicity", you literally can't do wrong in the corporate world.

Patiently waiting for Elizabeth Holmes next endeavour.


I may have been laying it on a little thick, but honestly, letting him become CTO of a major VPN provider... A whole sea of capable, professional tech leaders out there, and they choose a blockchain enthusiast with a criminal record and a history of incapability. I get the feeling that Kape wants someone in charge who won't say no to shady practices.


I get where you’re coming from. If I’m going to choose one VPN company to use, it won’t be one with him as CTO.


Exactly. It's almost like they are trying to filter out anyone with sense.


This does seem conveniently timed, right after the controversy of a few weeks ago has died down.

Hopefully wireguard will be accepted into the mainline kernel soon and it will become much easier for people to run their own VPNs.

I will not be using PIA for the remainder of my subscription, and won't be renewing it.

Previous discussion here: https://qht.co/item?id=21679682


I wonder if it's more than coincidence, and whether users started haemorrhaging.

I turned off auto-renewal the day I saw that announcement and will be looking for a new VPN in ~4 months or so.

Given that most people with VPNs are likely privacy conscious, it would be amazing if a good percentage weren't doing the same, given the reputation of the group that took over.


Same, moved to Windscribe but was torn between them and Mullvad. PIA lost me, and they won't bring me back short of a full refund of my existing sub with them and with Windscribe.

But I'd just take the money and move on anyway.


This looks like a PR move after their really bad recent press.

Open-sourcing is just seen as easy, guaranteed good PR, even if it only covers something less important (the client).


Hah, I hadn't realized Karpeles was involved. That's a pretty wild CTO choice.

After all that's happened, going open source is pretty much their only option.


I can see how they can open source their client side applications, and I guess they can open source their server code, but I can't wrap my head around how we can verify what exactly they are running on their servers. Like, can we ssh into them with root access and poke around running processes? I just don't get it. Anyone have a clue how else that kind of verification can happen?


it's not possible

and even if it was they could just configure their switches to mirror ports to send the traffic somewhere else for logging

this entire thing is snake oil designed to fool people that don't know any better

trust is earnt... and PIA hired Mark Karpeles, put him in charge of technology then later sold out to a malware firm

if you insist on having a VPN: there's plenty of other firms that don't have these "attributes"


What alternatives do you suggest? I just cancelled my PIA subscription and am in the market.


FoxyProxy offers a branded VPN service and the proceeds support its development: https://getfoxyproxy.org/order .

I can't vouch for it personally and all I know is that it supports an open-source extension I've relied on for more than a decade, but at least it beats contributing to Karpeles' paycheck, right?


I've heard good things about Mullvad and IPredator, both from Sweden. Generally you want to be looking for VPN providers that don't spend half their revenue on marketing.


I use Mullvad with the standard wireguard clients. Works much better than anything OpenVPN based. They are based in Sweden which seems like a reasonable jurisdiction.


Mullvad or (depending on if it has launched yet) the Firefox branding-over-top-Mullvad for the same price.


Yeah I think you're right


The biggest problem I have with this is that Mark Karpeles is, provably, a bad programmer...


Yeah, it's not just the fraud thing; Karpeles shit the bed on Mtgox multiple times. I actually kind of believe the narrative that it was incompetence more than malice, at least to start with.


My name is Chris M and I am the CMO for Private Internet Access.

PIA management has prepared the following statement regarding Mark Karpeles’s role at LTMH and we hope it addresses your doubts or concerns:

In 2018, Mark Karpeles was appointed CTO of LTMH, which was the parent company of Private Internet Access prior to KAPE. However, Mark never had an operational role in PIA and subsequently never had access to any part of the PIA infrastructure nor any role in the planning or execution of the day-to-day operations of PIA.

The role of CTO at PIA has been run collectively by Tommie P. (SvP Software, joined January 2017) handling the software development side and Gaurav G. (CIO, joined January 2015) handling the operational and infrastructure side of the business.

Mark’s role at LTMH has predominantly been to manage development teams working on FutureFC, general R&D, and providing a broader perspective of the industry as a whole rather than PIA specific issues. His work is best summarized as a valued external consultant for specific discussions related to the advancement of our privacy and security efforts.

As part of the merger with KAPE to become Private Internet, Mark currently has no operational role in the merged entity and is pursuing other endeavors.


Are all your execs now spreading misinformation in random HN threads? Was your CEO not enough?

Making it sound like Mark Karpeles had no real association with PIA is disingenuous.

> However, Mark never had an operational role in PIA and subsequently never had access to any part of the PIA infrastructure nor any role in the planning or execution of the day-to-day operations of PIA.

You say this, yet PIA says differently in your official statement:

> Specifically, as many have wondered, he is working as the global CTO across LTM, reviewing technical architecture from an efficiency and security perspective and providing advice and guidance. https://www.privateinternetaccess.com/blog/2018/04/why-i-hir...

Sure, maybe it's the parent company, but based on how they were structured it seems likely that the CTO of LTM had a lot of influence. Moreover, he was hired as a CTO somewhere by the same people making the big decisions at PIA.

Also we can look through your CEO's comments from 2 weeks ago about Mark. E.g.

>You don’t know a person until you see them in the toughest of situations. Mark is a good man and a great developer who fought on the front lines and I’ve known for a long time. https://qht.co/item?id=21684155

Seems like he trusts him.


Is this statement a joke? Who makes a statement from a company by saying they are making a statement by the management? What's the point? To look official? Next, this "statement" says nothing. Of course that's the point, but at least be creative. You should be, because I can imagine how challenging it is to defend something universally indefensible.

How dumb do you have to be to make this guy an officer. He could've been the chef in the basement washing just as much money.

Associations with Mark Karpeles are like wearing blackface: it's never okay. Never. It doesn't matter what the reason is.


I'm sure his role is perfectly defensible and his experience a valuable asset for PIA in this space, but I think the issue is the optics of such a hire and what it says about other decisions that have happened or have yet to happen regarding the sale/direction of PIA. It's just especially odd considering what seems to me like obvious overlap between "People interested in having VPN service", "People interested in Bitcoin", and "People who are distrustful of Mark Karpeles".


I can understand how you would feel that way if Mark was hired as the CTO of PIA, but as we stated before, he never had any operational role or access to anything at PIA.


Yep, I agree. That's why I said it was perfectly defensible. Getting burned by Mark in the past causes me a bad _feeling_, though, and the world of feeling and the world of logic often don't intersect, even within the same person. Basically I acknowledge that my feelings aren't logical, but that doesn't make them not exist.


How can we verify that he doesn't contribute to the codebase? Are there some git logs or something we can look through?


As a marketer: whoever wrote this statement should not work in marketing... at best it is simply unclear. At worst it sounds inconsistent with the truth, which doesn't help the 'trust us' goal of the statement.


Chris M, I appreciate you responding to this. I was a PIA customer for years, until recently, and it's nice hearing something from your company other than "Don't trust, verify!". Direct information like this goes a long way towards reestablishing PIA's credibility. Unsubstantiated (and likely impossible) marketing ploys do the opposite. Most of what I've read and deduced about the recent changes in your company's structure has been extremely negative, and full of red flags. In my opinion the first half of PIA's new marketing slogan is an ironic reality: "Don't trust."


>Verifiable Zero Access: Start! – We’re building an internal roadmap to create a transparent and verifiable infrastructure, in which no one, including ourselves, is permitted access to the servers through which VPN traffic flows. We will keep you abreast of all progress, and moreover, this will be a community-led effort. Verifiable Zero Access proves that we cannot log or monitor your traffic.

Is this going to be "nobody can access it because we locked ourselves out (trust us)", or some sort of trusted computing solution that's cryptographically verifiable?


I've heard it's possible to set something like that up on AWS, but of course Amazon could still access it.


I've built a system like this recently for a payments platform. Access _is_ possible but requires rebuilding the environment (and thus blowing everything away) as well as admin access.


Is it possible to verify that you cannot access said system, though? How would that even be done? In most scenarios I can imagine, you still rely on the server telling you something about itself... which it can lie about.


You built it on AWS? Using some application of CloudHSM?


I set this up to force myself to stop SSHing to boxes all the time and trust in my own automation. It took some effort and setup is frustrating but it ended up being a net positive.

I baked everything with Ansible and did last touch setup with user-data, and deployed it all with Terraform.


>but of course Amazon could still access it

That depends on the key management. Even with the default encryption and key-management facilities available for EBS, S3, and RDS, Amazon can be locked out; the key resides with the owner.


So how does a machine boot?


EC2 sends the encrypted DEK (Data Encryption Key) from the volume metadata to KMS (Key Management Service); KMS decrypts the DEK with the CMK (Customer Master Key); EC2 stores the decrypted DEK in hypervisor memory to decrypt the volume.
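For anyone curious, the envelope-encryption flow described above can be modeled in a few lines. This is a toy sketch only: XOR with a hash-based keystream stands in for AES, and a plain Python object stands in for KMS; none of this is how AWS actually implements it.

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256-based keystream.
    Illustration only -- real KMS/EBS encryption uses AES-256."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

class ToyKMS:
    """Stands in for AWS KMS: the CMK never leaves this object."""
    def __init__(self):
        self._cmk = os.urandom(32)  # customer master key, held by "KMS" only

    def generate_data_key(self):
        dek = os.urandom(32)
        # return plaintext DEK (used once, then discarded) and encrypted DEK
        return dek, keystream_xor(self._cmk, dek)

    def decrypt_data_key(self, encrypted_dek: bytes) -> bytes:
        return keystream_xor(self._cmk, encrypted_dek)

# Volume creation: only the *encrypted* DEK is stored in volume metadata.
kms = ToyKMS()
dek, encrypted_dek = kms.generate_data_key()
volume_ciphertext = keystream_xor(dek, b"root filesystem contents")

# Boot: "EC2" sends the encrypted DEK to "KMS", gets the plaintext DEK back,
# and keeps it only in (hypervisor) memory to decrypt the volume.
dek_in_memory = kms.decrypt_data_key(encrypted_dek)
print(keystream_xor(dek_in_memory, volume_ciphertext))  # b'root filesystem contents'
```

The point being: the volume only ever stores the encrypted DEK, so without the CMK (which never leaves "KMS") the metadata alone is useless.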


Something somewhere has access to the key, though, unless it's somehow always kept inside a Secure Enclave / TPM / similar.


Who owns the hypervisor memory?


The same entity which owns every other hardware in this infrastructure.


So then "Amazon could still access it", right?


According to Amazon,

"AWS KMS is designed so that no one, including AWS employees, can retrieve your plaintext CMKs from the service. The service uses hardware security modules (HSMs) that have been validated under FIPS 140-2, or are in the process of being validated, to protect the confidentiality and integrity of your keys regardless of whether you use AWS KMS or AWS CloudHSM to create your keys or you import them into the service yourself. Your plaintext CMKs never leave the HSMs, are never written to disk and are only ever used in the volatile memory of the HSMs for the time needed to perform your requested cryptographic operation. AWS KMS keys are never transmitted outside of the AWS regions in which they were created. Updates to software on the service hosts and to the AWS KMS HSM firmware is controlled by multi-party access control that is audited and reviewed by an independent group within Amazon as well as a NIST-certified lab in compliance with FIPS 140-2."[1]

[1] https://aws.amazon.com/kms/faqs/


That sounds impressive. Thanks.


That depends. If it’s “bring your own key” then there’s nothing they can do.

That said - I’m not familiar enough with how they’d build that infra at scale to make it cost effective.


It’s almost certainly “trust us.” The only way to access the internet without relying on trust is through TOR.


> It’s almost certainly “trust us.” The only way to access the internet without relying on trust is through TOR.

At the end of the day, you really don't know who is monitoring or what is running on Tor exit nodes, or moreover, if you're routing through a series of nodes that are controlled by the same anonymous operation.

Only with proof that a server is not being tampered with, proof of what is running on that server, and proof that what is running verifiably locks our access out of the system and does not log, will you have proof that you are truly private.

It requires all of the above, and it's a hard problem to solve, but we're committed to solving it at Private Internet Access, and that's where we are headed.

Only when this is deployed will people have continuous and verifiable privacy, for the first time since the birth of the internet.

We were called the 'verified' no-log VPN provider because we were the only legally proven no-log VPN, but we're going to take it a step further and become verifiable, so that you can verify at any time.


Okay, so how are you irrefutably proving a server is clean without physical monitoring?

Numerous proofs of concept have shown that general physical proximity, not even direct access to the machine, can be enough for fruitful attacks. Likewise, is every package your server is running audited and signed? I hope your updates are manually certified. I hope your platform is trusted too, and that you're auditing/approving every bit sent out by the server and sanitizing anything sent to it.


> Okay, so how are you irrefutably proving a server is clean without physical monitoring?

I imagine the idea is to have something like Intel SGX's enclave attest a hash of the filesystem image that was booted, then publish that filesystem. The filesystem should not allow any kinds of modification or login. If the machine's hosted somewhere like AWS, that quickly gets you towards the point where you could believe that it's not plausible for PIA to alter that machine once it's booted, and can see for yourself that it does not store or transmit logs.

If it's not hosted on an independent cloud provider like AWS, I don't think it's possible. A belief that the physical hoster isn't going to collude with the group deploying the machine to take advantage of physical access seems like a requirement. I might not trust Amazon, and I might not trust PIA, but I can probably trust that Amazon isn't willing to throw away its reputation by backdooring its security offerings in collusion with PIA.

(Although note that SGX claims to be resistant even against physical access -- the private key never leaves the enclave and will only sign statements in a tamperproof way.)

FWIW, I don't think this particular use case is well suited to a Secure Boot scheme, although I admire the goal. The logging could simply be happening on a machine that your packets reach before the provably clean machine, unless the very first PIA-owned machine you hit is one of these transparent end nodes, I guess?
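To make the attestation idea concrete, here's a toy sketch of the client-side check. Plain hashes stand in for a real signed SGX quote; in reality the measurement (MRENCLAVE) is computed by the CPU, signed by hardware-held keys, and verified through Intel's attestation service, none of which is modeled here.

```python
import hashlib

# Published, reproducibly-built filesystem/enclave image
# (what building the open-source code is supposed to produce).
published_image = b"vpn-gateway-rootfs-v1.0"
expected_measurement = hashlib.sha256(published_image).hexdigest()

def enclave_quote(running_image: bytes) -> str:
    """Stand-in for an SGX quote: a hash of what is actually running.
    In real SGX this is computed and signed by the CPU itself."""
    return hashlib.sha256(running_image).hexdigest()

def client_attest(quote: str, expected: str) -> bool:
    # Client-side remote attestation: accept the server only if its
    # measurement matches the hash of the published open-source build.
    return quote == expected

assert client_attest(enclave_quote(published_image), expected_measurement)
assert not client_attest(enclave_quote(b"backdoored-rootfs"), expected_measurement)
print("attestation check passed")
```

The hard part, as noted above, is that this only attests one machine; it says nothing about what the packets pass through before or after it.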


I am curious what your verification solution is. But please don't reinforce misconceptions about Tor to promote your product.

> To really DE-anonymize someone this way, you need to at least have the entry node and exit node of a Tor user... entry nodes are chosen once, and are kept for 2/3 months... if the government wants to become your entry node it has N% chance to be picked by you out of 6000+ nodes. If I am lucky, and pick a non-government node, the government will have to keep all their nodes running (costing lots of money) for at least two months before they get another chance of becoming your entry. Also it takes At least 8 days, maximum of 68 days before it gets up to full speed, to become a Guard node, as you see, this is slow, expensive, and generally a very unattractive way of finding a Tor user. While yes, they COULD do it, it wouldn't make sense for them to do it as there are a lot of attacks out there that are a lot cheaper to execute and try out. In the Tor stinks slides that were leaked in the Snowden documents, it was stated that they could de-anonymize a very small fraction of people, but it can not be used to target specific people on demand. which makes this expensive attack, not worth it in a real life scenario.

https://write.privacytools.io/my-thoughts-on-security/slicin...

VPNs are good for hiding your traffic from your ISP, but it's trivial for the government to issue a warrant and gag order on your VPN traffic. So I'm curious what your solution is.


As much as I love Tor -- or at least, find it useful -- that's a horrible misconception.

You're trusting a bunch of stuff, in using Tor. There's no way to know what share of relays are malicious. Or how many undisclosed vulnerabilities are in active use. Or whether at least some Tor Project staff are failing to disclose malicious relays and vulnerabilities.

You just don't know.

That doesn't mean you avoid using Tor. Because, in theory, there are no better options. But it does mean that you use it carefully.

For example, always hit an entry guard through at least a VPN. Better, through nested VPN chains. And use firewall rules to prevent leaks. Not just Tor browser in Windows.


I suppose you are correct, Tor is not 100% trust free. But with a VPN, all trust is placed in a single party. With Tor, trust is divided between nodes - connecting to a single malicious node won't hurt you. You don't have to trust the software either, since you can read the source code to ensure trust is divided properly. But even with an open-source VPN client, you have to trust the server.

> Always hit an entry guard through at least a VPN.

That's a horrible misconception. There is no added benefit to connecting to Tor through a VPN. It only worsens the risk - you're essentially creating a permanent entry node with a money trail.


If you only use hidden services then most of the risks are mitigated to the point of being almost entirely benign.


Well, there are normally seven hops for that, not just three. So malicious relays are less problematic.

But then there's the risk from undisclosed vulnerabilities. In early 2014, CMU researchers deanonymized an unknown number of Tor users and onion sites using the "relay early" bug. The bug allowed relays to communicate covertly in the process of circuit establishment. So malicious relays run by CMU could identify each other, and cooperate to deanonymize circuits.

And then the FBI subpoenaed all their data. It took over at least one onion site (Playpen) and then pushed its NIT malware to perhaps hundreds of users. Who were then arrested and prosecuted.

So when did the Tor Project learn about the "relay early" bug? They claim that they didn't know, and didn't notice the suspicious relay activity, until after the CMU people went public. But how do we know? Indeed, from what I've seen of the FoIA production about the Tor Project, I'm not so confident that they don't cooperate with the FBI etc.


As far as I know, even when you use hidden services it is enough for the first and third relays to be malicious in order to de-anonymise a user. Tor's security is barely enough (if it is enough at all): there is no reason to believe that the FBI/FSB/etc. don't have enough relays up to de-anonymise most users. I2P is much better in that regard.

Heck, until recently the Tor team used 80-bit truncated SHA-1, 1024-bit RSA, and 128-bit AES for their traffic, not to mention that the Tor Browser ships with JavaScript enabled by default.


In theory, I don't see how entry and exit are enough to deanonymize. They don't even know that they're in the same circuit unless they have a covert channel (like relay early) or manage traffic correlation during the ~10 minute circuit lifetime.

And then, using onion services, there are two three-relay circuits that meet at a rendezvous point. One picked by the onion, and the other by the user. So even deanonymizing one of those circuits would be insufficient.

But that's all theoretical. In practice, there are likely undisclosed vulnerabilities. Perhaps lots of them.

I do agree that the Tor browser standalone is rather a joke. Especially if it's in Windows. You at least want to be using Whonix. And if you really care, Whonix in Qubes.


> They don't even know that they're in the same circuit unless they have a covert channel (like relay early) or manage traffic correlation during the ~10 minute circuit lifetime.

No reason to think that they would not do that.

> And then, using onion services, there are two three-relay circuits that meet at a rendezvous point. One picked by the onion, and the other by the user. So even deanonymizing one of those circuits would be insufficient.

It would be sufficient to de-anonymise one of the parties.

> In practice, there are likely undisclosed vulnerabilities. Perhaps lots of them.

My point is that you do not need an undisclosed vulnerability to break tor if you have enough resources.


One major threat factor that Tor doesn't have a bulletproof solution to, and likely never will, is correlation attacks. It's been shown to be plausible that observing the timing and size of packets, even without knowing the contents, is enough to determine that two relays are part of the same circuit.
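A toy sketch of the idea, for anyone who hasn't seen it: given packet (timestamp, size) sequences observed at two points, score how well they line up. Real correlation attacks are statistical and far more sophisticated; the flows and tolerance here are made up for illustration.

```python
def correlation_score(flow_a, flow_b, time_tolerance=0.05):
    """Fraction of packets in flow_a with a matching size seen in flow_b
    within time_tolerance seconds, allowing a constant network delay."""
    if not flow_a or not flow_b:
        return 0.0
    delay = flow_b[0][0] - flow_a[0][0]  # assume roughly constant path latency
    matched = 0
    for t, size in flow_a:
        if any(abs((t + delay) - t2) <= time_tolerance and size == s2
               for t2, s2 in flow_b):
            matched += 1
    return matched / len(flow_a)

entry = [(0.00, 512), (0.10, 1500), (0.25, 300)]
exit_same = [(0.03, 512), (0.13, 1500), (0.28, 300)]   # same circuit, +30ms
exit_other = [(0.05, 900), (0.40, 1500), (0.90, 64)]   # unrelated traffic

print(correlation_score(entry, exit_same))    # 1.0
print(correlation_score(entry, exit_other))   # 0.0
```

Note that no payload is inspected anywhere: timing and size alone are enough to pair the flows.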


Well, any system that relies on tunneling is vulnerable to correlation attacks. And drilling down by looking at traffic between autonomous systems. Unless it uses chaff to maintain constant throughput.

But will doing that be worth it to find someone like me? I doubt it. I'm just a hobbyist and writer.


As far as I know, the garlic routing that I2P uses is not as vulnerable.


> We have begun reaching out to external auditors and, in tandem, are opening up our operations to review by our users. This allows you to verify with your own eyes, whenever you want. WYSIWYG.

Technical side: color me very curious about that "our users" part. I can't wait to see what it entails.

Business side: bold move, it may prove to be a blue-ocean-like strategy, if substantiated. Interesting.

____

Some rambling about access, transparency and "open"-things

Most people don't know that under the terms of most democracies, in principle, any citizen has the right to go to any public office and request basically anything non-classified: documents, accounts, etc. The idea being that the State is but the expression of the sovereignty of the People.

Obviously, no one believes for one minute that any 'normal' citizen will be granted access to most things. It's just an artefact of idealism you hear in law classes and political activism circles I suppose (and it's grey-ish-ly somewhat applicable at the lowest, local level).

It would be interesting to see a new breed of "open-sourced" businesses granting access to customers (like States would to citizens), businesses whose value resides not in secret sauce — things like accounting, plumbing or VPN are probably "solved" matters of public knowledge. Because they can afford almost total transparency, they may as well weaponize it against rivals who can't — or won't. In markets where trust is anywhere from high-value to mission critical, this might just open wide a whole new blue ocean.


> Most people don't know that under the terms of most democracies, in principle, any citizen has the right to go to any public office and request basically anything non-classified: documents, accounts, etc. [..] Obviously, no one believes for one minute that any 'normal' citizen will be granted access to most things.

[citation needed]

In the US, there’s the federal Freedom of Information Act and state equivalents. You don’t walk into an office; you send requests online. And there are more exceptions than just classification, such as privacy. But it’s far from theoretical: normal citizens can and do obtain access to all sorts of interesting documents. Sometimes the agency will refuse to provide the information you want despite it not being covered by a valid exception, or tries to charge an unreasonable fee, but in that case you can sue them: that’s admittedly expensive and slow, but it’s possible. News organizations in particular do it all the time, and often end up winning.


Their business basically hinges on earning back the trust of their customers before the current subscriptions run out.

Having their own VPN applications put them in an advantageous position when they were trusted, but now it's a liability. I don't see it as weaponizing so much as turning the liability into a trust buyback.


I wonder if they knew they'd get the negative reception, and that's why they offered a discount on multi-year subscriptions earlier this year. I bought a multi-year contract before they raised their monthly price, and from what I've seen on their subreddit, so did a lot of other people.


>Obviously, no one believes for one minute that any 'normal' citizen will be granted access to most things. It's just an artefact of idealism you hear in law classes and political activism circles I suppose (and it's grey-ish-ly somewhat applicable at the lowest, local level).

What are these "most things" that you're talking about? At least in the US, FOIA requests are incredibly powerful and quite easy to make. You can absolutely get access to most non-classified documents, with well defined exceptions (ie, privacy of others).


> Obviously, no one believes for one minute that any 'normal' citizen will be granted access to most things.

In the Netherlands we get reasonably close with what we call WOB requests (literally: Law of Public Governance).

It takes a long time before you get the data (months) and it gets manually redacted, but a request like:

"I would like to receive all internal communication of the Ministry of Health with my name in the email body."

generally gets honored.


FWIW it's not limited to your name; at least in Sweden it can be anything, but to some extent they can claim technical difficulties with performing the search. There are cases where one such request took thousands of hours to complete.


You can do the same in the US with FOIA


> Open Sourcing the PIA Clients

Love this trend of privacy companies open sourcing clients and not servers. (sarcasm)


But the client is all that matters. If the client doesn't send them your IP address, there is no way for them to track you in any way, shape or form. Their servers would implode if they tried. It's impossible. Your system is private. Now be quiet and keep paying them.

In case it needs to be said, that's all sarcasm.


If your IP was never sent wouldn't it be impossible for them to send you back the information you requested?

edit :: I think I replied to a comment that was edited between when I hit "reply" and when my comment appeared.


There's an easy technical solution to that: you can just use a VPN service to hide your IP... when you connect to your VPN service.

It's VPNs all the way down...

I'm mostly joking, but if you purchase the second VPN anonymously (gift card paid with cash, bitcoin, etc) this would do a pretty good job of ensuring your anonymity against most casual snooping. It's not going to hide you from the FBI, but it would prevent either VPN provider from tying your browsing activity back to you (unless they cooperate with each other).


If only there were some kind of service where everyone routed each other's traffic, like a VPN with multiple hops, by wrapping every message in layers of encryption like an onion.


Admittedly, I haven't used Tor in a long time, but last time I used it, it didn't work well for video, and I assumed most people using VPNs do it to hide their porn habits.

I used to use a VPN to hide my browsing activity from my ISP (Comcast), but now I have an ISP that I trust more, so I pretty much only use a VPN on public WiFi to help protect my traffic.


If it doesn't exist yet, we create this. I vote to call it TOR.


You are correct; he was being sarcastic, I guess.


Yes. This is why IP spoofing is useless for most anything two-way.


Blockchain could probably solve that problem...


Surely with the right kind of encryption you can just read the client code and not need to "trust" the server?


Not really. The server is a termination point for the crypto.

No matter what the client code says / does, they decrypt once you hit their server, the code in the client matters very little. Don't forget, this isn't end-to-end encryption, a la Signal.


If they did open source the server, is there any way to verify that they're actually running that open source server in production?


> Love this trend of privacy companies open sourcing clients and not servers.

Same as Telegram app


The PIA Android client seems to be developed from a fork of OpenVPN Client for Android (ics-openvpn) [1], which is GPL-licensed. The About screen contains a link to the source code hosted on an S3 bucket [2], but it doesn't seem to be publicly accessible.

Interestingly, despite being GPL-licensed, ics-openvpn seems to be commonly forked by commercial VPN companies to develop their own closed-source Android VPN clients. The author is aware of this and posted a FAQ [3] out of frustration.

[1] https://github.com/schwabe/ics-openvpn

[2] https://s3.amazonaws.com/privateinternetaccess/sources/andro...

[3] https://github.com/schwabe/ics-openvpn/blob/master/doc/READM...


That sounds like an impossible promise. I wish them the best pulling it off, though; it would be amazing (and instantly copied).

Worth pointing out that PIA is openvpn compatible so you don't need their client


>Worth pointing out that PIA is openvpn compatible so you don't need their client

Are other VPN services not like this? How are you supposed to connect with Linux then? And why would anyone who cares about privacy want to use a proprietary closed-source client?


> Worth pointing out that PIA is openvpn compatible so you don't need their client

Yeah, I was wondering about this. If you don't trust their client... don't use it? They've always been quite forthcoming on how to connect via OpenVPN.


The same day the Orchid project launches, no less... https://www.orchid.com/


Are Mark Karpeles or any other well-known criminals involved in Orchid? Is it a pyramid scheme? I don't want to get burnt again or help fund scams.


Definitely not. The people behind it have great reputations, both inside and outside the crypto space. That doesn't mean it will work, of course.


so, what's the long and short on this project?


Decentralized, crypto token based network. Kind of like Tor with monetary incentives. It's a new internet! https://youtu.be/0dJPY50lpZA

Discussed earlier... https://qht.co/item?id=15576457


Yeah, but the email I got said you need to buy OXT via Coinbase. And last I knew, Coinbase implements KYC.

But maybe anonymous exchanges will start handling it.

Edit: OK, I see that it says "...and elsewhere." So I'll wait and see how it goes. I'll need some way to get OXT from Bitcoin.

And damn, it doesn't help that OXT also means the "Open Exploration Tool" for tracking Bitcoin.


It's an ERC20, which means they're generally tradable on uniswap.exchange (and this one is).


OK, thanks.

I have a very picky Firefox config in this VM, and it loads just a blank page for https://uniswap.exchange/. However I did find https://masterthecrypto.com/uniswap/ so I could try it with Chrome. But not in this VM.

So that gets me from ETH to OXT. Now I just need a way from BTC to ETH. And it's gotta be one that doesn't implement KYC. I will have nothing to do with that bullshit.


Look up "eth btc atomic swap"


Orchid is an awesome project.


Incorporate in a country that has severe legal liability for breaking your word, such as Switzerland or Sweden (they're currently in the UK), and then we'll see. Short of granting access to their hosting/IAM account, how can I verify whether something other than the VPN terminator can access traffic logs?

This raises even more eyebrows for me; I hope they back this up with some serious crypto, architectural design, and audit goals/requirements. Even on AWS, where a commenter said something like this is possible, your LBs and VPC/CloudTrail-like logs can still contain traffic and related metadata details.

Ooh, this feels like such a smoke-and-mirrors show! Will they also be adding this design change to the freenode IRC servers?

A VPN company takes over the largest IRC network, for what profit? Freenode was already advertising PIA like crazy. And now they're getting cozy with an org related to a for-profit malware (read: crimeware) operation that has since corrected its old ways?

Please prove exactly how you can guarantee that misconfigurations, previously unknown bugs, and KVM console access (or iLO) can't be used to implant a very useful backdoor.

Linux servers? Yeah... if it were me, I'd deploy the verifiable server, get KVM/iLO access, and at the next scheduled reboot edit the GRUB menu to set init=/bin/sh, mount the main filesystem as root, implant any undetectable changes, and reboot.

If you want to regain trust, do it in a way that makes you criminally liable, or via a civil contract stating that all company assets, funds, profits, and the personal assets/wealth of all involved owners will be redistributed to all unassociated customers and users (even free users and random freenode users) if you violate your promise: no logging of traffic in any way, including Layer 3/4 logs; no storing of logs that are correlations or transcriptions of observed user or traffic events; no interfering with traffic in any way; and no ties at all between any founders or associates and the UK government, GCHQ, or any governmental body or person. And if you can also clearly "open source" your revenue streams completely, disclose any current or planned means to profit, and promise to promptly disclose any future talks and plans of this nature, then I think most of the allegations will lose ground in your favor.


> your LBs and vpc/cloudtrail like logs can still contain traffic and related metadata details.

Yep, and you can learn a lot about what your users are doing by capturing Netflow data from the (router|switch) the VPN server is connected to.

I've never tried, but I imagine that with enough data (traffic) in your dataset, it might even be possible to get a pretty good idea of which encrypted/unencrypted traffic flows correspond to which users.
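Something like this toy sketch, say. The records and the byte-count-matching heuristic are entirely made up for illustration; real NetFlow analysis is far more involved.

```python
# Even with payloads encrypted, per-flow byte counts on either side of a
# VPN server can sometimes be paired up. Hypothetical records only.

users_side = {            # user -> bytes of encrypted traffic into the VPN
    "user_a": 1_250_331,
    "user_b": 87_410,
    "user_c": 4_002_117,
}
internet_side = [         # (destination, bytes) of traffic leaving the VPN
    ("video-cdn.example", 3_998_900),
    ("mail.example", 85_100),
    ("news.example", 1_247_500),
]

def pair_flows(users, flows, overhead=0.05):
    """Match each egress flow to the user whose ingress byte count is
    closest, within a tunnel-overhead margin."""
    matches = {}
    for dest, nbytes in flows:
        user = min(users, key=lambda u: abs(users[u] - nbytes))
        if abs(users[user] - nbytes) <= users[user] * overhead:
            matches[dest] = user
    return matches

print(pair_flows(users_side, internet_side))
# {'video-cdn.example': 'user_c', 'mail.example': 'user_b', 'news.example': 'user_a'}
```

Which is exactly why "we don't log" has to cover flow metadata, not just packet contents.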


I remember taking a peek at the internal files of the PIA app on Mac back in 2013, and it looked like it was mainly Ruby-based? I guess it isn't anymore.


Yeah moved away from Ruby a while back. Took a little while to stabilise but the new app is a lot better these days.


A wild D.O appears. Fun watching all this PIA stuff being discussed on HA.


I've been thinking for a while that it is in VPN providers' own interest to do cross-audits of no-log policies between one another.


I thought it rather odd that Pakistan International Airlines would make their software open-source, so I clicked on the link and found out that PIA was something else entirely. Every time this happens to me I think of Ted Nelson. Sigh.


You could avoid this by looking at the domain name shown immediately after the link text, I suppose.


I stand corrected.


These guys sponsor the Street Beefs channel on YouTube. I find the company and their guerrilla tactics fascinating.


This is a really awesome move by PIA. I have been waiting for them to open source for a while, so that's a good sign that this acquisition might actually be a good thing.

How are you planning to choose auditors? Has anyone already agreed to be an auditor?


> Random Audited Truths (I smell a rat!) – We have begun reaching out to external auditors and, in tandem, are opening up our operations to review by our users. This allows you to verify with your own eyes, whenever you want. WYSIWYG.

What if this gets used as cover to help exfiltrate data to their new parent company/third parties?



