> [H]aving a paranoid but blame-free culture is critical for security. The three employees who fell for the phishing scam were not reprimanded. We’re all human and we make mistakes. It’s critically important that when we do, we report them and don’t cover them up.
That sounds to me like a very healthy culture. I wonder how many other organizations are similar?
I'd agree with that, we all make mistakes, and I could see myself groggily waking up to some text message like that and clicking through it before my full critical faculties have come online. The folks who clicked through thought they were doing what they should.
As regards reprimands - it does sound like a healthy culture, and I've seen and worked in thoroughly rotten environments with an equally rotten security culture.
One example that sticks out, I worked in a utility company where IT would install a laptop desk lock and tie it to your desk. Then security would come by at night a few weeks later, lift the desk and remove the lock and laptop. The kicker was then they would issue a reprimand to the person who owned the laptop for failing to secure it. This was pure security theatre so people could justify their role.
All the ones that actually care about security do this.
If you want people to fix their mistakes and change, you need to encourage them to do so. If you want people to try to hide their mistakes, punish them.
(If you discover that IT has permanently reserved a backup laptop for a particular employee who is perpetually in need of a fresh image, it's time to talk to their manager about why they fall for these things and what other mistakes they might be making.)
I received no less than four Amazon refund scam phone calls today. If only we could have this accelerated timeline for “run of the mill” phishing scams, they would be so uneconomical to run and the volume would decrease significantly… sigh.
As it is I reported the two that didn’t use spoofed numbers to the owner of the number (onvoy/inteliquent as usual). Response time from them is basically infinite, as abuse reports basically go into a black hole.
Per the article, only three of 76 targeted employees were fooled by what can reasonably be described as a best-in-class phishing attack. That's actually pretty good[1], and implies that someone trained them pretty well.
[1] Though surely Cloudflare employees are, by the nature of their business area, going to be a ton more sophisticated about this than median corporate folks.
But surely that makes it even more compelling; it's incredibly good and they still got breached. The promise of hardware tokens is that you can survive even that happening, because humans are and always will be the weak link in the chain and this is an actual mitigation against even that.
My lingering question however is still how come the phishers knew so much about cloudflare, yet missed the critical key (no pun intended): they had 76 phone numbers of CF employees and a plausible call-to-action, plus knowledge about okta usage, but missed the crucial fact that CF uses hardware tokens??
The attackers hit over 300 organizations in less than 48 hours. Cloudflare just happened to be slightly different enough that it broke their automation.
> Cloudflare just happened to be slightly different enough that it broke their automation.
This type of attack just can't work on targets which are properly secured with FIDO authenticators. So it's not really "slightly different". The minimum adjustment the attackers can make is probably something like "Hire motorcycle couriers, add a step where the user is told their token needs replacing, a courier comes out and takes it, we get the token". Which is a very different ball game from "Make some web sites and install this off-the-shelf phishing toolkit".
Fine. But by Cloudflare's own statement it didn't fail because they used webauthn, it failed because they didn't use TOTP.
Think of it like a bank robber showing up to a job to crack a safe with an autodialer. He will have no problems on 9/10 banks that use dial safes, but this one has an electronic keypad. The electronic keypad being better or worse is irrelevant, it protected the bank because the robber brought the wrong tool.
It was most likely not targeted. The fact that they forked up the cash for YubiKeys is the only impressive thing; everything else is standard incident response.
This seems to me like a credential harvesting campaign. Most likely there is a trojan app that was used, which spread via a list of contacts.
It noted that the employees' families were also contacted; this tells me CF does BYOD for mobile phones.
I make a point of not using my personal phone for anything work related, because of this and many other reasons. Not only should companies pay for and manage employees' work phones, using a personal phone for work reasons should be disallowed. Work phones can be restricted to not have unapproved apps.
While YubiKeys are phish-proof, the attackers could instead have asked users to download an authentication app that steals cookies to bypass the YubiKeys, by letting them log in to the real CF portal inside a trojanized in-app browser.
> We blocked these IPs from accessing any of our services.
> ... we’re tightening up our Access implementation to prevent any logins from unknown VPNs
This requires clarification. I think this means cloudflare is completely blocking some of mullvad's ip addresses for their internal tools (or third-party services they use?) But "all services" could also mean if I run my service on cloudflare, this will affect some of my customers who happen to be using these mullvad nodes.
If it's the latter, I understand the immediate concern is the phishing attack, but on a larger level this is concerning since I can't see how this response would mirror to cloudflare's own vpn offering. If an actor were to use cloudflare's vpn offering for nefarious purposes (directed at cloudflare or someone else,) I don't see cloudflare implementing this policy aimed at their own offering.
We’re tightening up controls and not letting any non-whitelisted/approved VPNs access any of our internal tools. This has been a feature of Cloudflare Access. We just hadn’t enabled it ourselves.
Impressive timeline! In many orgs, going through all logs, let alone doing reconnaissance on attacker infra, seizing domains, and sharing threat intel with other companies in response to such a mediocre attack, is out of the question.
> every employee at the company is issued a FIDO2-compliant security key from a vendor like YubiKey. Since the hard keys are tied to users and implement origin binding, even a sophisticated, real-time phishing operation like this cannot gather the information necessary to log in to any of our systems.
This is why WebAuthn needs to become way more popular.
I am shocked to see that all the "two-factor authentication" approaches don't bother to mention the action that is being authenticated.
People often are conditioned to simply enter their passwords and their two-factor authentication into a window just because they think they are dealing with an official representative.
If you think you're immune, think of how many times you've entered your information into Plaid, which was displayed as a mere iframe on a website! How did you know it was Plaid? Because you trusted the enclosing website? And for that matter, why do you trust Plaid?
People get phished all the time by having a "bank rep" call them on their phone and have them read back these numbers, claiming that a different action is about to take place than the one being confirmed. All that could be EASILY stopped by the banks if they just added the action that's about to be confirmed. But somehow this hasn't happened in any of the incarnations.
PS: I think some places in Europe do mandate it but not USA at all!
Webauthn does cover the scenario described by associating the credentials in the hardware token with the domain name, so that you're not entering credentials into a random site or iframe; the browser and token are verifying that the URL requesting a challenge is the same as the one that generated the credentials in the first place.
Showing the action as part of the dialog doesn't really solve the threat posed here, because if you're not validating where you're answering a challenge from, then you also don't really have a way to know they're not lying about the action presented.
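Concretely, the origin check lands with the relying party: the browser (not the page) writes the origin it actually loaded into clientDataJSON, the authenticator's signature covers a hash of that blob, and the server rejects any mismatch. Here's a minimal Python sketch of just that check; the domain names are made up for illustration, and a real implementation would also verify the signature itself:

```python
import base64
import json

def verify_client_data(client_data_b64: str, expected_origin: str,
                       expected_challenge: str) -> bool:
    """Relying-party check of the WebAuthn clientDataJSON blob.

    The browser, not the page, fills in "origin", and the authenticator's
    signature covers a hash of this whole blob, so a phishing relay can't
    rewrite it without invalidating the signature.
    """
    data = json.loads(base64.urlsafe_b64decode(client_data_b64))
    return (data.get("type") == "webauthn.get"
            and data.get("origin") == expected_origin
            and data.get("challenge") == expected_challenge)

def encode(payload: dict) -> str:
    return base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()

# A login relayed through a look-alike site carries the look-alike's origin:
relayed = encode({"type": "webauthn.get", "challenge": "abc123",
                  "origin": "https://example-sso.com"})  # phishing origin
genuine = encode({"type": "webauthn.get", "challenge": "abc123",
                  "origin": "https://example.com"})
```

The point is that no dialog text needs to be trusted: the binding comes from data the user-controlled page never gets to author.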
All that the second factor mode of WebAuthn does is "This is still me". Who? Me. That's all. The token intentionally has no idea who "me" is, and nothing is preserved from one such interaction to another - it only knows that it's still the same token as it was before. This makes the technology privacy preserving yet ideal for the specific scenario we care about - proving I'm still me. I just typed my username or email or whatever into a web site, so the web site already knows who I claim to be on that web site; they're just checking that in fact "this is still me", whoever that is.
You can leverage this technology to do more, including your "action that is being authenticated" stuff, but this adds complexity we mostly don't need. It turns out we can get a really long way with just absolute certainty that "This is still me".
To really do this right, the yubikeys would need to have some kind of display to see what action you're confirming. Slightly better than nothing, the OS/browser UI could show it.
Something like Plaid is unfixable though, that's just a garbage heap of insecure patterns. I refuse to use it.
Something I was disappointed (but not surprised) to never see take off outside a few niche areas and closed systems like payment card terminals was personal smartcard readers.
Even reasonably affordable ones commonly have a 16x2 or 16x4 LCD screen. While today's protocols and drivers don't inherently tie together the data being signed with a string shown on the screen, appropriate design of protocols could enable this - then you'd have a hardware reader with PIN pad, where your PIN isn't seen by the computer, and with a screen showing you exactly what domain you are logging into, or which action you're approving.
You can implement webauthn on a smartcard just fine as well (there are open source applets for it I believe) - just a shame that hardware readers with trusted displays never really took off on desktop PC! Then again for "just login" like in webauthn, really the domain is the only thing seen. A rogue local browser app can prompt you to authenticate for an arbitrary domain, but it's challenge/response based so the attack needs to happen in real-time.
Which is why it's so unbelievably stupid that the power to make unphishable systems is being arbitrarily tied to hardware keys and secure enclaves when it could just be an app on your system, deployed for free instantly across a whole fleet of machines that manages keys behind the password protected full disk encrypted systems they already have. "Oh but then someone who compromises the user loc... sssh the phishing protection alone is worth it."
I know I'm talking about software authenticators, but the whole "movement" if you wanna call it that seems to be openly hostile to them to the point of implementing DRM via attestation because there's money to be made in selling new disposable plastic thing.
> the power to make unphishable systems is being arbitrarily tied to hardware keys and secure enclaves
It's not, though. First of all, Windows, Android, macOS and iOS all support being used as platform authenticators. No need to use an external hardware key to get the benefit. This is what most people should use (really most people should use passkeys tied to your phone once those become widely available).
I don't really understand your complaint that those implementations are tied to secure enclaves and TPMs. Every laptop issued by your company already has one of these in them. Why not use them?
I get that there's a fear that TPMs somehow enable DRM. Given that TPMs have been around for 20 years and haven't been used for DRM applications, I think that's a bit overblown. But even if you do believe that, I don't see how you can conclude that using webauthn with a key protected by your TPM somehow enables DRM.
If you are really morally opposed to using a FIDO device that stores keys in protected hardware, go ahead and run a soft FIDO token! I wrote a software authenticator for linux that uses the TPM[1], but it also has a mode where it just uses keys stored in memory. There are other good software FIDO implementations[2]. These authenticators work on basically* every site that supports webauthn. Use them, they are still going to be much better than using SMS or TOTP factors.
*It used to not work on vanguard.com but that changed when they upgraded from the old u2f APIs to the webauthn API. It also doesn't work for one enterprise site I use for my job, which checks attestation certs to ensure the key is one that was issued by the company and is FIPS compliant.
Attestation certs makes sense for a small subset of enterprise usecases and don't make sense for consumer sites. _In practice_ that is also how WebAuthn is deployed. None of Google/Facebook/Github/Twitter/AWS require an attestation to register. _In practice_, WebAuthn is not a threat to your freedom.
> Cloudflare built our secure registrar product in part to be able to monitor when domains using the Cloudflare brand were registered and get them shut down.
That's pretty disturbing. What about fair use? What if someone wants to register cloudflaresucks.com, for example? Or what if they have a name that happens to have "cloudflare" in it (such as oortcloudflares.com, where one would presumably have information about flares of ... something... in the oort cloud)?
I get wanting to stop attacks like this one, but I'd hope they don't have the power to just indiscriminately take down other people's web site just because the name is one they don't like.
Whether you like it or think it's unfair, this was litigated long ago, and Cloudflare's actions are no different than any other big brand hiring a company to search registrars for their name.
It's worth noting that cloudflaresucks.com and similar *sucks.com names have been protected by the courts in the US as non-infringing. Names like Cloudfare.com or C1oudflare.com probably can be taken down, particularly if they represent themselves as actually being Cloudflare.
Well first, I'm not (yet) concerned with the courts in this hypothetical. I'm only talking about CloudFlare's ability to abuse its position. Regardless of whether it's legal, is it right that they can do this?
Second, trademarking a name doesn't give complete ownership of that word to the trademark owner. WordPress can still exist, even though Microsoft owns the trademark for Word as a word processing application. The example I gave of oortcloudflares.com (or maybe less ridiculously something like micloudflares.com where it's about loud flares of sound coming from microphones) shouldn't infringe on their trademark in any way since they have nothing to do with networking. But again, all of that is irrelevant if they can just throw their weight around and crash any website with a name they don't like. Many individuals and small businesses wouldn't even have the money to take them to court.
> I'm only talking about CloudFlare's ability to abuse its position.
Yes, I felt an eyebrow raise a bit when I read about Cloudflare getting the site taken down. It is a minor super-power.
That said, any power can be abused, but that does not mean that it will be abused, or that the potential for abuse means that we should eliminate that power.
In the case of CloudFlare, they so far seem like a responsible infrastructure provider, with a business model that aligns their incentives with being trustworthy, so I'm OK with it.
In contrast, I wouldn't even consider trusting any Meta/FB org with anything like this power, as their obvious behavior has been massively untrustworthy, and their business model makes it easiest to be untrustworthy.
The question is whether we are at risk of such power being extended beyond ICANN, Cloudflare, etc. to the likes of untrustworthy players like Meta, Alphabet, etc., and if so, what to do about it?
My point about litigation was sort of about that. You're reminding me of Slashdot in the 90s[1]. You're making good points and I don't disagree with them, but also a lot of this has already been covered, up to and including in the courts, and certainly within icann (et al). There are a lot of rules/agreements in place around how these things play out - it's worth investigating to see for yourself. One positive thing to keep in mind also is that the EFF and other groups keep a pretty close eye on those sorts of shenanigans.
[1] Just to be clear, I do not mean this as an insult, just a reference point; these discussions were common on Slashdot back then, as all the things you mention either happened or were hypothesized by various "governing bodies" or "agitators" like the EFF.
I’m not clear how owning a registrar gives them a unique view into domain registrations at other registrars. Does being a registrar come with special privileges like programmatic unmasking of “whois protection?”
They also already have a source of DNS traffic to mine for new registrations. Surely a phishing site would show up there fairly soon after registration. So does being a registrar come with more benefits than just timing?
If it does, then shouldn’t the worry be more broadly about Cloudflare potentially abusing this system to deanonymize the owner of _any_ domain, not just those they deem trademark infringing?
You get a feed of what’s registered at a velocity that’s not publicly available.
In addition to that, controlling the registrar also allowed us to take advantage of features like registrar and registry locks managed by policies we had full control of. Auditing the policies of other registrars, even the supposedly secure ones, freaked us out enough to spend the money to build our own.
Thanks for the reply, and a day later at that :) You really do set a great example.
I’m still not clear though, does that feed include Whois protection, or is the real identity of each registrant available to you even if the domain was registered elsewhere?
I understand your slippery slope argument and can agree on some level, however the law often does litigate on these grey areas based on intent.
The intent of the attacker in this case was extremely evident- the page served at that domain clearly used cloudflare’s trademarked logo in an attempt to fool the user into thinking this was an official cloudflare page. The intent to deceive is clear.
In your case, I would imagine your page would avoid making itself appear as official cloudflare marketing material. In fact, I would imagine you would take pains not to be confused with the official cloudflare site. So it would be pretty clear the intent is not to deceive and you would have a very good leg to stand on if they tried to take down your domain.
That's actually pretty standard for copyright infringement - and this goes well above copyright infringement. Impersonating someone to steal or defraud a company or person is definitely illegal, and seizing the assets used to commit fraud is a reasonable response.
If there were grey areas (e.g. the cloudflaresucks.com example given above, or a parody), I would get the concern. But this seems pretty cut and dry
The concern isn't about the matter itself, it's about companies seizing private property without having to go through the court system. Seizing the domain is likely a reasonable response, but why exactly should businesses be allowed to play judge and enforce the law like that?
> Since the hard keys are tied to users and implement origin binding, even a sophisticated, real-time phishing operation like this cannot gather the information necessary to log in to any of our systems.
I'm struggling to figure out what this means and how it works. A cursory Googling didn't help.
Keys tied to users -- during registration, the authenticator generates a new ECDSA key pair locally and offloads it to the PC, to be sent to the server. The public component is exported in plaintext and is the verifier data. The private component is sealed by symmetric encryption using a key held only on your device. Part of the sealing includes the canonical representation of the domain, protocol and port for the site in question.
Origin binding -- when you revisit a site, the site sends back the encrypted blob (the offloaded private key). Your token verifies and decrypts it, signs a challenge using the key, and sends that back to the server as a response, to be verified using the stored public key. Part of this unsealing process ties it to the domain, protocol and port of the origin site making the request, so if a phishing site acts as a relay, your token won't generate a valid response: the phishing site's origin differs, so the offloaded private key won't decrypt.
If you look at the threat model here, if your token and its implementations of crypto are secure, you really need to get rogue software onto the client device, which can send arbitrary requests over USB to your security token, and trick the user into proceeding. At that point it's "game over", as you have code execution on the endpoint and there are other ways to achieve your goals from there.
I'm not familiar with FIDO2's changes, but FIDO1/U2F stuff.
What happens is the browser signs the request to the security token with the website URL, and the security token's response is derived from both the secret and that URL. So if you try to log in to d0main.com instead of domain.com and get a U2F second-factor prompt, the browser will generate a request based on d0main.com. Even if the malicious site, d0main.com, copied/repeated the public key info from domain.com's second-factor request, the browser would hash and sign the request as coming from d0main.com. The resulting login token would not match what domain.com expected, and the login would fail.
So the beauty of this protocol is that, by having the browser be a trusted intermediary, you have to login to the correct website for the 2-factor U2F token to create the right response. You cannot be phished unless you can trick the browser as well into signing your malicious site's request with the real domain/url.
This does not work with 'enter a code', sms, or those apps that prompt you to say you approve of the login. All of those are subject to an attack where the legitimate login system gets proxied by an attacker.
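To make the contrast concrete, here's a toy Python comparison. HMAC stands in for the token's ECDSA signature, and all keys and domain names are invented for the sketch: a relayed TOTP code verifies just fine, while an assertion signed over the phishing origin does not.

```python
import hashlib
import hmac
import struct
import time

# --- TOTP: the code knows nothing about origins, so a live relay works ---
def totp(secret: bytes, t: int, step: int = 30) -> str:
    """Standard RFC 6238 six-digit code from a shared seed and a timestamp."""
    msg = struct.pack(">Q", t // step)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

secret = b"shared-totp-seed"          # invented for the sketch
now = int(time.time())
victim_code = totp(secret, now)       # typed into the phishing page...
relayed_code = victim_code            # ...and forwarded verbatim: accepted

# --- U2F-style: the browser bakes the origin into what gets signed ---
device_key = b"per-site-private-key"  # stand-in for the token's ECDSA key

def sign_assertion(key: bytes, challenge: bytes, origin: str) -> bytes:
    """The signed response covers both the server's challenge and the
    origin the browser actually loaded - the user can't override this."""
    return hmac.new(key, challenge + origin.encode(), hashlib.sha256).digest()

challenge = b"server-nonce"
# The victim's browser is on the phishing origin, so that's what it signs:
phished_sig = sign_assertion(device_key, challenge, "https://d0main.com")
# The server verifies against the origin it actually serves:
expected_sig = sign_assertion(device_key, challenge, "https://domain.com")
```

In the real protocols the origin travels in signed client data and the signature is ECDSA; HMAC here just makes the "signature covers the origin" property executable.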
FIDO2 can optionally support "usernameless" login. After enrolling, your authenticator can keep all the details it needs to authenticate you somewhere, and then "logging in" becomes go to the right site and use your fingerprint or whatever on the authenticator. No step where you type in an email address or anything like that, you are you and your authenticator knows exactly who that is.
With FIDO1 nothing lives on the authenticator. If I drop my authenticator on the subway, and you find it, unless you somehow guess my identity and try to log in as me, the authenticator gives you no clue (I mean if I wrote my name on it in Sharpie that would tell you, but the technology itself doesn't) and you can just use that authenticator, might as well, free authenticator. Probably a bad idea if you're Edward Snowden but for Mr Average it's safe.
But with FIDO2 your actual credentials can live on the authenticator, which means it knows (in some sense) who you are. On the other hand this mode should be protected with a second factor (since the authenticator itself is no longer the second factor) such as a fingerprint sensor, or maybe a PIN lock.
You got the effect of FIDO1 right, but the mechanism is a bit cleverer. On a cheap device this relies on Authenticated Encryption. When a site says "Prove you're still you" to a FIDO1 authenticator it needs to provide a huge "identifier". Well, for cheap devices (maybe not an iPhone, but say a Yubikey) that "identifier" isn't really just a very large index into a table; they've only got a tiny amount of flash. Instead it's actually your private key for that website, encrypted by the authenticator using its own symmetric key, and the AE is used to confirm it is the correct website. If it isn't, the key doesn't decrypt, and that's exactly the same scenario as if you plugged in the wrong authenticator, a thing people do all the time.
From the authenticator's point of view, suppose you've got a nice red authenticator and also a blue one. You have enrolled the blue one at Facebook but not the red. Let's see three scenarios
1a. You forget and try to use the red authenticator to sign in to the real Facebook
1b. Facebook says prove you're still some-huge-ID-made-by-the-blue-token-for-Facebook
1c. Your red token tries to decrypt the ID, using its own symmetric key, and the knowledge this is for facebook.com. The Authenticated Encryption says... No.
1d. Your browser says nope, try a different token?
2a. You are being phished and try to use the blue authenticator to sign in to Fakebook
2b. Fakebook sends over the identifier from the real Facebook site, some-huge-ID-made-by-the-blue-token-for-Facebook
2c. Your blue token tries to decrypt the ID, using its own symmetric key, and the knowledge this is for fakebook.com. The Authenticated Encryption says... No.
2d. Your browser says nope, try a different token?
3a. This time you remembered to use the blue authenticator to sign in to the real Facebook
3b. Facebook says prove you're still some-huge-ID-made-by-the-blue-token-for-Facebook
3c. Your blue token tries to decrypt the ID, using its own symmetric key, and the knowledge this is for facebook.com. The Authenticated Encryption says... Yes.
3d. Your token can sign an "I'm still me" message for Facebook using the Private Key it got back with the Authenticated Encryption. You get into Facebook.
To support all this, the remote sites should keep a list of up to say half a dozen authenticators you've enrolled, and it should hand over the full list on each sign-in. "Do you have any of these?". Your browser polls each authenticator with each ID, "Hey, authenticator, is this one yours?" if any of them say Yes, we're done, proof provided. Otherwise, tell the user none of the tokens worked, do they have others?
For enrollment a similar strategy is used. The site hands over all the IDs it already has and the browser says "Hey, do you recognise these IDs?" if an authenticator recognises one of the IDs then we enrolled that one already, look for one that doesn't recognise any IDs, that one should be enrolled.
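The three scenarios above can be played out in a few lines. This toy Python sketch (stdlib only; an HMAC tag plus a throwaway XOR keystream stand in for real authenticated encryption, and every key is invented) treats the credential ID as the wrapped per-site key plus a tag binding it to one device and one domain:

```python
import hashlib
import hmac
import os

# A stateless authenticator: one device secret, no per-site storage.
# The "credential ID" handed to the site is the wrapped per-site key plus
# a tag that binds it to this device AND this rpId (the domain).

def make_credential(device_secret: bytes, rp_id: str) -> tuple[bytes, bytes]:
    nonce = os.urandom(16)
    site_key = hmac.new(device_secret, b"derive" + nonce, hashlib.sha256).digest()
    # Toy stream cipher: XOR with a keystream derived from the device secret.
    stream = hmac.new(device_secret, b"wrap" + nonce, hashlib.sha256).digest()
    wrapped = bytes(a ^ b for a, b in zip(site_key, stream))
    tag = hmac.new(device_secret, nonce + wrapped + rp_id.encode(),
                   hashlib.sha256).digest()
    # The site stores this opaque blob (plus the public key, omitted here).
    return nonce + wrapped + tag, site_key

def unwrap(device_secret: bytes, rp_id: str, credential_id: bytes):
    nonce, wrapped, tag = (credential_id[:16], credential_id[16:48],
                           credential_id[48:])
    expect = hmac.new(device_secret, nonce + wrapped + rp_id.encode(),
                      hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        return None  # wrong device or wrong domain: "not my credential"
    stream = hmac.new(device_secret, b"wrap" + nonce, hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(wrapped, stream))

blue, red = os.urandom(32), os.urandom(32)       # two authenticators
cred_id, site_key = make_credential(blue, "facebook.com")
```

Unwrapping fails for the red token on facebook.com (scenario 1) and for the blue token on fakebook.com (scenario 2), and succeeds only for the blue token on facebook.com (scenario 3) - the same tag check rejects both the wrong authenticator and the phishing domain.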
https://webauthn.guide/ has a lot of detail if you want to go into it, but what they're referencing here is the "rp" or "relying party" field that's used as part of the credential setup. This uses browser APIs to share the origin of the credentials stored with your hardware token to guarantee that your private key is only used when visiting that same service. This is as opposed to TOTP as a second factor where you're relying on the user to verify that they're not putting it into a phishing website.
Log in to domain.com: you enter your username/password, and it then demands a domain.com YubiKey authn token. The YubiKey provides one, but only if you're on domain.com.

Log in to fakedomain.com: you enter your username/password, and it then demands a domain.com YubiKey token. The YubiKey does not provide one, because you're not on domain.com. (Or it demands a fakedomain.com token, but the YubiKey doesn't have one.)

So, in that way, YubiKeys (and similar) are immune to phishing, unlike, say, Google Authenticator.
Whilst hardware keys are clearly better, I appreciate how storing TOTP keys on bitwarden also largely mitigates this kind of phishing. Bitwarden also checks the domain name, so if you don’t see the auto-login option (with or without 2FA), you should be concerned and not proceed. I know that storing the TOTP keys in Bitwarden kinda reduces the second factor to one factor. But practically I would argue that this kind of phishing is a bigger risk than someone hacking Bitwarden itself. For small orgs without security teams and resources, promoting password manager usage and including TOTP keys is a smart move.
In theory this seems like it could be enough, but in practice there will be sites where they change the DNS name and so you must override, and because that feature exists some fraction of phishing targets will override.
The crucial trick in WebAuthn is there isn't an override button. There is no "fall for the phishing scam" button, so there's no way to push it.
Is it possible to use a Dell contact smart card reader for web 2FA? Preferably in some way that it can be cloned if they lose their card.
We’ve got contact readers in all the laptops, and our door systems can read them too, but not sure how to deploy them for websites and such.
You want to implement security for even the least technical role (ie reception), but you also don’t want to give everyone a keychain full of dongles or install half a dozen computer plugins.
If you deploy and maintain your own fleet of cards then you could do this. There's an open source FIDO2 applet implementation at https://github.com/martinpaljak/FIDO2
You could also use PIV/PKCS11 client certificates if it's for internal systems you run - there is reasonably good support for using client certificates in popular browsers from a smartcard, as this is used for DOD CACs.
If you visit the site now you'll find that Google has flagged it as phishing and browsers (Chrome, Safari, Firefox) show a warning. I'm really curious how soon Google could have detected it, given that it was targeting Cloudflare employees.
It’s scary that porkbun just gave away the domain like this. What if cloudflare was actually the threat actor? Could they just steal domains like this?
They should’ve not cooperated. Instead, cloudflare should have had to contact verisign - the actual owner of .com.
They are presumably smart enough to distinguish cloudflare from an actual threat actor. Collaboration between infosec teams is very common, and people in the infosec world often have working relationships which survive over multiple periods of employment, so impersonation is less likely.
When they say "scan the web" for Cloudflare-targeting websites, does that mean they wrote a webcrawler that is constantly trawling for new Cloudflare-targeting content?