The followup tweet indicates that the CPU has to be in an unlocked state before this is possible, which on a typical system requires there to be a Management Engine vulnerability first. Given what we currently know, this is going to be interesting for people interested in researching the behaviour and security of Intel CPUs and might well lead to discovery of security issues in future, but in itself I don't think this is dangerous.
Edit to add: https://www.intel.com/content/dam/www/public/us/en/security-... is a discussion of a previous Intel security issue which includes a description (on page 6) of the different unlock levels. This apparently requires that the CPU be in the Red unlock state, which (in the absence of ME vulnerabilities) should only be accessible by Intel.
Sure, which is why this is useful to researchers. But the access someone needs to your system in order to exploit the ME vulnerabilities is sufficiently extreme that if someone achieves it you probably have other things to worry about.
I can't get behind justifying one serious security problem with another. Attackers increasingly chain local privilege escalations to move laterally, but only need one RCE to get in.
Windows can be booted into test-signing mode, which allows the loading of unsigned drivers. We don't consider that a security issue because the privileges you need in order to switch to that mode are equivalent to the privileges you get by switching to that mode. It's the same here - the ME is the root of trust on Intel platforms. If you're in a position to execute arbitrary code on the ME then you've already got the ability to compromise the rest of the system enough to run arbitrary code on the host CPU, and being able to modify microarchitectural state doesn't give you additional privileges.
I’m not saying that’s the case here, but that’s the general problem with the line of reasoning that “hey if you already have permission X then doing Y is the least of your concerns”.
The concept of defense in depth literally relies on each barrier being independent and robust. That’s why you see hardening of Linux’s hibernate even though the common refrain is “well if you have physical access the game is lost”. There are things that even root can’t do even though “hey if you have root the game is lost”. The point of the game is to never lose even in very adverse environments.
The assumption on Intel is that there are no barriers once you're in the ME. You can't defend against a hostile ME. The security model is already violated. Maybe there should be a barrier between the ME and the CPU, but as can be seen here Intel feel that the ME should be in a position to put the CPU in debug mode so shrug.
- Police confiscate your laptop on some bogus pretext, then return it to you saying you're free to go.
- You open the laptop and find nothing that shouldn't be there. You wipe it, reinstall the OS and continue using the laptop.
- Surprise! The CPU now works for the police, so after some time it installs a rootkit or whatever.
Dunno if the microcode is big enough for this kind of attack, and perhaps some other firmware would be easier to reprogram.
But if someone waves this off saying that's not how the police work in the US, well, the world is larger than the US, and all of this definitely happens in other countries, only without CPU rootkits so far.
Situation without this CPU feature: Cops compromise the ME, disable Boot Guard, compromise your firmware, backdoor your OS directly
Situation with this CPU feature: Cops compromise the ME, disable Boot Guard, compromise your firmware, backdoor your CPU so it can later backdoor your OS
There's not really a meaningful difference between these! If there's an exploitable ME vulnerability then the police can absolutely own your system in an undetectable way regardless of whether or not this feature exists. If we were in a different universe where the ME controlled whether or not the CPU was in debug mode but wasn't responsible for any other security features, then we'd care about this a great deal more; but as long as compromising the ME already gives you a way to permanently backdoor the system, it doesn't make any real difference.
There are more productive ways to think about probabilities in security. A low probability may imply low risk, but it doesn't guarantee a low priority to fix.
No. me_cleaner reduces the amount of code running on your ME, and as such reduces the attack surface presented by the ME. But anyone with physical access (which is required for the interestingly exploitable ME vulnerabilities) is in a position to just put whatever ME firmware they want on your system.
This seems like yet another thing on the list of “x86 hardware issues that sound worse than they are”.
I’m interested to see what people are able to reverse engineer with these sorts of tools. It wasn’t that long ago that ucode wasn’t even encrypted with integrity. I don’t think AMD started doing that until around 2010.
I’m also curious which hardware versions this works on, since it’s not obvious it’s universal. I’ll be amused if it’s some forlorn low power chip from 10 years ago.
>It wasn’t that long ago that ucode wasn’t even encrypted with integrity
Whether they're encrypted or not doesn't really matter; what actually matters is whether they're signed or not. There was a talk given in 2017 about trying to modify the microcode in AMD processors, but they were using processors from a decade earlier (the AMD K10, introduced in 2007). That makes me think that processors made in the past decade are probably using signed microcode.
Yeah, although I didn't find the original paper. Reading into it more, they mention when AMD and Intel started signing their microcode.
>Note that Intel started to cryptographically sign microcode updates in 1995 [15] and AMD started to deploy strong cryptographic protection in 2011 [15].
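To make the signed-vs-encrypted distinction concrete, here's a minimal C sketch of what a loader-side check conceptually does. The struct layout and verify_vendor_signature() are hypothetical stand-ins for illustration, not Intel's or AMD's actual update format:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical layout: payload plus detached signature.
       Not Intel's or AMD's real update format. */
    struct ucode_update {
        const uint8_t *payload;
        size_t payload_len;
        const uint8_t *signature;
        size_t signature_len;
    };

    /* Stub for illustration; a real loader would verify e.g. an RSA
       signature over a hash of the payload against a vendor public
       key burned into the CPU. */
    static bool verify_vendor_signature(const struct ucode_update *u)
    {
        (void)u;
        return false; /* this sketch rejects everything */
    }

    static bool apply_update(const struct ucode_update *u)
    {
        /* Signing is what blocks forgeries: only the vendor's private
           key can produce an acceptable signature. Encryption alone
           just hides the payload; anyone who extracts the symmetric
           key can author "valid" updates if no signature is checked. */
        if (!verify_vendor_signature(u))
            return false;
        /* ...decrypt (if encrypted) and load into patch RAM... */
        return true;
    }

    int main(void)
    {
        struct ucode_update u = { 0 };
        return apply_update(&u) ? 0 : 1;
    }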
This would still break SGX/remote attestation, no? The chip can correctly say it's running some piece of assembly but if "ret" has been redefined to do whatever I want...
So you mean, if I'm a state actor able to kidnap the child of a high-level Intel employee... say I'm Joe Biden, I can ask Intel to... remotely unlock my CPU and read arbitrary memory blocks?
Or do you mean Intel would have to physically handle your CPU with a debug cable or whatever?
'Cause I really don't feel it's okay that the only safety we have from a newly discovered exploit is that there needs to be another newly discovered exploit :D
It is public knowledge that US intelligence agencies actually just intercept computers and equipment on their way to the customer and install hardware backdoors there (Snowden et al., 2014).
It is also known that they have had backdoors in commercial systems as they came off the shelf, but I think those were usually CIA-owned and -controlled companies, like the Crypto AG phones.
What is unknown (pure speculation) is whether, for example, Intel CPUs come backdoored straight from the factory floor. On the one hand, that would be a powerful capability to have, but on the other hand, the risk of exposure and subsequent damage to the US economy, prestige, etc. would be non-zero. So it's hard (for a plebeian like me, anyway) to estimate how those costs/benefits might be weighed up by the US government.
>What is unknown (pure speculation) is whether, for example, Intel CPUs come backdoored straight from the factory floor.
There is also a third possibility: that some intelligence agency invested a ton of cash into finding abusable exploits in these systems, giving them the same access a backdoor would provide.
Also from the Snowden leaks, we know that they have programs with budgets in the millions for finding similar exploits, and that there were more such programs outside Snowden's clearance. And though a bug may cause the same damage to the economy, it wouldn't hurt US prestige in the same way.
If I'm the CIA, I have dozens of highly placed agents, or at least informants, at Intel. Not necessarily placing backdoors, but finding exploits, not fixing them, and sending them back to the CIA for later use. It would be extremely cheap; hell, if I'm China, Russia, the UK, or Israel, I'm doing the same thing.
> On the one hand, that would be a powerful capability to have, but on the other hand, the risk of exposure and subsequent damage to the US economy, prestige, etc. would be non-zero
If I were a three-letter agency, I'd bribe/blackmail somebody into inserting intentionally vulnerable code. After all, sufficiently advanced malice is indistinguishable from incompetence.
We've often seen that the code inside firmware, secure environments like TrustZone, etc. tends to lack many of the mitigations for classic vulnerabilities. Just rewrite one of the ASN.1 parsers in the ME (I'm sure there's at least one), "forget" a bounds check in some particularly obscure bit, and you'd have a textbook stack smash.
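For illustration, the textbook pattern looks something like this hypothetical TLV-style parser in C, which trusts an attacker-supplied length field; the record layout is made up, not the ME's actual ASN.1 code:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical TLV record: [tag:1][len:1][value:len],
       attacker-controlled input. */
    static void parse_record(const uint8_t *buf, size_t buf_len)
    {
        uint8_t value[32];

        if (buf_len < 2)
            return;

        uint8_t len = buf[1];

        /* The "forgotten" check: nothing ensures len fits in value[],
           so a record claiming len > 32 smashes the stack. The fix is
           one line:
           if (len > sizeof value || (size_t)len + 2 > buf_len) return; */
        memcpy(value, buf + 2, len); /* classic stack buffer overflow */

        (void)value; /* ...process value... */
    }

    int main(void)
    {
        /* Deliberately triggers the overflow (claims len = 255);
           with -fstack-protector this aborts at runtime. */
        uint8_t evil[2 + 255] = { 0x30, 0xff };
        parse_record(evil, sizeof evil);
        return 0;
    }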
How would an OS seed an RNG in the cloud? How would you seed an RNG on a headless server in a VM? What about when that VM is copied, possibly while running, in order to duplicate server functionality? There are vulnerabilities and threats here that your comment does not take into account.
But really, it's not about operating systems not using RDRAND at all - it's fine to use it as one of the entropy sources; what you don't want to do is use RDRAND directly instead of a CSPRNG.
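A minimal Linux-flavored C sketch of that principle, assuming glibc's getrandom() and the _rdrand64_step() intrinsic (compile with -mrdrnd); a real kernel hashes all sources into CSPRNG state rather than doing a bare XOR:

    #include <immintrin.h>  /* _rdrand64_step */
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/random.h> /* getrandom(2), Linux-specific */

    int main(void)
    {
        uint64_t seed = 0;

        /* Source 1: the kernel entropy pool (interrupt timings etc.). */
        if (getrandom(&seed, sizeof seed, 0) != (ssize_t)sizeof seed)
            return 1;

        /* Source 2: RDRAND, mixed in rather than trusted outright.
           Even if RDRAND were backdoored, XORing it into independently
           gathered entropy can't make the seed worse. */
        unsigned long long hw = 0;
        if (_rdrand64_step(&hw)) /* returns 1 on success */
            seed ^= hw;

        printf("seed: %016llx\n", (unsigned long long)seed);
        return 0;
    }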
Remotely? I think Intel would need to produce a backdoored ME firmware, get the system vendor to incorporate that into a system update and then convince the target to flash that. In that sense I don't know that they'd technically need physical access, but it doesn't really meet most people's description of a remote attack.