Click through to the paper. I could not be more thrilled; we had been looking for an excuse to publish Cryptopals challenges for differential attacks, and Alex had even written one, but we opted not to because they'd never been valuable to us in a real-world setting. Now we can build a small sequence of them.
I rag on people's crypto projects a lot, but it's worth knowing that almost nothing you could reasonably do with your own crypto could so thoroughly bone you. This paper has three different key recovery attacks from both controlled and partially inferred plaintexts (chosen and known plaintexts, in the jargon). Chosen plaintext attacks are common. Key recovery attacks? Less common. This is masterfully terrible cryptography.
Looking forward to digging into the attacks in actual code.
> I rag on people's crypto projects a lot, but it's worth knowing that almost nothing you could reasonably do with your own crypto could so thoroughly bone you.
Unless you ALSO wrote it in Javascript and dropped it on an HTTP-only webserver ;)
It was "masterfully terrible". Can you tell whether it was intentionally terrible or just incompetence at play? My impression is that the NSA likes their key leaking and similar approaches to be more subtle.
"The Open Smart Grid Protocol (OSGP) is a family of specifications published by the European Telecommunications Standards Institute (ETSI) used in conjunction with the ISO/IEC 14908 control networking standard for smart grid applications."
Not sure who to blame if it is deliberate, but seems unlikely to be NSA, and not just because of the unsubtlety (though NSA was also behind the quickly-spotted Dual EC DRBG).
Even back before the Crypto Wars, though, the NSA usually tried to have their cake and eat it too, aiming to limit their crypto breakages to the kinds that could only be exploited by a state-level adversary like themselves. Leaving 3 different key recoveries available is 2 more than the NSA would have needed, and the additional weaknesses simply make it more likely that a first-year CS student would eventually figure out the problems.
Most likely it was created by the biggest SCADA vendors (which will remain unnamed), who know as much about computer security, best practices and modern developments as I know about 17th century Japanese history...
Doubt it's the NSA in the sense that they suggested weak crypto, but I do believe the NSA is to blame. They are the ones beating their chests about how we need MORE CYBERSECURITY to protect our "critical infrastructure from cyberwar and cyberterrorists". And then they proceed to:
1) actually do nothing to further security for critical infrastructure
2) push for more surveillance laws in disguise as "cybersecurity laws"
I mean, for crying out loud, whitehouse.gov didn't even use HTTPS until people on Twitter started a public shaming campaign against them. THAT'S how pathetic the real "cybersecurity" is in the US. And they have absolutely no real plan to change that right now. Who's going to come up with one anyway? This guy?
There are only 3 pages on security (plus annexes). There is the broken MAC, but the cipher standardized is actually RC4... without even the constructions that make it somewhat less insecure, like discarding the first 3072 bytes of keystream output.
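For reference, the "drop-n" construction mentioned above is tiny. Here is a toy Python sketch (illustrative only; the drop value of 3072 is one commonly cited choice, and RC4 remains broken regardless of how much keystream you discard):

```python
# Toy RC4 with the "drop-n" mitigation: discard the first 3072 bytes of
# keystream, which are the most heavily biased. Illustrative only -- RC4
# should not be used in new designs at all.

def rc4_keystream(key: bytes, drop: int = 3072):
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    produced = 0
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        produced += 1
        if produced > drop:          # skip the biased keystream prefix
            yield S[(S[i] + S[j]) % 256]

def rc4_drop_encrypt(key: bytes, data: bytes, drop: int = 3072) -> bytes:
    # XOR with keystream; encryption and decryption are the same operation.
    ks = rc4_keystream(key, drop)
    return bytes(b ^ next(ks) for b in data)
```

With `drop=0` this reproduces plain RC4 (e.g. the classic "Key"/"Plaintext" test vector), which is exactly the mode OSGP standardized.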
Also, that little gem: an "invalid digest" response is never encrypted, which is likely a textbook oracle.
All of this is completely inexcusable for a specification published in 2012; a lot of it could have been stopped just by reading a few Wikipedia entries.
I guess a lot of the devices are going to landfills.
RC4 is meaningfully, materially insecure virtually no matter what you do with it. I don't think you're disagreeing, but the nerdal lobe in my brain won't let me leave the discussion without it ending on that note. :)
I know that it is considered suicidally hubristic to use homegrown cryptography; does that apply to writing your own implementation of a trusted protocol?
It seems like it'd be a cool project, but I don't want my github profile to advertise that I'm the kind of fool that rolls his own crypto.
Writing your own crypto implementation is just like every other instance of reinventing the wheel, but more so. Namely, don't be a dilettante.
It's actually perfectly OK to reinvent wheels. Reinventing wheels is how people learn to build wheels and eventually invent new, never before seen wheels. Where we as programmers get in trouble is that we frequently reinvent a wheel once, and then drive around on it for the rest of our lives.
If you find cryptography interesting, try your hand at it. But don't just code up a first implementation, slap it on a web app, and use it to protect your customers' PII. Don't be a dilettante. Write lots of crypto implementations, and try to find the flaws in them. Read lots of books, read lots of other people's implementations. Whenever a new exploit of one comes out, try to understand it and try to find similar problems in your own code or other implementations (or figure out why a particular implementation doesn't have that flaw). Write more implementations, read more books, talk to other cryptographers.
It's not a crime to be interested in difficult things, but it is important to recognize that difficult things take a certain level of skill and devotion. Each of us has to decide which difficult things we want to devote our time to and which we want to casually watch from the sidelines.
> does that apply to writing your own implementation of a trusted protocol?
As long as your README says that you're playing around with crypto and it's not production crypto, go wild -- nobody will care. Anyone who does isn't worth listening to; playing around never hurt anyone, as long as at the end of the day you use real implementations by seasoned crypto developers.
At the simple side of things you can write your own implementation and use known test vectors to verify that your implementation is acting the same as other implementations. That should cover interoperability.
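A minimal harness for the test-vector approach might look like this in Python. Here the "implementation under test" is just hashlib for brevity, but the same harness works for a home-grown one; the vectors are the standard published SHA-256 ones:

```python
# Verify an implementation against published test vectors, as suggested
# above. Swap the lambda at the bottom for your own implementation.
import hashlib

# Two of the standard SHA-256 test vectors (FIPS 180-2).
VECTORS = [
    (b"abc",
     "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"),
    (b"",
     "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"),
]

def check(impl):
    # impl: a callable taking a message and returning the raw digest bytes.
    for msg, expected in VECTORS:
        got = impl(msg).hex()
        assert got == expected, f"mismatch on {msg!r}: {got}"

check(lambda m: hashlib.sha256(m).digest())
```

Passing the vectors shows interoperability; as the rest of this subthread notes, it says nothing about side channels.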
Some of the issues in cryptography implementations are subtle, like timing concerns: if you have a timing optimization, it can be exploited to leak information about the data or the key. This is one trap a less experienced developer may fall into.
There are other side channel attacks as well that one needs to be cautious about. Optimizations for power usage may also leak information.
Part of writing crypto is optimizing for security at the expense of time and power.
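The timing concern above in miniature: a naive byte-by-byte comparison of a MAC returns early on the first mismatch, so response time leaks how many leading bytes of a forged tag are correct. The standard fix is a constant-time comparison:

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # The "time optimization" trap: early exit means running time
    # depends on how much of the attacker's guess is correct.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Examines every byte regardless of where the first mismatch is.
    return hmac.compare_digest(a, b)
```

Both return the same answers; the difference is only visible to someone with a stopwatch, which is exactly why it's easy to miss.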
In cryptography, every layer of the stack has a set of possible vulnerabilities associated with it. In the rarest case, primitives (like ciphers and hash functions) are broken; more commonly, the protocol is ill-designed and flawed; but most common of all, the actual implementation itself has security flaws, like side channel attacks.
The issues associated with every layer are considered extremely subtle and tricky to both identify and fix. But I would say this is especially true for implementation attacks, which are not really addressed by cryptographic theory.
So, no, writing your own protocol implementation is not secure, even if you trust the design of the protocol. You are still vulnerable to the trickiest class of security flaws. However, so long as you clearly label your project as "learning only" or "insecure," no one will think worse of you for having your own protocol implementation. In fact, I'd say re-implementing TLS is one of the few ways to become intimately familiar with its internals.
If you feel the need to experiment, that's great - it's a fun exercise. Just don't use it in production, and call your repo something like "brokencrypto" so nobody else tries to use it.
The people that know what they are doing when it comes to cryptography need to be employed as full-time cryptographers to stay with the state of the art.
So the rule should be, "if the person inventing your crypto does anything else for a living, it's going to be full of holes".
And turn off power over a wide area and then start a couple of jammers. That could be very, very disruptive, but can this attack let you send commands to the meter? Or is it limited to just spoofing the meter itself?
If so I really hope that this can be fixed in the field with a firmware update.
Do we know that the alternative protocols are better? Are we finding so many problems because the protocol was badly designed, or are we finding them because an open protocol is much easier to analyze?
(I suspect it is both badly designed and easy to analyze, but I do wonder about the state of the alternative protocols)
Well, I work for a smart grid vendor (SSNI) and I know for a fact that our protocols are better. We actually use strong primitives, for starters.
I can't understand why they rolled their own MAC, when robust and tiny implementations are rather plentiful. That just seems like a foolish waste of time, frankly. An ATMega can do HMAC-SHA256 for crying out loud. There is no excuse.
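For scale, here is HMAC-SHA256 written out from its RFC 2104 definition in Python, cross-checked against the standard library (illustrative; in practice you'd use a vetted library, and an AVR build would reuse whatever SHA-256 core it already has):

```python
# HMAC-SHA256 from the RFC 2104 definition, to underline how little code a
# standard MAC takes compared with inventing a digest from scratch.
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    block = 64                                   # SHA-256 block size in bytes
    if len(key) > block:
        key = hashlib.sha256(key).digest()       # hash down over-long keys
    key = key.ljust(block, b"\x00")              # pad key to the block size
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5c for b in key)
    inner = hashlib.sha256(ipad + msg).digest()
    return hashlib.sha256(opad + inner).digest()

# Cross-check against the standard library implementation.
assert hmac_sha256(b"k", b"msg") == hmac.new(b"k", b"msg", hashlib.sha256).digest()
```

That's the whole construction: two hash invocations and some XORs.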
There are AVRs that can reasonably "do" SHA2, but many have tight limits on code (text segment) size, which motivates vendors to minimize the number of primitives they use (for instance, it might lead someone to use CBC-MAC instead of HMAC).
Also: however capable your hardware is, if you're doing long-range RF (and not GSM), you've got a much more significant space and round-trip constraint to deal with.
In any case, just making a descriptive comment, not a normative one. Broken crypto is broken crypto, I agree. I just have a little bit of exposure to this particular design challenge, and those are two reasons I see people not using "best practices" simple cryptography in these kinds of applications.
There are several layers, for starters, each with its own protected session keys. At each layer, and after derivation (this paper was specifically about recovery of the MAC key, so I'll not mention anything else), there are two symmetric keys, one for encryption (AES256, in a number of modes) and one for authentication (HMAC-SHA256). Session keys are comparatively short lived, with a life span normally measured in weeks.
The OSGP analysis has attacks on key generation, known-plaintext in padding, and an attack on the OMA digest itself. These don't apply to our protocols because (a) we use a much stronger and standard key generation sequence and (b) HMAC-SHA256 isn't vulnerable to padding attacks (directly) or the reversibility attack.
All that said, I will never claim that we have a perfect implementation, but our devices are field upgradable, they do get upgraded, and we have quite a bit of attention on attacking our devices already. Our customers have been pen testing us for years.
Not the OP, but also can speak for the perspective of some vendors other than Silver Springs (I'm unfamiliar with Silver Springs protocol):
1. Running on an RTOS platform, not Linux, so no VPN builds cleanly
2. Running on constrained platforms for which code space is not available for all the crap that comes with a VPN codebase
3. Operating a very constrained RF protocol in which every bit in every transmitted frame has to be accounted for, so not only is there no room to run something like L2TP/IPSEC, but what crypto can be run is also compromised (this is what makes me worry about "we use strong primitives unlike OSGP").
4. Operating simple protocols with simple service models in which lots of round trips for session establishment are unworkable.
5. Can potentially get AES working on the platform, but less reasonable to use anything that requires bignum math, which makes Diffie-Hellman a problem; instead just using static keys, or some simple key rotation schedule.
This is true. Also, as someone who also works on network crypto boxes: the main VPN protocols are _really_ nasty.
- OpenVPN requires TLS (you really don't want to use static keying in OpenVPN), which is a total nonstarter: the TLS protocol itself has non-stop problems, is completely infeasible to implement from scratch, and does not have any implementations useful for a high-security embedded device. Also, the data channel protocol is ugly.
- DTLS-based VPNs are, well, DTLS-based (worse protocol than TLS, and still infeasible to implement).
- IPsec is comparatively sane if you limit yourself to statically-keyed tunnel-mode ESP with no internal NAT'ing (NAT-T is ok, though). IKE is a pain (and usually requires certificates, which are very hard to parse correctly), and supporting all the zillions of options is extremely expensive. On the other hand, static keying has many pitfalls and pretty much requires a mechanism to send updated keys. Also, IPsec overhead usually ends up being rather high in practice, and e.g. IPsec/L2TP is even worse.
- In practice, anything that requires connection negotiation requires some surgery to support high-packet-loss scenarios. (Loss-tolerant protocols over statically keyed IPsec work decently well at packet loss rates that break most IKE implementations.)
Also, for any given device, you'd usually prefer to support as few algorithms as possible and/or tailor choose crypto primitives that are well-supported on your hardware.
Finally, building a secure VPN wire protocol just isn't that hard. (Key negotiation etc. is very tricky, though.)
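To make the "wire protocol" half of that claim concrete, here is a toy encrypt-then-MAC datagram format with an explicit sequence number for replay rejection. It is purely illustrative: the keystream is HMAC-SHA256 run in counter mode only so the example is standard-library-only (a real design would use a vetted AEAD such as AES-GCM or ChaCha20-Poly1305), and the key-negotiation half -- the part flagged above as tricky -- is deliberately absent:

```python
# Toy datagram format: 8-byte sequence number || ciphertext || 32-byte tag.
# Encrypt-then-MAC with separate keys; receiver rejects bad tags and replays.
import hashlib
import hmac
import struct

def _keystream(key: bytes, seq: int, n: int) -> bytes:
    # PRF in counter mode, keyed per-datagram by the sequence number.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hmac.new(key, struct.pack(">QQ", seq, ctr), hashlib.sha256).digest()
        ctr += 1
    return out[:n]

def seal(enc_key: bytes, mac_key: bytes, seq: int, plaintext: bytes) -> bytes:
    ks = _keystream(enc_key, seq, len(plaintext))
    ct = bytes(a ^ b for a, b in zip(plaintext, ks))
    header = struct.pack(">Q", seq)
    tag = hmac.new(mac_key, header + ct, hashlib.sha256).digest()
    return header + ct + tag

def open_(enc_key: bytes, mac_key: bytes, last_seq: int, frame: bytes) -> bytes:
    header, ct, tag = frame[:8], frame[8:-32], frame[-32:]
    # Verify the MAC (constant-time) before touching the ciphertext.
    if not hmac.compare_digest(tag, hmac.new(mac_key, header + ct, hashlib.sha256).digest()):
        raise ValueError("bad tag")
    (seq,) = struct.unpack(">Q", header)
    if seq <= last_seq:
        raise ValueError("replay")
    ks = _keystream(enc_key, seq, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))
```

The framing really is that short; everything hard (key agreement, rekeying, loss-tolerant negotiation) lives outside it, which is the parent's point.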
* We really care about on-the-"wire" efficiency. So we like having control over that.
* We actually looked very hard at DTLS, and do support it, but it is still optimized for browsers and not devices and as such carries some thought-baggage that is tricky to work around. So DTLS doesn't carry the bulk of our traffic.
* IPSec turned out to be impossible to support at the back end. Equipment that supports IPSec tunnel termination often is sold in terms of tens of thousands of tunnel terminations per box. We need tens of millions. That scale does not work for us. We do use it in some specific circumstances though.
* Anything that requires TCP is a non-starter as well. Not at all for the same reasons as the QUIC people. The NoTCP joke was really funny to me.
>TLS (you really don't want to use static keying in OpenVPN), which is a total nonstarter: the TLS protocol itself has non-stop problems
I beg to differ. What we've seen is non-stop problems from implementation mistakes when it comes to all of the extra extensions. Maybe this indicates there would be a market for an opinionated embedded TLS library...
>On the other hand, static keying has many pitfalls and pretty much requires a mechanism to send updated keys.
Static keying issues aren't unique to IPsec, they occur with every symmetric cipher. There is nothing stopping you from having a management tool that rotates the static key based on a master key and the day. Or if you want to send updated keys, you just encrypt them and send them over the IPsec connection using GPG or whatever your asymmetric choice is.
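The "rotate the static key based on a master key and the day" idea above can be sketched with HMAC-SHA256 as the derivation function (a simple stand-in for a real KDF such as HKDF; the `"ipsec-esp-v1"` label is a made-up example): both ends compute the same per-day key with zero negotiation traffic.

```python
# Derive a per-day traffic key from a long-lived master key.
# Both peers run the same derivation for the same date.
import datetime
import hashlib
import hmac

def daily_key(master: bytes, day: datetime.date) -> bytes:
    # Bind the derived key to a purpose label and the date.
    label = b"ipsec-esp-v1|" + day.isoformat().encode()
    return hmac.new(master, label, hashlib.sha256).digest()

# Two peers with the same master key and date derive identical keys:
k1 = daily_key(b"\x00" * 32, datetime.date(2015, 5, 8))
k2 = daily_key(b"\x00" * 32, datetime.date(2015, 5, 8))
assert k1 == k2 and len(k1) == 32
```

Note this only limits exposure per day; compromise of the master key still compromises every derived key, which is why the parent calls static keying full of pitfalls.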
>Finally, building a secure VPN wire protocol just isn't that hard. (Key negotiation etc. is very tricky, though.)
I'm sorry for saying this, but that's what every single person who has implemented a broken protocol thinks. It's true that it's not hard to get one that encrypts and decrypts. The trick is getting one that doesn't leak any information. There is basically no way to know that without extensive analysis by the entire crypto community.
Look at how long TLS has existed and they are still just finding issues now. RC4 is another example of one that looked pretty good for quite a while and then fell under scrutiny. Do you really think you are smart enough to know everything the entire crypto community knows?
It's really not true that most of TLS's problems are implementation-based. Problems that had nothing to do with implementation:
* CRIME, which conceptually breaks the way TLS wants to handle compression
* POODLE, which is a padding oracle attack against the way SSL3/TLS does CBC
* BEAST, which exploits IV chaining in the TLS CBC construction
* The RC4 flaws break one of the most popular TLS ciphers, which was ill-advisedly included in SSL3/TLS
I could probably go on. For instance, the JSSE Bleichenbacher flaw looks like an implementation issue today, but it's an implementation that fails to perfectly implement a clumsy countermeasure TLS builds in against Bleichenbacher's PKCS#1 v1.5 attack, which TLS was originally totally susceptible to.
These aren't "implementation extensions"; this is core functionality.
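The compression item (CRIME) fits on one screen with nothing but zlib. When attacker-controlled input is compressed in the same stream as a secret, the compressed length leaks whether the guess matches the secret (the `sessionid=` cookie here is a made-up example):

```python
# CRIME in miniature: compressed length as an oracle on a co-compressed secret.
import zlib

SECRET = b"sessionid=7f3a9"   # hypothetical cookie the attacker wants

def observed_length(attacker_input: bytes) -> int:
    # Model of TLS-level compression: the attacker's request body is
    # compressed together with the secret header before encryption.
    # Encryption hides content but not (compressed) length.
    return len(zlib.compress(attacker_input + SECRET))

# A correct guess compresses better (DEFLATE emits a backreference to the
# repeated secret) than a wrong one, so length distinguishes them:
right = observed_length(b"sessionid=7f3a9")
wrong = observed_length(b"sessionid=ABCDE")
```

In the real attack the guess is extended one byte at a time, but the oracle is exactly this length comparison, which is why the only real fix was to turn TLS compression off.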
Heartbleed was an implementation flaw. TLS's security record is much worse than just the implementation flaws. You're actually unlikely to experience lots of Heartbleed-like flaws implementing TLS in a safe language, and just as likely to have them implementing something different in C.
Yes, now they will charge by the watt and time of day to reduce peak load. Although, I actually think the main idea is that it will make them aware of outages in real time so they can recover faster, which is one of the factors used to judge publicly traded utilities (also known as CAIDI, customer average interruption duration index).
They want electricity usage to be even, so they don't have to build out capacity for peak usage which is wasted most of the time. So a smart grid would be good for them.