Quantum computers don't break SHA256, nor would this attack be "reasonably attributable" to a SHA256 break.
In fact, if your funds sit in a wallet address that has only ever received (never spent), it's still quite difficult for a CRQC to steal them. The catch is that the moment you spend from an address, your public key is revealed on-chain (and therefore becomes breakable).
(Yes, I'm aware of the literature on quantum search vs hash functions, but it's not a complete break like RSA or ECC.)
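To make the spent/unspent distinction concrete, here's a minimal stdlib-only sketch. It is not real Bitcoin address encoding (which uses SHA-256 then RIPEMD-160 plus Base58Check); plain SHA-256 stands in for the hash, and the key bytes are hypothetical. The point is only that an unspent pay-to-pubkey-hash address publishes a hash, not the ECC key itself:

```python
import hashlib

def address_from_pubkey(pubkey: bytes) -> str:
    # Simplified stand-in for Bitcoin's HASH160 (SHA-256 then RIPEMD-160):
    # the address commits only to a hash of the public key.
    return hashlib.sha256(pubkey).hexdigest()

# Hypothetical compressed secp256k1 public key (33 bytes).
pubkey = bytes.fromhex("02" + "11" * 32)
addr = address_from_pubkey(pubkey)

# Receiving funds publishes only `addr` on-chain. Spending publishes
# `pubkey` in the unlocking script -- only then does Shor's algorithm
# have an ECC public key to attack. Before that, an attacker would have
# to invert the hash itself, which quantum search only speeds up
# quadratically, not exponentially.
```
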
Getting a crypto module validated by FIPS 140-3 simply lets you sell to the US Government (something something FedRAMP). It doesn't give you better assurance in the actual security of your designs or implementations, just verifies that you're using algorithms the US government has blessed for use in validated modules, in a way that an independent lab has said "LGTM".
You generally want to layer your compliance (FIPS, etc.) with actual assurance practices.
And the people who repeat such statements uncritically to their reports will also get mildly annoyed when they have no Earthly clue what that actually means.
True, but DNSSEC doesn't need to worry about forward secrecy and it doesn't need quantum protection until someone can start breaking keys in under a year. Hopefully we will find more efficient PQC by then.
> People stopped caring about ultra-low latency first-connect times back in the 90s.
They did? That's certainly going to be news to the people at Google, Mozilla, Cloudflare, etc. who put enormous amounts of effort into building 0-RTT into TLS 1.3 and QUIC.
I did a large data analysis of DNS caching times across the web. Hyperscalers are the only ones who care and they fix that with insanely long DNS caching.
I'm not trying to just nitpick you here, but the message I was responding to said "People stopped caring about ultra-low latency first-connect times back in the 90s."
It seems to me that you're saying here that (1) the hyperscalers do care but (2) it's under control. I'm not necessarily arguing with (2) but as far as the hyperscalers go: (1) they drive a lot of traffic on their own (2) in many cases they care so their users don't have to.
Sorry, the point I was trying to make is that this isn't a problem operationally.
Hyperscalers go to crazy lengths because they can measure monetary losses from milliseconds of lost view time, and it's much easier for them since they have distributed cloud infrastructure anyway. But it's not really solving a problem for their customers. At least when I worked in DNS land, latency micro-benchmarking was something of a joke. Sure, you can shave off a few tens of milliseconds, but it's super expensive. If you want to reduce latency, just raise your TTLs and/or enable pre-fetching.
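For instance, raising TTLs is just a zone-file change (the names, values, and address here are hypothetical):

```
; Hypothetical zone fragment: a 24-hour TTL (86400 s) lets resolvers
; serve these records from cache for a day, taking the authoritative
; lookup off the critical path for most visitors.
example.com.        86400   IN  A       203.0.113.10
www.example.com.    86400   IN  CNAME   example.com.
```

The trade-off is that changes to a record take up to a full TTL to propagate, which is why operators who expect churn keep TTLs short.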
As a blocker for DNSSEC ... people made arguments about HTTPS overhead back in the day too. DoH also introduces latency, yet people aren't worried about that being a deal killer.
> As a blocker for DNSSEC ... people made arguments about HTTPS overhead back in the day too.
They did, and then we spent an enormous amount of time to shave off a few round trip times in TLS 1.3 and QUIC. So I'm not sure this is as strong an argument as you seem to think it is.
> DoH also introduces latency, yet people aren't worried about that being a deal killer.
That took engineering effort too! Elliptic-curve signatures (ECDSA/Ed25519) address the theoretical latency and response-size concerns for DNSSEC anyway, yet we still have people arguing it shouldn't be done. If it was worth making HTTPS faster in order to secure HTTP, why not do the same for DNS?
You're not going to find this answer satisfying, I suspect, but there are two main reasons browsers and big sites (that's what we're talking about) didn't bother to try to make DNSSEC faster:
1. They didn't think that DNSSEC did much in terms of security. I recognize you don't agree with this, but I'm just telling you what the thinking was.
2. Because there is substantial deployment of middleboxes which break DNSSEC, DNSSEC hard-fail by default is infeasible.
As a consequence, the easiest thing to do was just ignore DNSSEC.
You'll notice that they did think that encrypting DNS requests was important, as was protecting them from the local network, and so they put effort into DoH, which also had the benefit of being something you could do quickly and unilaterally.
I'm not unaware of this and I agree that WebPKI has greatly reduced global risk. New DNS tech takes a lot longer to implement but that doesn't mean we should kill DNSSEC support like the trolls insist upon!
Why would Let's Encrypt not also be interested in safeguarding DNS, SSH, BGP, and all the others? Those middle boxes will have to get replaced someday and we could push for regulation requiring that their replacements support DNSSEC. These long-term societal investments are worth making and it would enable decentralized DNS.
I'm also concerned that none of this will happen if haters won't stop screaming, "DNSSEC doesn't do anything but ackchyually harms security!".
(@tptacek: please stay out of this comment thread)
HTTPS solved a bunch of real world threat models that were causing massive security issues. So we collectively put a bunch of engineering time into making it performant so that we could deploy it everywhere with minimal impact on UX and performance.
Somehow they cause these massive security issues without impacting the 95%+ of sites that haven't used the protocol since it became viable to adopt a decade and a half ago.
It's just a very difficult statistic to get around! Whenever you make a claim like this, you're going to have to address the fact that basically ~every high-security organization on the Internet has chosen not to adopt the protocol, and there are basically zero stories about how this has bitten any of them.
I run a bunch of websites personally. I have ACME-issued TLS certificates from LetsEncrypt. I monitor the Certificate Transparency logs, and have CAA records set.
What's the threat model that should worry me, where DNSSEC is the right improvement?
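(For anyone unfamiliar, the CAA setup described above is a couple of one-line zone entries; the domain and contact address here are illustrative:)

```
; Only Let's Encrypt may issue certificates for this domain;
; the iodef record tells CAs where to report violation attempts.
example.com.  3600  IN  CAA  0 issue "letsencrypt.org"
example.com.  3600  IN  CAA  0 iodef "mailto:security@example.com"
```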
Probably a one-off? Instagram's e2ee was opt-in from the start, and meanwhile Facebook Messenger is now "e2ee for everyone", and none of this is affecting the main e2ee messaging apps people use: WhatsApp, Signal, and iMessage.
But, yeah, anti-fingerprinting is still a useful signal if less people do it. So more people should do it; especially if they're less likely to be targeted.
I feel like this is the same as voting independent: it's the right idea in theory, but given that 99% of people don't do it, the effect is diminished. In this case quite literally, since having a unique fingerprint runs entirely counter to the idea of privacy.
I really want to be in a world where that's true. In the meantime we live in a zero-sum, survival-of-the-fittest game where the powerful execute the weak for insubordination. In this world it is often necessary to take roundabout paths to reach the objective.
For example, a constitutional representative in my country attempted to place restrictions on unfettered gambling advertisements. A single day later, photos emerged of that politician having dressed as a Nazi for a costume party in his youth. That politician stood up for what was right and got punished for it, losing his job and his standing in the court of public opinion, effectively achieving no change.
Effecting change isn't always as simple a process as embodying the end result.
Maybe I'm only noticing the times when it messes things up, but it kinda seems like these auto-edits cause a lot of confusion that could be avoided if they were shown up-front to submitters, who would then have the option to undo them.
Or maybe judicious use of an LLM here could be helpful. Replace the auto-edits with a prompt? Ask an LLM to judge whether the auto-edited title still retains its original meaning? Run the old and new titles through an embedding model and make sure they still point in roughly the same direction?
Oh interesting, TIL I can go edit my submission titles! That's useful; I've definitely submitted stuff and ended up with a worse title due to the automated fixes, so I'll have to pay attention to this next time.