Hacker News | new | past | comments | ask | show | jobs | submit | da-bacon's comments | login

Worth reading the comments over on SciRate https://scirate.com/arxiv/2603.28627 for how to interpret some of the claims.

Thanks! This summarizes it:

> Overall, the work lacks a self-consistent and transparent accounting of resources, making its central claims difficult to substantiate and leaving a strong sense of sensationalism and hype, rather than honest scientific exposition.

"Clowns to the Left of Me, Jokers to the Right"


You are being disingenuous with your selective quoting.

Here is what the authors actually say w.r.t. the criticisms (all the comments are worth reading):

Our primary emphasis is ECC-256. Elliptic curve cryptography is widely deployed in modern systems, e.g., internet security and cryptocurrency.

For ECC-256, the space-efficient architecture uses 9,739 qubits with < 3-year runtime, the balanced architecture uses 11,961 qubits with < 1-year runtime, and the time-efficient architecture uses ~19,000 qubits with ~52-day runtime (or ~26,000 qubits with ~10-day runtime using higher parallelism). Space and time overheads are reported together within each architecture, not mixed across regimes.

The claim that our scheme requires 117 years selectively cites RSA-2048 under the most space-constrained architecture, which is one corner of a trade-off space we present clearly in Figure 3 of the work. We include RSA-2048 for completeness, and state explicitly that its runtimes are one to two orders of magnitude longer.

We believe our clearly labeled trade-offs constitute exactly the transparent resource accounting the commenter calls for.

Best regards,

Maddie, Qian, Robert, Dolev


The book "Mr. Wilson's Cabinet of Wonder", about the museum, is a good read: https://en.wikipedia.org/wiki/Mr._Wilson%27s_Cabinet_of_Wond... I recommend reading it after visiting; you don't want to spoil the first journey into the Jurassic.

Agree that “Stella Maris” is amazing for this deep engagement with art. In a similar vein, I think there are a couple of other books that do this. One is Anathem by Neal Stephenson, which is similar in that the foundations of math make an appearance. The other is “The Weil Conjectures” by Karen Olsson, which captures what it’s like to really do mathematics. Highly recommend both.


For 2, I don’t think you can break ties however you like, because that would give you random left or right associativity (https://en.m.wikipedia.org/wiki/Operator_associativity). For example, 2 - 4 - 7 would be either (2 - 4) - 7 = -9 or 2 - (4 - 7) = 5, depending on how you broke the tie.
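To make the point concrete, here's a small sketch (my own illustration, not from the linked article) of evaluating the same chain of subtractions with left vs right association:

```python
# Subtraction is not associative, so the fold direction matters.
from functools import reduce

def left_assoc(nums):
    # ((2 - 4) - 7): fold from the left
    return reduce(lambda a, b: a - b, nums)

def right_assoc(nums):
    # (2 - (4 - 7)): fold from the right
    return reduce(lambda a, b: b - a, reversed(nums))

print(left_assoc([2, 4, 7]))   # -9
print(right_assoc([2, 4, 7]))  # 5
```

Breaking the tie differently on each evaluation would flip you unpredictably between these two answers.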


When I was a high school student, I read “Artificial Life” by Steven Levy and got really into alife. The book had snapshots of a daughter of Codd’s CA, Langton’s loops, on its book sleeve. I was able to work out some of the rules from the snapshots and then deduce what the others were to repro this CA. I still chase that feeling I got from doing this.


https://scottaaronson.blog/?p=8525#comment-1997424

“Gil Kalai #23: So we’re perfectly clear, from my perspective your position has become like that of Saddam Hussein’s information minister, who repeatedly went on TV to explain how Iraq was winning the war even as American tanks rolled into Baghdad. I.e., you are writing to us from an increasingly remote parallel universe. The smooth exponential falloff of circuit fidelity with the number of gates has by now been seen in separate experiments from Google, IBM, Quantinuum, QuEra, USTC, and probably others I’m forgetting right now. Yes, IBM’s gate fidelity is a little lower than Google’s, but the exponential falloff pattern is the same. And, far from being “statistically unreasonable,” this exponential falloff is precisely what the simplest model of the situation (i.e., independent depolarizing noise on each qubit) would predict. You didn’t predict it, because you started from the axiom that quantum error-correction had to fail somehow—but the rest of us, who didn’t start from that axiom, did predict it!”

Ouch.
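The "simplest model" Scott invokes is easy to check numerically. A hedged toy sketch (my own, with an illustrative error rate, not the experiments' actual parameters): if each gate independently errs with probability p, the expected circuit fidelity falls off as (1 - p)^G in gate count G, which a quick Monte Carlo confirms.

```python
# Toy model of smooth exponential fidelity falloff under
# independent per-gate errors: F(G) ~ (1 - p)^G.
import random

def estimate_fidelity(num_gates, p, trials=100_000, seed=1):
    """Fraction of runs in which no gate errs (toy fidelity proxy)."""
    rng = random.Random(seed)
    clean = sum(
        all(rng.random() >= p for _ in range(num_gates))
        for _ in range(trials)
    )
    return clean / trials

p = 0.005  # illustrative per-gate error rate, not a measured value
for g in (100, 200, 400):
    est = estimate_fidelity(g, p)
    pred = (1 - p) ** g
    print(f"G={g:4d}  simulated={est:.4f}  (1-p)^G={pred:.4f}")
```

The simulated and predicted curves track each other closely, which is the "smooth exponential falloff" pattern being described.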


Hi Dave, nice to see you. Our quantum computer discussions go back to 2006, and as a member of the Google team you can certainly tell us about your perspective and personal angle if you were involved in one of the two recent assertions.

It is disappointing that you endorse Scott's uncalled-for and a little juvenile analogy. I think it is a wrong analogy whether I am right or wrong (both on the general question of quantum computation and on the specific question of my evaluation of the Google supremacy efforts).

In any case here is my response to Scott's comment:

"Hi everybody,

1) I found the analogy in #39 offensive and inappropriate.

2) As I said many times, I don’t take it as axiomatic that scalable quantum computing is impossible. Rather, I take the question of the possibility of scalable quantum computing as one of the greatest scientific problems of our time.

3) The question today is whether Google’s current fantastic claim of “septillion years beyond classic” advances us in our quest for a scientific answer. Of course, we need to wait for the paper and data, but based on our five-year study of the 2019 Google experiment I see serious reasons to doubt it.

4) Regarding our claim that the fitness of the digital prediction (Formula (77)) and the fidelity estimations are unreasonable, Scott wrote: “And, far from being “statistically unreasonable,” this exponential falloff is precisely what the simplest model of the situation (i.e., independent depolarizing noise on each qubit) would predict. You didn’t predict it, because you started from the axiom that quantum error-correction had to fail somehow—but the rest of us, who didn’t start from that axiom, did predict it!”

Scott, our concern is not with the exponential falloff. It is with the actual deviations of Formula (77)’s predictions (the “digital prediction”) from the reported fidelities. These deviations are statistically unreasonable (too small). The Google team provided a statistical explanation for this agreement based on three premises. These premises are unreasonable as well, and they contradict various other experimental findings. My post gets into a few more details, and our papers go into it in much further detail. I will gladly explain and discuss the technical statistical reasons why the deviations are statistically unreasonable.

5) “Yes, IBM’s gate fidelity is a little lower than Google’s, but the exponential falloff pattern is the same”

Scott, do you have a reference or link for this claim that the exponential falloff pattern is the same? Of course, one way (that I always suggested) to study the concern regarding the “too good to be true” a priori prediction in Google’s experiment is to compare it with IBM quantum computers."


>That's an EXTRAORDINARY claim and one that contradicts the experience of pretty much all other research and development in quantum error correction over the course of the history of quantum computing.

Not sure why you would say that. This sort of exponential suppression of errors is exactly how quantum error correction works and is why we think quantum computing is viable. Source: I have worked on quantum error correction for a couple of decades. Disclosure: I work on the team that did this experiment. More reading: lecture notes from back in the day explaining this exponential suppression: https://courses.cs.washington.edu/courses/cse599d/06wi/lectu...
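A hedged toy illustration of that exponential suppression (a classical repetition code with majority voting, not the code used in the experiment): with per-bit error probability p < 1/2, decoding fails only when more than half the bits flip, so the logical error rate shrinks exponentially as the distance d grows.

```python
# Repetition-code toy model of exponential error suppression.
from math import comb

def logical_error_rate(d, p):
    """P(majority vote fails) for odd distance d, bit-flip prob p."""
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range((d + 1) // 2, d + 1))

p = 0.01  # illustrative physical error rate
for d in (3, 5, 7, 9):
    print(f"d={d}: logical error rate ~ {logical_error_rate(d, p):.2e}")
```

Each step up in distance multiplies the logical error rate by roughly another factor of p, which is the suppression-with-scale behavior the experiment is probing.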


Yeah me too. I wrote a post about why it hurt so much to lose this place along similar lines https://dabacon.org/pontiff/2024/08/16/requiem-for-the-livin...


Maybe I’m the exception, but I went maybe 30 or 40 times. There was so much joy in sharing my childhood with my child. Also, the small gift shop had someone who knew their obscure technology history books; I must have bought 10 books from that shop.



Wait, this is a different John Bell (https://publish.uwo.ca/~jbell/) from the John Bell of the Bell inequalities (https://en.wikipedia.org/wiki/John_Stewart_Bell). But strangely, that John Bell has also worked on quantum foundations (looks like quantum logic and contextuality).


Oops, my bad!! Thanks for pointing that out! My rampant pareidolia must have filled in the 'S'.

(Not a defence, I was just recently thinking about the Bell Ineqs in terms of the Grothendieck Ineqs and this popped up in my feed)

