
The fact that the EDA companies are garbage in no way mitigates the fact that Google continues to peddle unsubstantiated snake oil.

This is easy to debunk from the Google side: release a tool. If you don't want to release a tool, then it's unsubstantiated and you don't get to publish. Simple.

That having been said:

1) None of these "AI" tools have yet demonstrated the ability to classify "This is datapath", "This is array logic", "This is random logic". This is the BIG win. And it won't just be a couple of percentage points in area or a couple of days saved when it works--it will be 25%+ in area and months in time.

2) Saving a couple of percentage points in random logic isn't impressive. If I have the compute power to run EDA tools with a couple of different random seeds, at least one run will likely be a couple percentage points better.
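The best-of-N seed sweep described in point 2 can be sketched as follows. This is a minimal illustration, not a real flow: `run_placement` is a hypothetical stand-in for invoking a place-and-route tool with a given seed, here simulated with ~1% run-to-run noise.

```python
import random

def run_placement(seed: int) -> float:
    """Hypothetical stand-in for one place-and-route run with a given seed.
    Simulates roughly 1% run-to-run variation around a nominal block area."""
    rng = random.Random(seed)
    return 100.0 * (1.0 + rng.gauss(0, 0.01))

def best_of_n(seeds):
    """Run the tool once per seed and keep the best (smallest-area) result."""
    results = {s: run_placement(s) for s in seeds}
    best_seed = min(results, key=results.get)
    return best_seed, results[best_seed]

best_seed, best_area = best_of_n(range(8))
```

With enough compute, the minimum over N independent runs will usually sit a couple of percentage points below the mean, which is the baseline any "AI" improvement has to beat.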

3) I really don't understand why they don't do stuff on analog/RF. The patterns are smaller and much better matches to the kind of reinforcement learning that current "AI" is suited for.

I put this snake oil in the same category as "financial advice"--if it worked, they wouldn't be sharing it and would simply be printing money by taking advantage of it.



> Google continues to peddle unsubstantiated snake oil

I read your comment, but I'm not following -- or maybe I disagree with it -- I'm not sure yet.

"Snake oil" is an emotionally loaded term that raises the temperature of the conversation. That usually makes having a conversation harder.

From my point of view, AlphaGo, AlphaZero, AlphaFold were significant achievements. Agree? Are you claiming that AlphaChip is not? Are you claiming they are perpetrating some kind of deception or exaggeration? Your numbered points seem like valid criticisms (I haven't evaluated them closely), but even if true, I don't see how they support your "snake oil" claim.


They have literally been caught faking AI demos; they brought distrust on themselves.


Really not sure how you're conflating product demos -- which are known to be pie in the sky across the industry (not just Google) -- with peer-reviewed research published in journals. Super basic distinction imho.


>peer reviewed research published in journals

Peer review doesn't mean as much as Elsevier would like you to believe. Plenty of peer-reviewed research is absolute trash.


All of the highest impact papers authored by DeepMind and Google Brain have appeared in Nature, which is the gold standard for peer-reviewed natural science research. What exactly are you trying to claim about Google's peer-reviewed papers?


Nature is just as susceptible to the perverse incentives at play in the academic publishing market as anyone else, and has had its share of controversies over the years, including having to retract papers after they were found to be bogus.

In and of itself, "Being published in a peer reviewed journal" does not place the contents of a paper beyond reproach or criticism.


From personal experience: in Nature Communications the handling editor and editor in chief absolutely do intervene, in my example to suppress a proper lit review that would have revealed the paper under review as much less innovative than claimed.


Peer review is not designed to combat fraud.


Well, here’s one exaggeration that was pretty obvious to me straight away as a somewhat disinterested observer. In her status on X, Anna Goldie says [1] “AlphaChip was one of the first RL methods deployed to solve a real-world engineering problem”. This seems very clearly untrue -- for example, here’s a real-world engineering use of reinforcement learning by Google AI themselves from 6 years ago [2], which, if you use Anna Goldie’s own timeline, is 2 years before AlphaChip.

[1] https://x.com/annadgoldie/status/1858531756506558688

[2] https://youtu.be/W4joe3zzglU?si=mFvZq8gEI6LeEQdC


That is definitely a cool project, but I don't see how it contradicts "one of the first RL methods deployed to solve a real-world engineering problem". "One of the first" does not mean literally the first ever.


Agreed, but if someone at your own company did it two years before you, in the context of something that recent, it's stretching credibility to say you were one of the first.


I mean, I think second is still "one of the first?" And, no offense to this project, but I don't know of it being used in a real industrial setting, whereas AlphaChip was used in TPU.


Yes. The sky is also blue?

However it's hard to see how being provably 2 years behind the first even in your own company in an incredibly hot area that people are doing tons of work in makes you suddenly second. By that logic I might still be in time to claim the silver for the 100m at the Paris olympics if I pop over there in the next 18 months or so.

I can see you created this account just to comment on this thread so I'm sure you have more inside information than I do given that I'm really not connected to this in any way. Enjoy your work at Google Research. I think you guys do cool stuff. It's a shame in my opinion that you choose to damage your credibility by making (and defending) such obviously false claims rather than concentrating on the genuinely innovative work you have done advancing the field.


Their material discovery paper turned out to have negligible significance.


If so, does this qualify as “snake oil”? What do you mean? Snake oil requires exaggeration and deception. Fair?

If a paper / experiment is done with intellectual honesty, great! If it doesn’t make a big splash, fine.


I think the paper was probably done honestly, but also very poorly. They claimed synthesis of 36 new materials. When reviewed, for 24 of the 36, "the predicted structure has ordered cations but there is no evidence for order, and a known, disordered version of the compound exists". In fact, counting other errors, all 36 claims were doubtful. This reflects badly on the authors, and worse on Nature's peer-review process.

https://x.com/Robert_Palgrave/status/1744383962913394758


>worse for peer review process of Nature.

Every scientist will tell you that "peer reviewed" is not a mark of quality, correctness, impact, value, accuracy, whatever.

Scientists care about replication. More correctly, they care that your work can be built upon. THAT is evidence of good science.


The paper is more or less a dead end. If there is another name you want to call it, by all means.


/[01]{8,}/: I was hoping to have a conversation. This is why I asked questions. Any responses to them?

Looking up the thread, you can see the context. Many of us pushed back against vague claims that AlphaChip was "snake oil". Like good engineers, we split apart the problem into clearer concepts. The "snake oil" proponents did not offer compelling replies, did they? Instead, they retreated to irrelevant points that have no bearing on making sense of the "snake oil" claim.

Sometimes technical people forget to bring their "debugging" skills to bear on conversations. There is a metaphorical connection; good debuggers would disambiguate terms, decompose the problem, answer questions, find cruxes, synthesize, find clearer terms, generate alternative explanations, and so on.


MRS is this week, you can go and join the conversations with people at the metal level. Probably even talk to the authors themselves!


> From my point of view, AlphaGo, AlphaZero, AlphaFold were significant achievements.

These things you mentioned had obvious benchmarks that were easily surpassed by the appropriate "AI". The evidence that they were better wasn't just significant, it was obvious.

That leaves the fact that, even with what appears to be maximal cooking of the books, the only thing AlphaChip seems able to beat is human, manual placement--not anything algorithmic, even from many, many generations ago.

Trying to pass that off as a significant "advance" in a "scientific publication" borders on scientific fraud and should definitely be called out.

The problem here is that I am certain that this is wired to the career trajectories of "Very Important People(tm)" and the fact that it essentially failed miserably is simply not politically allowed.

If they want to lie, they can do that in press releases. If they want published in something reputable, they should have to be able to provide proper evidence for replication.

And, if they can't do that, well, that's an answer itself, no?


> "scientific publication"

These air quotes suggest the commenter above doesn't think the paper qualifies as a scientific publication. Such a characterization is unfair.

When I read the Nature article titled "Addendum: A graph placement methodology for fast chip design" [1], I see writing that more than meets the bar for a scientific publication. For example:

> Since publication, we have open-sourced a software repository [21] to fully reproduce the methods described in our paper. External researchers can use this repository to pre-train on a variety of chip blocks and then apply the pre-trained model to new blocks, as was done in our original paper. As part of this addendum, we are also releasing a model checkpoint pre-trained on 20 TPU blocks [22]. For best results, however, we continue to recommend that developers pre-train on their own in-distribution blocks [18], and provide a tutorial on how to perform pre-training with our open-source repository [23].

[1]: https://www.nature.com/articles/s41586-024-08032-5

[18]: Yue, S. et al. Scalability and generalization of circuit training for chip floorplanning. In Proc. 2022 International Symposium on Physical Design 65–70 (2022).

[21]: Guadarrama, S. et al. Circuit Training: an open-source framework for generating chip floor plans with distributed deep reinforcement learning. GitHub https://github.com/google-research/circuit_training (2021).

[23]: Guadarrama, S. et al. Pre-training. GitHub https://github.com/google-research/circuit_training/blob/mai... (2021).


> Trying to pass that off as a significant "advance" in a "scientific publication" borders on scientific fraud and should definitely be called out.

If true, your stated concerns with the AlphaChip paper -- selective benchmarking and potential overselling of results -- reflect poor scientific practice and possible intellectual dishonesty. But this does not constitute scientific fraud, which occurs when the underlying method, experiment, or results are faked.

If the paper has issues with how it positions and contextualizes its contribution, criticism is warranted, sure. But don't confuse this with "scientific fraud".

Some context: for as long as benchmark suites have existed, people rightly comment on which benchmarks should be included and how they should be weighted.


As someone who has no skin in the game and is only loosely following this: there is a tool (https://github.com/google-research/circuit_training); the detractors claim they cannot reproduce Google's results with it (which is what Dean is commenting on); and Google plus 1-2 other companies claim to be using it internally with success (e.g., see the end of this article: https://deepmind.google/discover/blog/how-alphachip-transfor...).


There are benchmarks in this space. You can also bring your chip designs into the open and show what happens with different tools. You can run the algorithm on the placed designs that you sponsor for open source VLSI to show how much better they are.

None of this has been done. This is table stakes if you want to talk about your EDA algorithm advancement. If this weren't coming out of Google, everybody would laugh it out of the room (see what happened to a similar publication with similar claims from a Chinese source--everybody dismissed it out of hand--rightfully so even though that paper was MUCH better than anything Google has promulgated).

Extraordinary claims require extraordinary evidence. Nothing about AlphaChip even reaches ordinary evidence.

If they hadn't gotten a publication in Nature for effectively a failure, this would be way less contentious.


    > Nothing about AlphaChip even reaches ordinary evidence.
Your reply is wildly confident and dismissive. If correct, why did Nature choose to publish?


Can you stop with this pure appeal to authority? Publishing in Nature is not proof that it works. It's only proof that the paper has packaged the claim that it works semi-well.


As Markov claims, Nature did not follow its own policy. Since Google's results are only on their own designs, no one can replicate them. Nature is single-blind, so they probably didn't want to turn down Jeff Dean and lose future business from Google.


> if it worked, they wouldn't be sharing it and would simply be printing money by taking advantage of it.

Sure, there are some techniques in financial markets that are only valuable when they are not widely known. But claiming this pattern applies universally is incorrect.

Publishing a technique doesn't prove it doesn't work. (Stating it this way makes it fairly obvious.)

DeepMind, like many AI research labs, publishes important and useful research. One might ask, "Is a lab leaving money on the table by publishing?" Perhaps a better question is, "What 'game' is the lab playing, and over what time scale?"


    > EDA companies are garbage
I don't understand this comment. Can you please explain? Are they unethical? Or do they write poor software?


Yes and yes.

EDA companies are gatekeeping monopolies. They absolutely abuse their monopoly position to extract huge chunks of money out of companies, and are pretty much single-handedly responsible for the fact that the hardware startup ecosystem is moribund compared to that of the software startup ecosystem.

They have been horrible liars about performance and benchmarketing for decades. They dragged their feet miserably over releasing Linux versions of their software because they were extracting money based upon number of CPU licenses (everything was on Sparc which was vastly inferior). Their software hasn't really improved all that much over decades--mostly they benefited from Moore's Law. They have made a point of stifling attempts at interoperability and open data exchange. They have bought lots of competitors mostly to just shut them down. I can go on and on.

The EDA companies aren't quite Oracle--but they're not far off.

This is one of the reasons why Google is getting pounded over this--maybe even unfairly. People in the field are super sensitive about bullshit claims from EDA vendors--we've heard them all and been on the receiving end of the stick far too many times.


> The EDA companies aren't quite Oracle--but they're not far off.

Agreed with most of what you mentioned, but not that EDA companies are no worse than Oracle -- at least Oracle is still supporting popular and useful open-source projects, namely MySQL, VirtualBox, etc.

What open-source design software are these EDA companies supporting currently, even though most of their software originated from open-source EDA software from UC Berkeley, etc.?


> and are pretty much single-handedly responsible for the fact that the hardware startup ecosystem is moribund

Yes but not single-handedly -- it's them and the foundries, hand-in-hand.

No startup can compete with Synopsys because TSMC doesn't give out the true design rules to anybody smaller than Apple for finfet processes. Essentially their DRC+LVS software has become a DRM-encoded version of the design rule manual.


> pretty much single-handedly responsible for the fact that the hardware startup ecosystem is moribund compared to that of the software startup ecosystem.

This was the case before EDA companies even appeared. Hardware is hard because it's manufacturing. You can't "iterate quickly", every iteration costs millions of dollars and so does every mistake.


> Hardware is hard because it's manufacturing. You can't "iterate quickly", every iteration costs millions of dollars and so does every mistake.

This is true for injection molding and yet we do that all the time in small businesses.

A mask set for an older technology can be in the range of $50K-$100K. That's right about the same price as injection molds.

The main difference is that Solidworks is about $25K while Cadence, et al, is about a megabuck.


Agreed, in particular on #2

Given infinite time and compute, maybe the approach is significantly better. But that's just not practical. So unless you see dramatic shifts, no one is going to throw away proven results for your new approach, given the TTM penalty if it goes wrong.

The EDA industry is (has to be) ultra conservative.


    > The EDA industry is (has to be) ultra conservative.
What is special about EDA that requires it to be more conservative?


Taping out a chip is an incredibly expensive (7-8 figure) fixed cost. If the chips that come out have too many bugs (say, because your PD tools messed up some wiring for 1 in 10,000 blocks), then that money is gone. If you're Intel, this is enough to make people doubt the health of your firm; if you're a startup, you're just done.
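To make the stakes concrete, here is a back-of-the-envelope expected-loss sketch. All numbers (per-block error rate, block count, tapeout cost) are illustrative assumptions, not figures from the thread.

```python
def prob_any_block_fails(p_block: float, n_blocks: int) -> float:
    """Probability that at least one block has a tool-induced bug,
    assuming independent per-block failures."""
    return 1.0 - (1.0 - p_block) ** n_blocks

def expected_respin_cost(p_block: float, n_blocks: int, mask_cost: float) -> float:
    """Expected money lost to a respin (one-respin approximation)."""
    return prob_any_block_fails(p_block, n_blocks) * mask_cost

# Illustrative numbers only: 1-in-10,000 per-block error rate,
# 500 blocks on the die, $20M tapeout cost.
p = prob_any_block_fails(1e-4, 500)           # roughly a 5% chance of a respin
cost = expected_respin_cost(1e-4, 500, 20e6)  # roughly $1M expected loss
```

Even a seemingly tiny per-block error rate compounds across the die into a respin risk measured in millions, which is why the tools have to be so conservative.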


> if it worked, they wouldn't be sharing it and would simply be printing money by taking advantage of it.

This is a fallacious argument. A better chip design process does not eliminate all other risks like product-market fit or the upfront cost of making masks or chronic mismanagement.


Honestly this does not compute

> None of these "AI" tools have yet demonstrated the ability to classify "This is datapath", "This is array logic", "This is random logic".

Sounds like a good objective, one that could be added to training parameters. Or maybe it isn't needed (AI can 'understand' some concepts without explicitly tagging)

> If I have the compute power to run EDA tools with a couple of different random seeds, at least one run will likely be a couple percentage points better.

Then do it?! How long does it actually take to run? I know EDA tools creators are bad at some kinds of code optimization (and yes, it's hard) but let's say for a company like Intel, if it takes 10 days to rerun a chip to get 1% better, that sounds like a worthy tradeoff.

> I put this snake oil in the same category as "financial advice"--if it worked, they wouldn't be sharing it and would simply be printing money by taking advantage of it.

Yeah I don't think you understood the problem here. Good financial advice is about balancing risks and returns.



