How the hell would you verify an AI-generated silicon design?
Like, for a CPU, you want to be sure it behaves properly for the given inputs. Anyone remember the FDIV floating point bug in the original Pentium?
I mean, I guess if the chip is designed for AI, and AI outputs are inherently non-guaranteed, then a non-guaranteed AI-designed chip doesn't add any new uncertainty.
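For a toy design, "behaves properly for the given inputs" can literally mean checking every input. Here's a minimal sketch (all names are made up for illustration): a gate-level 4-bit adder checked exhaustively against its arithmetic spec. Real chips have astronomically many input combinations, which is exactly why formal equivalence checking and constrained-random verification exist.

```python
# Illustrative only: verifying a tiny combinational design by exhaustive
# simulation against its spec. This does not scale to real chips.

def full_adder(a, b, cin):
    # Gate-level full adder (the "design under test").
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_adder_4bit(x, y):
    # Chain four full adders; x and y are 4-bit integers.
    carry, result = 0, 0
    for i in range(4):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result + (carry << 4)

# Exhaustive check: the design must match the spec on all 256 input pairs.
assert all(ripple_adder_4bit(x, y) == x + y
           for x in range(16) for y in range(16))
print("4-bit adder verified exhaustively")
```

At n-bit widths this blows up as 2^(2n) cases, so in practice you'd prove equivalence formally rather than simulate everything.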
> How the hell would you verify an AI-generated silicon design?
I think you're asking a different question. In the context of the OP, researchers are exploring AI to solve deterministic but intractable problems in chip design, not to generate designs end to end.
Here's an excerpt from the paper.
"The objective is to place a netlist graph of macros (e.g., SRAMs) and standard cells (logic gates, such as NAND, NOR, and XOR) onto a chip canvas, such that power, performance, and area (PPA) are optimized, while adhering to constraints on placement density and routing congestion (described in Sections 3.3.6 and 3.3.5). Despite decades of research on this problem, it is still necessary for human experts to iterate for weeks with the existing placement tools, in order to produce solutions that meet multi-faceted design criteria."
The hope is that Reinforcement Learning can find solutions to such complex optimization problems.
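To make the optimization framing concrete, here is a toy sketch (not the paper's RL method, and the netlist and grid are invented): place a handful of cells on a grid so as to minimize total half-perimeter wirelength (HPWL), using random-swap hill climbing. Real placers juggle density, congestion, and timing on millions of cells, which is where learned methods come in.

```python
# Toy placement-as-optimization sketch: minimize half-perimeter wirelength
# (HPWL) of a made-up netlist on a 4x4 "chip canvas" via random swaps.
import random

random.seed(0)
GRID = 4                                   # 4x4 grid of legal slots
cells = list(range(8))                     # 8 cells to place
nets = [(0, 1), (1, 2), (2, 3), (4, 5), (5, 6), (6, 7), (0, 7)]

def hpwl(pos):
    # HPWL: for each net, the half-perimeter of its pins' bounding box.
    total = 0
    for net in nets:
        xs = [pos[c][0] for c in net]
        ys = [pos[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Random initial placement onto distinct grid slots.
slots = random.sample([(x, y) for x in range(GRID) for y in range(GRID)],
                      len(cells))
pos = dict(zip(cells, slots))

best = hpwl(pos)
for _ in range(2000):                      # random-swap hill climbing
    a, b = random.sample(cells, 2)
    pos[a], pos[b] = pos[b], pos[a]
    cost = hpwl(pos)
    if cost <= best:
        best = cost                        # keep improving/equal swaps
    else:
        pos[a], pos[b] = pos[b], pos[a]    # revert worsening swaps

print("final wirelength:", best)
```

Hill climbing gets stuck in local minima on real instances; the RL angle is to learn a policy that generalizes across netlists instead of re-searching from scratch each time.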
> Despite decades of research on this problem, it is still necessary for human experts to iterate for weeks with the existing placement tools, in order to produce solutions that meet multi-faceted design criteria.
Ironically, this sounds a lot like building a bot to play StarCraft, which is exactly what AlphaStar did. I had no idea that EDA layout is still so difficult and manual in 2024. This seems like a very worthy area of research.
I am not an expert in AI/ML, but is the ultimate goal this: train on as many open-source circuit designs as possible to build a base model, then solve IC layout problems via reinforcement learning, similar to AlphaStar, and finally use the trained model to do inference during IC layout?
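That "pretrain, RL fine-tune, then infer" pipeline can be sketched in miniature. Everything below is hypothetical and drastically simplified: the "policy" is just a softmax over four candidate placements with hand-picked rewards, trained with REINFORCE; a real system would use a learned network over the netlist graph.

```python
# Hypothetical miniature of the pipeline: start from "pretrained" (here,
# uniform) policy parameters, fine-tune with REINFORCE against a reward,
# then run inference by picking the policy's favorite action.
import math
import random

random.seed(1)

# Made-up rewards for 4 candidate placements (higher = better PPA).
rewards = [0.1, 0.9, 0.3, 0.2]
logits = [0.0] * 4                 # "pretrained" policy parameters

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# RL fine-tuning: sample an action, nudge logits by advantage * grad(log pi).
lr = 0.5
baseline = sum(rewards) / len(rewards)
for _ in range(500):
    p = softmax(logits)
    a = random.choices(range(4), weights=p)[0]
    adv = rewards[a] - baseline
    for i in range(4):             # d/d logits of log softmax: 1[i==a] - p[i]
        logits[i] += lr * adv * ((1.0 if i == a else 0.0) - p[i])

# Inference: pick the highest-probability placement.
best = max(range(4), key=lambda i: softmax(logits)[i])
print("chosen placement:", best)
```

The policy concentrates on the highest-reward candidate because only it has a positive advantage; sampling any other action shrinks that action's own logit.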
This is not a board where you put resistors, capacitors and ICs on a substrate.
These are chip layouts used for fabbing chips. I don't think you will find many open source designs.
EDA vendors work closely with foundries (TSMC, Samsung, GlobalFoundries). This is bleeding-edge stuff to get the best performance for NVIDIA, AMD, or Intel.
As an individual, it's very hard and expensive to fab your chip (though there are companies that pool multiple designs).
A well-working CPU is probably beside the point. What's important now is for researchers to publish papers using or talking about AI, then for executives and managers to deploy AI in their companies, then to sell AI PCs (somehow, we are already at this step), whatever the results are. Customer issues will be solved by using more AI (think chatbots) until morale improves.
> Like, for a CPU, you want to be sure it behaves properly for the given inputs. Anyone remember the FDIV floating point bug in the original Pentium?

> I mean, I guess if the chip is designed for AI, and AI outputs are inherently non-guaranteed, then a non-guaranteed AI-designed chip doesn't add any new uncertainty.
Unless it is...