During a talk at this year's GPU Technology Conference, Nvidia's chief scientist and senior vice president of research, Bill Dally, talked a great deal about using GPUs to accelerate various stages of the design process behind modern GPUs and other SoCs.
Nvidia believes some of these tasks could be done better and much faster with machine learning than by engineers working by hand, freeing those engineers to focus on more advanced aspects of chip development.
Dally says Nvidia has identified four areas where using machine learning techniques can significantly impact the typical development timetable.
Mapping where power is used in a GPU is an iterative process that takes three hours with a conventional CAD tool, but only minutes with an AI model trained specifically for this task. Once trained, the model can shave the time down to seconds. Of course, AI models trade some accuracy for speed, but Dally says Nvidia's tools already achieve 94 per cent accuracy.
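As a rough illustration of the idea (and only that), the sketch below trains a small regressor to stand in for a slow power-analysis run; the tile grid, the features, and the "ground truth" power numbers are all synthetic placeholders rather than Nvidia's data or tooling.

```python
# Minimal sketch of a power-map surrogate, assuming per-tile features
# (e.g. toggle rate, cell density, clock load) have already been extracted.
# All names and the synthetic data below are illustrative, not Nvidia's flow.
import torch
import torch.nn as nn

torch.manual_seed(0)

N_TILES, N_FEATURES = 4096, 8                       # hypothetical grid of chip tiles
features = torch.rand(N_TILES, N_FEATURES)          # stand-in for extracted activity data
true_power = features @ torch.rand(N_FEATURES, 1)   # placeholder "reference" power from a slow CAD run

model = nn.Sequential(                              # small regressor: features -> estimated power per tile
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(500):                             # fit the surrogate to the reference power numbers
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(features), true_power)
    loss.backward()
    opt.step()

# Once trained, a full-chip power map is a single forward pass,
# versus hours of conventional power analysis.
with torch.no_grad():
    predicted_map = model(features)
```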
Training AI models to make accurate predictions of parasitics can eliminate much of the manual work involved in making the minor adjustments needed to meet the desired design specifications. Nvidia uses graph neural networks running on GPUs to predict these parasitics.
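Dally did not detail the architecture, but the general shape of such a model can be sketched in plain PyTorch: nodes carry per-net features, a couple of message-passing layers aggregate neighbourhood information, and a small head regresses a parasitic value per net. Everything below, from the graph to the capacitance targets, is invented for illustration.

```python
# A minimal graph-neural-network sketch for per-net parasitic prediction.
# The netlist graph, node features, and targets are synthetic; this only
# illustrates the general GNN approach described, not Nvidia's model.
import torch
import torch.nn as nn

torch.manual_seed(0)
N_NODES, N_FEATS = 200, 16                                    # hypothetical nets and their features
x = torch.rand(N_NODES, N_FEATS)                              # e.g. driver strength, fanout, length estimate
adj = (torch.rand(N_NODES, N_NODES) < 0.02).float()           # random sparse connectivity
adj = torch.clamp(adj + adj.T + torch.eye(N_NODES), max=1.0)  # symmetric adjacency with self-loops
deg_inv_sqrt = adj.sum(1).pow(-0.5)
norm_adj = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]  # GCN-style normalisation

class GCNLayer(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
    def forward(self, x, a):
        return torch.relu(self.lin(a @ x))                    # aggregate neighbours, then transform

class ParasiticsGNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.g1, self.g2 = GCNLayer(N_FEATS, 32), GCNLayer(32, 32)
        self.head = nn.Linear(32, 1)                          # predicted parasitic value per node
    def forward(self, x, a):
        return self.head(self.g2(self.g1(x, a), a))

target_cap = torch.rand(N_NODES, 1)                           # placeholder for extracted parasitics
model = ParasiticsGNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(300):
    opt.zero_grad()
    nn.functional.mse_loss(model(x, norm_adj), target_cap).backward()
    opt.step()
```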
Dally explained that one of the biggest challenges in designing modern chips is routing congestion, a defect in a particular circuit layout where the transistors and the many tiny wires that connect them are not optimally placed. This condition can lead to something akin to a traffic jam, but in this case it's bits instead of cars. By using a graph neural network, engineers can quickly identify problem areas and adjust their placement and routing accordingly.
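As a toy illustration of that last step, the snippet below assumes some congestion model has already scored every placement tile and simply flags the tiles above a chosen threshold for engineers to revisit; the grid size and cut-off are arbitrary assumptions.

```python
# Hedged sketch: once a congestion model has scored each placement tile,
# flagging the "traffic jam" tiles is straightforward. The threshold and
# grid here are illustrative values, not from Nvidia's flow.
import torch

torch.manual_seed(1)
congestion = torch.rand(64, 64)                        # predicted routing demand per tile (0..1)
HOTSPOT_THRESHOLD = 0.9                                # assumed cut-off for congested tiles

hotspots = (congestion > HOTSPOT_THRESHOLD).nonzero()  # tile coordinates to revisit
print(f"{len(hotspots)} tiles need placement or routing adjustments")
for y, x in hotspots[:5].tolist():                     # engineers would inspect these regions
    print(f"  tile ({y}, {x}): predicted congestion {congestion[y, x].item():.2f}")
```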
According to Techspot, Nvidia is essentially trying to use AI to critique chip designs made by humans. Instead of embarking on a labour-intensive and computationally expensive process, engineers can create a surrogate model and quickly evaluate and iterate on it using AI. The company also wants to use AI to design the most basic features of the transistor logic used in GPUs and other advanced silicon.
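A minimal sketch of that surrogate idea, with invented "design knobs" and an invented scoring function standing in for a slow signoff run: fit a cheap model on a modest number of expensive evaluations, screen thousands of candidates with it, and send only the most promising ones back to the full flow.

```python
# Surrogate-based design iteration, under the assumption that a cheap learned
# model stands in for an expensive analysis run. The knobs and scoring below
# are purely illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

def expensive_evaluation(knobs):        # stand-in for a slow, compute-heavy analysis run
    return (knobs - 0.3).pow(2).sum(dim=-1, keepdim=True)

# Fit a surrogate on a modest number of expensive samples...
train_knobs = torch.rand(256, 4)
train_scores = expensive_evaluation(train_knobs)
surrogate = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(1000):
    opt.zero_grad()
    nn.functional.mse_loss(surrogate(train_knobs), train_scores).backward()
    opt.step()

# ...then screen thousands of candidate designs cheaply and only send the most
# promising one back to the full flow for verification.
candidates = torch.rand(10_000, 4)
with torch.no_grad():
    best = candidates[surrogate(candidates).argmin()]
print("most promising candidate (to verify with the full flow):", best.tolist())
```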
The trained AI model is used to correct design errors until the cell is complete. Nvidia claims that, to date, it has achieved a success rate of 92 per cent. In some cases, the AI-engineered cells were smaller than those made by humans. This breakthrough could help improve the design's overall performance while reducing the chip's size and power requirements.
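The correct-until-complete loop can be pictured roughly as follows; both the rule checker and the fix generator here are placeholders, not Nvidia's actual tooling, and the numbers are arbitrary.

```python
# Hedged sketch of the "correct until clean" loop described above: a checker
# reports violations and a learned policy proposes a fix, repeated until the
# cell passes or the attempt budget runs out.
import random

random.seed(0)

def check_design_rules(cell):
    """Stand-in for a design-rule check: returns a list of violation descriptions."""
    return [f"violation-{i}" for i in range(cell["violations"])]

def propose_fix(cell, violation):
    """Stand-in for the model's suggested layout edit; assumed to resolve one violation."""
    return dict(cell, violations=cell["violations"] - 1)

def generate_cell(max_iterations=20):
    cell = {"violations": random.randint(0, 5)}     # freshly generated layout, possibly dirty
    for _ in range(max_iterations):
        violations = check_design_rules(cell)
        if not violations:
            return cell, True                       # cell is clean: count as a success
        cell = propose_fix(cell, violations[0])
    return cell, False                              # unfixable cells fall back to human designers

cell, ok = generate_cell()
print("cell completed automatically" if ok else "cell handed back to a human engineer")
```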