The Deep Learning Revolution and Its Implications for Computer Architecture and Chip Design

At the International Solid-State Circuits Conference (ISSCC) in San Francisco, California, USA, in February 2020, Dr. Jeff Dean of Google presented an overview of how Google sees the present and future of machine learning (ML).

He presented several examples of recent dramatic improvements from deep learning based on many-layered neural networks, including speech recognition, computer vision, language translation, and the more general approach of reinforcement learning.

He distinguished the initial training of a neural network, which may be quite time-consuming, from the subsequent fast operation of the optimized network, known as inference.
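To make that distinction concrete, here is a minimal sketch in JAX (chosen because TPUs are typically programmed through such frameworks) of a toy linear model: training requires many iterative gradient steps over the data, while inference is a single cheap forward pass. The model, data, and hyperparameters are invented for illustration and bear no relation to Google's actual systems.

```python
# Illustrative only: a toy linear model trained by gradient descent,
# then used for fast inference. Not Google's training stack.
import jax
import jax.numpy as jnp

def predict(params, x):
    """Forward pass of a one-layer linear model."""
    w, b = params
    return x @ w + b

def loss(params, x, y):
    """Mean squared error between predictions and targets."""
    return jnp.mean((predict(params, x) - y) ** 2)

# Synthetic data and initial parameters (shapes chosen for the sketch).
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (256, 8))
y = x @ jnp.ones((8, 1)) + 0.5
params = (jnp.zeros((8, 1)), jnp.zeros(1))

# Training: many repeated gradient-descent steps -- the slow phase.
grad_fn = jax.jit(jax.grad(loss))
for _ in range(500):
    g_w, g_b = grad_fn(params, x, y)
    params = (params[0] - 0.1 * g_w, params[1] - 0.1 * g_b)

# Inference: one cheap forward pass through the trained model.
fast_predict = jax.jit(predict)
print(fast_predict(params, x[:2]))
```

Even in this toy, training makes hundreds of passes over the data, while inference only evaluates the already-optimized weights once.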

He pointed out that tremendous improvements in performance have been achieved with specialized hardware that differs markedly from traditional processors; for example, much of the computation consists of low-precision matrix multiplications performed in parallel. He featured Google's Tensor Processing Unit (TPU) chips for inference, which operate both in data centers and in cell phones.
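As a rough illustration of what "low-precision matrix multiplication" means, the following sketch compares a matrix product computed in float32 with the same product computed in bfloat16, the 16-bit format TPUs use for bulk matrix arithmetic. This uses JAX's software support for bfloat16, not the TPU's actual datapath:

```python
# Illustrative only: the same matrix product in float32 and in bfloat16.
import jax
import jax.numpy as jnp

a = jax.random.normal(jax.random.PRNGKey(42), (128, 128), dtype=jnp.float32)
b = jax.random.normal(jax.random.PRNGKey(7), (128, 128), dtype=jnp.float32)

# Full-precision reference product.
full = a @ b

# Low-precision product: cast the inputs to bfloat16
# (8 exponent bits, 7 mantissa bits) before multiplying.
low = (a.astype(jnp.bfloat16) @ b.astype(jnp.bfloat16)).astype(jnp.float32)

# The cheap low-precision result stays close to the full-precision one,
# which is why specialized hardware can trade precision for throughput.
print(jnp.max(jnp.abs(full - low)))
```

The small discrepancy printed at the end illustrates the trade: neural networks tolerate this loss of precision well, and hardware that exploits it can pack far more multiplications into the same silicon and power budget.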

Finally, he described how Google is using deep learning to automate the design and layout of some of the same chips that perform deep learning. Results indicate that such an automated system can be trained to perform as well as a human designer, but orders of magnitude faster.

Access a companion article in the ISSCC 2020 Proceedings at IEEE Xplore.

A preprint of this article is also available at arXiv.org.

Several other plenary talks from ISSCC 2020 are available at the ISSCC website.