What's New

Feature Article

IRDS™ Roadmap Chapter on Cryogenic Electronics and Quantum Information Processing (CEQIP)

The 2020 IRDS™ Roadmap includes a chapter on CEQIP chaired by Dr. Scott Holmes of Booz-Allen, IARPA, and the IEEE Council on Superconductivity.

This chapter describes several developing technologies that do not yet have many mature products.

These include superconducting electronics, cryogenic semiconductor electronics, and quantum computing.

Superconducting electronic systems typically consist of medium-scale integrated circuits based on niobium Josephson junctions, operating at cryogenic temperatures of around 4 K. Applications are developing in digital signal processing at radio frequencies and in ultra-low-power computing.

Cryogenic semiconductor electronics may be designed to operate below 100 K, or even below 1 K. These are typically interface circuits for cryogenic sensor arrays and superconducting electronic systems.

Quantum computing systems are in the research stage, with many alternative technologies being explored for making arrays of quantum bits or “qubits”. The leading technologies at present are superconducting circuits and trapped ions, but others are surveyed as well.

Access the CEQIP chapter at the IRDS™ website.

This is available online without charge; however, users must first subscribe to the IRDS™ Technical Community.

Other IRDS™ Chapters are available at the IRDS™ website.

A video overview of last year’s CEQIP chapter by Dr. Holmes is also available at IEEE.tv.

Technology Spotlight

The Deep Learning Revolution and Its Implications for Computer Architecture and Chip Design

At the International Solid-State Circuits Conference (ISSCC) in San Francisco, California, USA in February 2020, Dr. Jeff Dean of Google presented an overview of how Google sees the present and future of machine learning (ML).

He presented several examples of recent dramatic improvements from deep learning, which is based on neural networks with many layers, in applications including voice recognition, computer vision, language translation, and more general “reinforcement learning”.

He distinguished the initial training of a neural network, which may be quite time-consuming, from the subsequent fast operation of the optimized network, known as inference.
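
To make this distinction concrete, the following is a minimal sketch in Python/NumPy, purely illustrative and hypothetical rather than anything Google uses: a tiny neural network is first trained by slow, iterative weight updates, after which inference is a single fast forward pass through the frozen weights.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn XOR, a classic task that requires a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 tanh units; these weights are what training optimizes.
W1 = rng.normal(0.0, 1.0, (2, 8))
W2 = rng.normal(0.0, 1.0, (8, 1))

def forward(x, W1, W2):
    h = np.tanh(x @ W1)                    # hidden activations
    out = 1.0 / (1.0 + np.exp(-(h @ W2)))  # sigmoid output
    return out, h

# --- Training: slow, iterative optimization over many passes ---
lr = 0.5
for step in range(5000):
    out, h = forward(X, W1, W2)
    dz = out - y                           # output-layer error signal
    dW2 = h.T @ dz
    dW1 = X.T @ ((dz @ W2.T) * (1.0 - h**2))
    W2 -= lr * dW2
    W1 -= lr * dW1

# --- Inference: one fast forward pass with the frozen, optimized weights ---
pred, _ = forward(X, W1, W2)
print(np.round(pred, 2))  # should be close to [[0], [1], [1], [0]]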

He pointed out that tremendous improvements in performance have been achieved with specialized hardware, which is quite different from traditional processors. For example, much of the computation is low-precision matrix multiplication performed in parallel. He featured the Google Tensor Processing Unit (TPU) chip for inference, which can operate both in data centers and in cell phones.
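
To illustrate that point, here is another hypothetical Python/NumPy sketch, not the TPU’s actual datapath: float32 matrices are quantized to 8-bit integers, multiplied with integer arithmetic that accumulates in 32 bits, and rescaled back to floating point, with only a small loss of accuracy.

import numpy as np

def quantize(a, bits=8):
    # Symmetric linear quantization of a float array to signed integers.
    scale = np.abs(a).max() / (2**(bits - 1) - 1)
    return np.round(a / scale).astype(np.int8), scale

rng = np.random.default_rng(1)
A = rng.normal(size=(64, 128)).astype(np.float32)
B = rng.normal(size=(128, 32)).astype(np.float32)

qA, sA = quantize(A)
qB, sB = quantize(B)

# Integer multiply with wide (int32) accumulation, then rescale to float.
C_lowp = (qA.astype(np.int32) @ qB.astype(np.int32)) * (sA * sB)

C_fp32 = A @ B
rel_err = np.abs(C_lowp - C_fp32).max() / np.abs(C_fp32).max()
print(f"max relative error: {rel_err:.3f}")  # typically around 1%

The wide accumulator is the key design point here: 127 × 127 × 128 still fits comfortably in 32 bits, so an integer matrix unit can avoid floating-point hardware almost entirely.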

Finally, he described how Google is using deep learning in the automated design and layout of some of the same chips that perform deep learning. Results indicate that such an automated system can be trained to perform as well as a human designer, but is orders of magnitude faster.

Access the video of Dr. Dean’s presentation.

Access a companion article in the ISSCC 2020 Proceedings at IEEE Xplore.

A preprint of this article is also available at arXiv.org.

Several other plenary talks from ISSCC 2020 are available at the ISSCC website.