Technology Spotlight

 

Brain-Inspired Computing
Catherine Schuman, Oak Ridge National Laboratory

At the MIT Technology Review Symposium on Future Compute, held in December 2019 in Cambridge, Massachusetts, Dr. Schuman provided an overview of Brain-Inspired Computing, also known as Neuromorphic Computing.

Biological brains consist of arrays of neurons connected by synapses, with massive parallelism and distributed logic and memory. Early inspiration from the structure of the brain gave rise to the now-established field of artificial neural networks (ANNs), which are widely applied to artificial intelligence and machine learning. Dr. Schuman contrasted these ANNs with networks that emulate brains somewhat more closely by incorporating spiking neurons, which consume extremely little power. These newer systems may find application in AI for mobile edge systems, where power constraints are more critical. Similar systems may also be applied to the simulation of biological brains.
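
As a rough illustration of the spiking-neuron idea, the sketch below simulates a leaky integrate-and-fire (LIF) neuron, a common textbook abstraction: the membrane voltage integrates its input, leaks toward rest, and emits a discrete spike when it crosses a threshold. Because activity is reduced to sparse spike events, hardware built this way can idle at very low power between spikes. This is only a conceptual Python example, not the neuron model of any particular neuromorphic platform discussed in the talk.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, a common textbook
# abstraction of the spiking neurons described above. Illustrative only;
# not the model used by any specific neuromorphic chip.
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=20e-3,
                 v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    """Return the membrane-voltage trace and spike times (in seconds)."""
    v = v_rest
    voltages, spike_times = [], []
    for step, i_in in enumerate(input_current):
        # Leaky integration: decay toward rest, driven by the input current.
        v += (dt / tau) * (v_rest - v + i_in)
        if v >= v_threshold:            # threshold crossing -> emit a spike
            spike_times.append(step * dt)
            v = v_reset                 # reset after the spike
        voltages.append(v)
    return np.array(voltages), spike_times

# A constant drive above threshold produces a regular, sparse spike train.
trace, spikes = simulate_lif(np.full(200, 1.5))   # 200 steps of 1 ms each
print(f"{len(spikes)} spikes in {len(trace)} ms")
```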

Access the video of Dr. Schuman's talk at the MIT Future Compute website.

This Symposium also included many other talks covering the field of future computing, from AI to quantum computing. Access the agenda and the videos for most of these talks at the MIT Future Compute website.

 

The Deep Learning Revolution and Its Implications for Computer Architecture and Chip Design

At the International Solid State Circuits Conference (ISSCC) in San Francisco, California, USA in February 2020, Dr. Jeff Dean of Google presented an overview of how Google sees the present and future of machine learning (ML).

He presented several examples of recent dramatic improvements in deep learning based on many-layered neural networks, including voice recognition, computer vision, language translation, and the more general technique of reinforcement learning.

He distinguished the initial training of the neural network, which may be quite time-consuming, from the subsequent fast operation of the optimized network, known as inference.

He pointed out that tremendous improvements in performance have been achieved with specialized hardware, which differs substantially from traditional processors. For example, much of the computation consists of low-precision matrix multiplications performed in parallel. He featured the Google Tensor Processing Unit (TPU) chip for inference, which can operate both in data centers and in cell phones.
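
To make the low-precision arithmetic concrete, the sketch below quantizes two float32 matrices to 8-bit integers, multiplies them with 32-bit accumulation, and rescales the result, which is the general idea behind inference accelerators such as the TPU. It is a simplified illustration under stated assumptions (symmetric linear quantization, NumPy on a CPU), not the actual TPU datapath.

```python
# Sketch of low-precision inference arithmetic: quantize float32 matrices to
# int8, multiply with int32 accumulation, then rescale. A simplified
# illustration of the idea, not the actual TPU datapath.
import numpy as np

def quantize(x, n_bits=8):
    """Symmetric linear quantization of a float array to signed integers."""
    q_max = 2 ** (n_bits - 1) - 1                 # 127 for int8
    scale = max(float(np.abs(x).max()) / q_max, 1e-12)
    return np.round(x / scale).astype(np.int8), scale

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64)).astype(np.float32)   # e.g. activations
b = rng.standard_normal((64, 64)).astype(np.float32)   # e.g. weights

a_q, a_scale = quantize(a)
b_q, b_scale = quantize(b)

# Integer multiply-accumulate (the bulk of the work on an accelerator),
# followed by a cheap rescale back to floating point.
c_int32 = a_q.astype(np.int32) @ b_q.astype(np.int32)
c_approx = c_int32 * (a_scale * b_scale)

error = np.abs(c_approx - a @ b).max()
print(f"max absolute error vs. float32 matmul: {error:.3f}")
```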

Finally, he described how Google is using Deep Learning in the automated design and layout of some of the same chips that perform Deep Learning. Results indicate that such an automated system can be trained to perform as well as a human designer, but orders of magnitude faster.

Access the video of Dr. Dean’s presentation.

Access a companion article in the ISSCC 2020 Proceedings at IEEE Xplore.

A preprint of this article is also available at arXiv.org.

Several other plenary talks from ISSCC 2020 are available at the ISSCC website.

 

Wafer Scale Computer Processor

One of the awards given at the recent IEEE Honors Ceremony was the IEEE Spectrum Emerging Technology Award, presented to Cerebras Systems for its Wafer-Scale Engine (WSE). The WSE is the world’s largest “chip,” occupying most of the area of a 30-cm silicon wafer. It was designed to improve the speed and performance of deep learning in artificial intelligence (AI), with 400,000 parallel cores and 1.2 trillion transistors.

View the video at IEEE.tv.

This technology was also featured in an article in the January 2020 issue of IEEE Spectrum.

 

The Future of Computing: Bits + Neurons + Qubits

Dr. Dario Gil, Director of Research, IBM

At the International Solid State Circuits Conference (ISSCC) in San Francisco, California, USA in February 2020, Dr. Dario Gil of IBM gave an overview of how IBM sees the future of computing. He projects parallel advances in three technologies: conventional processors (“bits”), neural networks for AI (“neurons”), and quantum processors (“qubits”). Rather than any one of these technologies becoming dominant, he predicts major performance advances in all three, with heterogeneous systems that incorporate two or more of them addressing critical problems across the computing environment and integrating cloud and edge computing. Near-term applications of quantum computing may be in quantum simulations for materials development, while longer-term advances in AI are possible in combination with bits and neurons.

Access the video of Dr. Gil’s presentation.

Access a companion article in the ISSCC 2020 Proceedings.

A preprint of this article is also available.

Several other plenary talks from ISSCC 2020 are available.

 

The Electronics Resurgence Initiative: Innovating the 4th Wave of Electronics Development
Dr. Mark Rosker, Director of the Microsystems Technology Office, DARPA

At the DARPA Electronics Resurgence Initiative Summit 2019 in Detroit, Michigan, USA, Dr. Rosker presented an overview of semiconductor technology development and the role that the US government has played, and will continue to play, in coordinating and assisting it. A video of his talk is available.

Access the slides from his presentation (PDF, 3 MB).

His key point is that although the exponential improvement described by Moore’s Law is sometimes presented as a single development process spanning 50 years, it is more properly a sequence of several waves of development, each showing initial growth and later saturation. Each wave has been associated with a set of innovations in materials, devices, circuits, and architectures. The DARPA Electronics Resurgence Initiative is now promoting the 4th wave of semiconductor development, which includes innovations such as 3D heterogeneous integration, optimized AI chips, and designing for cybersecurity.

The program of the Summit is available and includes links to other videos and slides of many of the keynote presentations.

 

IRDS™ 2019 Highlights and Future Directions
Dr. Paolo Gargini, Chairman of IRDS™

IEEE Rebooting Computing Week was held in San Mateo, California, on 4-8 November 2019, and included a Workshop for the International Roadmap for Devices and Systems (IRDS™), the Industry Summit on the Future of Computing, the International Conference on Rebooting Computing (ICRC), and the Workshop on Quantum Computing. Videos of many of the presentations are available on IEEE.tv.

Dr. Gargini presented a brief overview of the past, present, and future of semiconductor roadmaps and IRDS™. Access the video on IEEE.tv.

While traditional 2D scaling is saturating, Dr. Gargini identified 3D power scaling, together with new circuit architectures, as the path forward for the period 2025-2040.

The most recent Roadmap is available on the IRDS™ website.

The new edition of the Roadmap is expected to be online by April.

 

Highlights from the Industry Summit on the Future of Computing
Bruce Kraemer, IEEE Industry Summit Chairman

The IEEE Industry Summit on the Future of Computing was held in San Mateo, California, on 4 November 2019, and consisted of a series of invited talks and panel presentations by leaders in the field. The slides for many of the presentations are linked from the Summit Program, and videos for many of the presentations are available on IEEE.tv, together with other presentations from IEEE Rebooting Computing Week.

Summit Chairman Bruce Kraemer presented a brief overview of the invited talks on quantum computing, AI, and memory-centric computing, as well as a panel on startups; the video is available on IEEE.tv.

Across all of these approaches to future computing, the technologies at this preliminary stage are remarkably diverse. For example, speakers on quantum computing presented superconducting, optical, and semiconducting solutions. Performance benchmarks that will permit these alternative technologies to be compared are still being developed. Despite concerns from some that Moore’s Law is ending, there was agreement that this is an exciting time for the computer industry.

 

AI Systems in a Hybrid World
Dr. Cindy Goldberg, Program Director, IBM Research AI Hardware Center

The IEEE Industry Summit on the Future of Computing was held in San Mateo, California, on 4 November 2019. Many of the invited presentations are available through IEEE.tv. One of the invited speakers was Dr. Cindy Goldberg of IBM Research, who presented an overview of IBM’s programs to develop improved hardware for deep learning on neural networks. While IBM also has a major program in quantum computing, it believes that future quantum and AI systems will address very different types of problems with very different types of hardware.

One of the key problems with present AI technology based on digital CMOS is that the power consumption for both training and inference is quite large. While approximate-computing schemes decrease power consumption, new algorithms and architectures may be needed to maintain performance. Further improvement in performance per watt may require analog AI cores with improved memory materials, which IBM is also developing to achieve a new Moore’s Law of improved performance for AI systems in the next decade.
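
As a rough sketch of the analog-AI-core concept, the example below models a matrix-vector multiply carried out in a resistive memory array: weights are stored as conductances, inputs are applied as voltages, and output currents accumulate the products, with small random perturbations standing in for analog write and read noise. Deep networks tolerate this kind of small, random error reasonably well, which is why trading precision for energy can be attractive. The noise levels and array sizes here are assumptions chosen for illustration; this is not IBM’s hardware or device model.

```python
# Conceptual model of an analog in-memory multiply-accumulate, the idea
# behind analog AI cores: weights live in the memory array as conductances,
# so the matrix-vector product is computed where the data are stored.
# Noise levels below are illustrative assumptions, not measured values.
import numpy as np

rng = np.random.default_rng(1)

def analog_matvec(weights, inputs, write_noise=0.02, read_noise=0.01):
    """Noisy crossbar multiply: perturb weights (imperfect programming) and
    inputs (imperfect readout), then accumulate column currents."""
    g = weights * (1.0 + write_noise * rng.standard_normal(weights.shape))
    v = inputs * (1.0 + read_noise * rng.standard_normal(inputs.shape))
    return g.T @ v                     # each output = sum_i V_i * G_ij

w = rng.standard_normal((256, 128)) / 16.0   # a layer's weight matrix
x = rng.standard_normal(256)                 # an input activation vector

exact = w.T @ x
approx = analog_matvec(w, x)
rel_error = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(f"relative error from analog non-idealities: {rel_error:.2%}")
```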

Looking to the future, IBM envisions hybrid cloud computing, comprising combinations of bits, neurons, and qubits, i.e., classical digital, classical analog, and quantum computing.

The entire 25-minute talk is available on IEEE.tv.