Technology Spotlight

 

In Pursuit of 1000X: Disruptive Research for the Next Decade of Computing

Keynote Video Lecture for Intel Labs Day, 3 December 2020

Coordinated by Dr. Richard Uhlig, Director of Intel Labs

In the past, the semiconductor industry depended on Moore’s Law to achieve exponential improvement in computing performance. But Moore’s Law is approaching its limits, so alternative approaches will be needed to maintain that growth. Intel Labs is carrying out advanced research into a variety of novel approaches in hardware and software that promise improvements of 1000X or more in speed, efficiency, or other performance metrics. Dr. Uhlig introduces Intel researchers who address the following five approaches:
(1) Integrated Photonics
(2) Neuromorphic Computing
(3) Quantum Computing
(4) Confidential Computing
(5) Machine Programming

An overview of the lecture is available at HPCwire.

The 55-minute keynote video and the complete slide deck (PDF, 9 MB) are also available.

The entire set of videos from Intel Labs Day is available at the Intel website.

 

The Future of In-Memory Computing

Keynote address at the In-Memory Computing Summit, held online in October 2020.

Presented by Terry Erisman of GridGain Systems.

A large and increasing portion of today’s computing workload involves Big Data: processing large databases in real time. This is handled most efficiently by in-memory computing (IMC), which avoids the latency of moving data into and out of disk-based storage. The In-Memory Computing Summit has been an annual series of conferences in the US and Europe, but this year, due to COVID-19, it was held as a virtual event.
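As a rough, generic illustration of the idea (not GridGain’s API), the sketch below contrasts a read-through in-memory cache with repeated round-trips to a slow backing store; the 5 ms latency figure and the load_from_disk helper are invented for the example.

```python
import time

# Hypothetical backing store: every miss pays a storage/network round-trip.
def load_from_disk(key):
    time.sleep(0.005)              # simulate ~5 ms of storage latency (made up)
    return {"id": key, "value": key * 2}

cache = {}                         # the "in-memory" tier: a plain dict here

def get(key):
    # Read-through pattern: serve from RAM when possible,
    # fall back to the slow store only on a cache miss.
    if key not in cache:
        cache[key] = load_from_disk(key)
    return cache[key]

start = time.perf_counter()
for _ in range(1000):
    get(42)                        # 1 miss + 999 hits
print(f"1000 reads took {time.perf_counter() - start:.3f} s")
```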

Mr. Erisman, VP of GridGain Systems, gave the opening presentation, reviewing the field and providing an introduction to some of the other talks.

He described how, over the past few years, IMC has become mainstream, mostly in cloud data centers, and how it increasingly uses non-volatile persistent memory (PMEM).

He also described applications, which were initially mostly in financial services, but have now spread to a wide variety of industries. The growth of IMC is expected to continue through the next decade.

For further details, see the video and slides of the presentation at the IMC Summit website.

Slides and videos of the other presentations at the Summit are also available.

 

Types of Deep Learning Hardware
Interview with Bradley Geden of Synopsys

Artificial intelligence based on “deep learning” is rapidly being implemented in a wide variety of edge systems, mostly using commercial chips that fall into one of several architectural types. Ed Sperling, Editor of Semiconductor Engineering, interviewed Bradley Geden of Synopsys about these different types and the applications suited to each.

These include systolic arrays, 2D coarse-grained reconfigurable arrays, and parallel pipelines. In each case, a 2D array of processing elements carries out matrix operations (multiply-accumulates, or MACs) on the values held in the network’s artificial neurons. These are all digital computational arrays, in contrast to the analog memristor arrays under development elsewhere.

Systolic arrays are hard-wired for MAC operations, and are used for basic image recognition neural networks. The reconfigurable arrays are more like FPGAs, with greater flexibility for a wider range of algorithms, but are more complex to program. Parallel pipelines are optimized for high-speed throughput, involving complex calculations on real-time data.
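To make the common core of these architectures concrete, here is a minimal sketch of the multiply-accumulate loop that such arrays hardwire, written as a plain matrix-vector product; it is purely illustrative and does not model any particular vendor’s datapath.

```python
import numpy as np

# The operation a MAC array hardwires: y = W @ x, built from
# repeated multiply-accumulate steps acc += w * x.
def mac_matvec(W, x):
    rows, cols = W.shape
    y = np.zeros(rows)
    for i in range(rows):
        acc = 0.0
        for j in range(cols):
            acc += W[i, j] * x[j]   # one MAC per processing element per step
        y[i] = acc
    return y

W = np.random.randn(4, 3)           # toy weight matrix
x = np.random.randn(3)              # toy activation vector
assert np.allclose(mac_matvec(W, x), W @ x)
```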

Mr. Geden also emphasized design synthesis approaches for implementing these chips. Given the repetitive nature of these structures, a hierarchical approach may often be more flexible and easier to alter than a “flat” approach. Further information from Synopsys on design tools for AI systems is available at the Synopsys website.

For further details, watch the video.

 

DNA Storage and Computing at Catalog DNA
Interview with CTO Dave Turek

DNA has long been known as a biological data-storage medium, offering extremely high density, low energy use, low error rates, and excellent long-term stability. It has also been proposed as a medium for digital data storage, but write and read rates have thus far been too slow to be practical.

That may be starting to change, according to Catalog DNA, a new MIT spinoff company. Catalog’s CTO, Dave Turek, until recently a VP at IBM, was interviewed in a video at InsideHPC.

Mr. Turek explained that Catalog has developed a prototype write system based on ink-jet printing technology, which enables write speeds of greater than 1 MB per second. This was demonstrated by storing the entire collection of Wikipedia pages (about 14 GB) in a small vial of liquid. Readout was also demonstrated using conventional DNA sequencing machines, although this is still somewhat slower. They anticipate major increases in both writing and reading speeds as the technology develops further. In addition, they believe that this technology can go beyond data storage to logical processing as well.
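As a purely conceptual illustration of how digital data can be represented in DNA, the sketch below uses a generic two-bits-per-base mapping. This is not Catalog’s scheme, which assembles pre-synthesized DNA fragments combinatorially rather than encoding base by base.

```python
# Generic illustration: map each pair of bits to one DNA base (2 bits per base).
# NOT Catalog's method, which combines pre-made DNA fragments combinatorially.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"Hello, DNA"
strand = encode(message)
print(strand)                      # 40 bases for 10 bytes
assert decode(strand) == message
```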

Further details of how this fabrication technology works are shown in this video.

 

“No Transistor Left Behind”
Raja Koduri, VP, Intel

Hot Chips is an annual IEEE symposium on high-performance chips, usually held in the San Francisco Bay Area in August. This year, due to COVID-19, it was held as a virtual conference. Access the program at the Hot Chips website.

One of the keynotes was presented by Raja M. Koduri, Senior Vice President, Chief Architect, and General Manager of Architecture, Graphics, and Software at Intel. The video of his presentation is available, and an overview appears at The Next Platform.

His talk, titled “No Transistor Left Behind,” presented a review of the computer industry and its prospects, focusing on how more efficient use of transistor resources can bring about major improvements in performance, far beyond the improvements in the transistors themselves. Achieving this will require extensive hardware-software co-design across the entire industry.

The talk started with a brief tribute to the late Frances Allen, a pioneering computer scientist known for her foundational work on optimizing compilers.

Mr. Koduri went on to review the past development of computer hardware and software and its implications for the future. Each period has been characterized by a dominant set of computer applications: first the PC era, followed more recently by the Mobile and Cloud era. We are now entering the Intelligence era, characterized by massive growth of data that can only be handled by AI systems. The demand for processing is rising exponentially and is expected to grow by a factor of 1000 by 2025. Meeting it will require both general-purpose and specialized processors delivering exascale performance beyond today’s data-center supercomputers, along with a new contract between hardware and software developers.

 

Brain-Inspired Computing
Catherine Schuman, Oak Ridge National Laboratory

At the MIT Technology Review Symposium on Future Compute, held in December 2019 in Cambridge, Massachusetts, Dr. Schuman provided an overview of Brain-Inspired Computing, also known as Neuromorphic Computing.

Biological brains consist of arrays of neurons connected by synapses, with massive parallelism and distributed logic and memory. Early inspiration from the structure of the brain gave rise to the now-established field of artificial neural networks (ANNs), which are widely applied to artificial intelligence and machine learning. Dr. Schuman contrasted these ANNs with networks that emulate brains somewhat more closely by using spiking neurons, which consume extremely little power. These newer systems may find application in AI for mobile edge systems, where power constraints are more critical. Similar systems may also be applied to the simulation of biological brains.
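As an illustration of what a spiking neuron computes, the sketch below implements a minimal leaky integrate-and-fire model, the kind of dynamics neuromorphic hardware typically approximates; the leak, threshold, and input values are arbitrary and chosen only for demonstration.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks toward zero, integrates incoming current, and emits a spike
# (then resets) whenever it crosses a threshold.
def lif(input_current, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for i_t in input_current:
        v = leak * v + i_t          # leak, then integrate the input
        if v >= threshold:          # fire
            spikes.append(1)
            v = 0.0                 # reset after the spike
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(0)
current = rng.uniform(0.0, 0.3, size=50)   # arbitrary input train
print(lif(current))                        # sparse 0/1 spike train
```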

Access the video of Dr. Schuman's talk at the MIT Future Compute website.

This Symposium also included many other talks covering the field of future computing, from AI to quantum computing. Access the agenda and the videos for most of these talks at the MIT Future Compute website.

 

The Deep Learning Revolution and Its Implications for Computer Architecture and Chip Design

At the International Solid State Circuits Conference (ISSCC) in San Francisco, California, USA in February 2020, Dr. Jeff Dean of Google presented an overview of how Google sees the present and future of machine learning (ML).

He presented several examples of recent dramatic improvements from deep learning based on many-layered neural networks, including voice recognition, computer vision, language translation, and more general “reinforcement learning”.

He distinguished the initial training of a neural network, which may be quite time-consuming, from the subsequent fast operation of the optimized network, known as inference.

He pointed out that tremendous improvements in performance have been achieved with specialized hardware that is quite different from traditional processors; much of the computation, for example, is low-precision matrix multiplication performed in parallel. He featured the Google Tensor Processing Unit (TPU) chip for inference, which can operate both in data centers and in cell phones.
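As a rough sketch of why low-precision arithmetic helps, the example below quantizes weights and activations to 8-bit integers, accumulates in 32-bit integers, and rescales the result. The scaling scheme and matrix shapes are invented for illustration; this is not Google’s TPU implementation.

```python
import numpy as np

# Rough sketch of 8-bit quantized matrix multiplication: real values are
# scaled into int8, multiplied and accumulated as integers, then rescaled.
def quantize(x):
    scale = np.max(np.abs(x)) / 127.0
    return np.round(x / scale).astype(np.int8), scale

W = np.random.randn(64, 128).astype(np.float32)   # toy weights
A = np.random.randn(128, 32).astype(np.float32)   # toy activations

Wq, w_scale = quantize(W)
Aq, a_scale = quantize(A)

# Integer MACs with int32 accumulation, then a single rescale back to float.
C_int = Wq.astype(np.int32) @ Aq.astype(np.int32)
C_approx = C_int * (w_scale * a_scale)

print("max abs error vs. float32:", np.max(np.abs(C_approx - W @ A)))
```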

Finally, he described how Google is using deep learning to automate the design and layout of some of the same chips that perform deep learning. Results indicate that such an automated system can be trained to perform as well as a human designer, but orders of magnitude faster.

Access the video of Dr. Dean’s presentation.

Access a companion article in the ISSCC 2020 Proceedings at IEEE Xplore.

A preprint of this article is also available at arXiv.org.

Several other plenary talks from ISSCC 2020 are available at the ISSCC website.

 

Wafer Scale Computer Processor

One of the awards given at the recent IEEE Honors Ceremony was the IEEE Spectrum Emerging Technology Award, presented to Cerebras Systems for its Wafer-Scale Engine (WSE). This is the world’s largest “chip”, taking up most of the area of a 30-cm silicon wafer. It was designed to improve the speed and performance of deep learning in artificial intelligence (AI), with 400,000 parallel cores and 1.2 trillion transistors.

View the video at IEEE.tv.

This technology was also featured in an article in the January 2020 issue of IEEE Spectrum.

 

The Future of Computing: Bits + Neurons + Qubits

Dr. Dario Gil, Director of Research, IBM

At the International Solid State Circuits Conference (ISSCC) in San Francisco, California, USA, in February 2020, Dr. Dario Gil of IBM gave an overview of how IBM sees the future of computing. He projects parallel advances in three technologies: conventional processors (“bits”), neural networks for AI (“neurons”), and quantum processors (“qubits”). Rather than any one of these technologies becoming dominant, he predicts major performance advances in all three, with heterogeneous systems incorporating two or more of them addressing critical problems across a computing environment that integrates cloud and edge. Near-term applications of quantum computing may lie in quantum simulations for materials development, while longer-term advances in AI may come from combining qubits with bits and neurons.

Access the video of Dr. Gil’s presentation.

Access a companion article in the ISSCC 2020 Proceedings.

A preprint of this article is also available.

Several other plenary talks from ISSCC 2020 are available.

 

The Electronics Research Initiative: Innovating the 4th Wave of Electronics Development
Dr. Mark Rosker, Director of Microsystems Technology Office, DARPA

At the DARPA Electronics Research Initiative Summit 2019 in Detroit, Michigan, USA, Dr. Rosker presented an overview of semiconductor technology development, and the role that the US government has played in coordinating and assisting this in the past, present, and future. A video of his talk is available.

Access the slides from his presentation (PDF, 3 MB).

His key point is that although the exponential improvement described by Moore’s Law is sometimes presented as a single development process spanning 50 years, it is more properly a sequence of several waves of development, each showing initial growth followed by saturation. Each wave has been associated with a set of innovations in materials, devices, circuits, and architectures. The DARPA Electronics Research Initiative is now promoting the 4th wave of semiconductor development, which includes innovations such as 3D heterogeneous integration, optimized AI chips, and designing for cybersecurity.

The program of the Summit is available and includes links to videos and slides of many of the other keynote presentations.

 

IRDS™ 2019 Highlights and Future Directions
Dr. Paolo Gargini, Chairman of IRDS™

IEEE Rebooting Computing Week was held in San Mateo, California, on 4-8 November 2019, and included a Workshop for the International Roadmap on Devices and Systems (IRDS™), the Industry Summit on the Future of Computing, the International Conference on Rebooting Computing (ICRC), and the Workshop on Quantum Computing. Videos of many of the presentations are available on IEEE.tv.

Dr. Gargini presented a brief overview of the past, present, and future of semiconductor roadmaps and IRDS™. Access the video on IEEE.tv.

While traditional 2D scaling is saturating, Dr. Gargini identified 3D power scaling, together with new circuit architectures, as the path forward for the period 2025-2040.

The most recent Roadmap is available on the IRDS™ website.

The new edition of the Roadmap is expected to be online by April.

 

Highlights from the Industry Summit on the Future of Computing
Bruce Kraemer, IEEE Industry Summit Chairman

The IEEE Industry Summit on the Future of Computing was held in San Mateo, California, on 4 November 2019, and consisted of a series of invited talks and panel presentations by leaders in the field. The slides for many of the presentations are linked from the Summit Program, and videos for many of the presentations are available on IEEE.tv, together with other presentations from IEEE Rebooting Computing Week.

Summit Chairman Bruce Kraemer presented a brief overview of the invited talks on quantum computing, AI, and memory-centric computing, as well as a panel on startups; the video is available on IEEE.tv.

Across all of these approaches to future computing, the technologies at this preliminary stage are remarkably diverse. For example, speakers on quantum computing presented superconducting, optical, and semiconducting solutions. Performance benchmarks that will permit these alternative technologies to be compared are still being developed. Despite concerns that Moore’s Law is ending, there was agreement that this is an exciting time for the computer industry.

 

AI Systems in a Hybrid World
Dr. Cindy Goldberg, Program Director, IBM Research AI Hardware Center

The IEEE Industry Summit on the Future of Computing was held in San Mateo, California, on 4 November 2019, and many of the invited presentations are available through IEEE.tv. One of the invited speakers was Dr. Cindy Goldberg of IBM Research, who presented an overview of IBM’s programs to develop improved hardware for deep learning on neural networks. While IBM also has a major program in quantum computing, it believes that future quantum and AI systems will address very different types of problems with very different types of hardware.

One of the key problems with present AI technology based on digital CMOS is that the power consumption for training and inference is quite large. While approximate-computing schemes reduce power consumption, new algorithms and architectures may be needed to maintain performance. Further improvement in performance per watt may require analog AI cores with improved memory materials, which IBM is also developing in pursuit of a new Moore’s Law of improving AI-system performance over the next decade.
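As a toy illustration of the analog idea (not IBM’s design), the sketch below stores weights as conductances and reads out dot products as summed currents, with added noise standing in for device variability; this variability is why such computation is inherently approximate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy analog crossbar: weights stored as conductances G, inputs applied as
# voltages V; each column current is I_j = sum_i V[i] * G[i, j], so the
# dot product emerges directly from Ohm's and Kirchhoff's laws.
G = np.abs(rng.normal(1.0, 0.2, size=(8, 4)))    # conductances (arbitrary units)
V = rng.uniform(0.0, 1.0, size=8)                # input voltages

ideal = V @ G                                    # exact dot products
noise = rng.normal(0.0, 0.05, size=G.shape)      # device variability (made up)
analog = V @ (G + noise)                         # what the array actually returns

print("relative error per column:", np.abs(analog - ideal) / np.abs(ideal))
```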

Looking to the future, IBM envisions hybrid cloud computing, comprising combinations of bits, neurons, and qubits, i.e., classical digital, classical analog, and quantum computing.

The entire 25-minute talk is available on IEEE.tv.