Technology Spotlight - 2019

 

IEEE ICRC Keynote: Developing our Quantum Future
Dr. Krysta Svore, General Manager of Quantum Software at Microsoft

At the IEEE International Conference on Rebooting Computing (ICRC) in San Mateo, California, on 6 November 2019, Dr. Svore presented the keynote address on Quantum Computing at Microsoft.

Microsoft has an extensive quantum computing program with multiple collaborations worldwide, spanning novel device development, system architectures, algorithms, and software. An overview is available at Microsoft, and a brief 2-minute video is available on YouTube.

On the software side, the program includes the Quantum Development Kit. Cloud access to several types of experimental qubits will soon be available through Azure Quantum, as described in a recent article from Wired.

Dr. Svore emphasized that useful applications of these prototype quantum computing systems are not expected in the near future, but that in the meantime, developers are encouraged to use the Microsoft tools to identify applications where quantum algorithms will have the greatest impact.

The entire Keynote address by Dr. Svore is available at the bottom of the ICRC 2019 Highlights page.

Read further background on Dr. Svore.

 

Digital Annealer Chip for Optimization Problems

“Quantum-inspired” computing using custom CMOS chip at room temperature

Considerable attention has been paid in recent years to a superconducting quantum computer specially designed for solving combinatorial optimization problems using “quantum annealing”. Quantum annealing is a variant of simulated annealing, a well-known method for solving similar optimization problems on a classical digital computer. However, a standard microprocessor is not configured to solve such problems efficiently, particularly when the data set becomes very large; a custom architecture with distributed memory and parallelism can be much faster.
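
To make the comparison concrete, below is a minimal sketch of classical simulated annealing applied to a small Ising-model optimization problem, the same family of problems targeted by annealing hardware. The problem instance, cooling schedule, and parameters are illustrative choices only, not details of any vendor's machine.

```python
import math
import random

def simulated_annealing(J, h, n, steps=20000, t_start=5.0, t_end=0.01):
    """Minimize the Ising energy E(s) = sum_{i<j} J[i][j]*s_i*s_j + sum_i h[i]*s_i
    over spins s_i in {-1, +1} using the Metropolis rule with geometric cooling."""
    spins = [random.choice([-1, 1]) for _ in range(n)]

    def energy(s):
        e = sum(h[i] * s[i] for i in range(n))
        e += sum(J[i][j] * s[i] * s[j]
                 for i in range(n) for j in range(i + 1, n))
        return e

    e = energy(spins)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # cooling schedule
        i = random.randrange(n)        # propose flipping a single spin
        spins[i] = -spins[i]
        e_new = energy(spins)          # hardware can evaluate flip costs in parallel
        # Metropolis rule: always accept downhill moves, uphill with prob exp(-dE/T).
        if e_new <= e or random.random() < math.exp(-(e_new - e) / t):
            e = e_new
        else:
            spins[i] = -spins[i]       # reject the move: undo the flip
    return spins, e

# Toy instance: a 4-spin antiferromagnetic ring, whose ground states are the
# two alternating configurations with energy -4.
n = 4
J = [[0.0] * n for _ in range(n)]
for i in range(n):
    a, b = sorted((i, (i + 1) % n))
    J[a][b] = 1.0
print(simulated_annealing(J, [0.0] * n, n))
```

The key point for custom hardware is that the flip-cost evaluation inside the loop is embarrassingly parallel across spins, which is exactly the kind of structure a chip like the DAU exploits.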

With that in mind, Fujitsu developed a “digital annealing unit” (DAU), a custom CMOS chip with an architecture designed to address large-scale optimization problems more efficiently. This has been called “quantum-inspired”, but it is really a standard CMOS chip similar to an FPGA. The first-generation digital annealer chip was introduced last year, and was described in IEEE Spectrum. The second-generation chip, for somewhat larger data sets, was introduced recently.

View a video introduction to the Fujitsu Digital Annealer and a video overview of applications for the digital annealer. Further information is available from Fujitsu.

In comparison, D-Wave Systems sells an alternative quantum annealer based on a superconducting chip cooled to near absolute zero (about -273 °C). The latest generation of this quantum annealer is described in Communications of the ACM. While a quantum annealer may in principle solve certain problems much faster than a classical annealer, this advantage has not yet been convincingly demonstrated in real systems.

Whether these systems are quantum or “quantum-inspired”, they provide novel processors for future Big Data problems.

 

Low-Power Image Recognition Challenge (LPIRC)

LPIRC has been held annually since 2015, with the aim of improving the energy efficiency of computer vision technology. This year a record 22 teams participated, submitting 234 solutions, most of which significantly outperformed the 2018 entries. The challenge had two tracks: object detection and image classification.

The 2019 LPIRC Workshop was held in Long Beach, California, as part of the Computer Vision and Pattern Recognition Conference (CVPR 2019). View the program for this LPIRC Workshop.

The workshop included presentations by previous winners, plus invited speakers from Google, Xilinx, UC Berkeley, MIT, Qualcomm, and Arizona State University.

Videos of the presentations are available via IEEE.tv. Watch the conference overview.

The other presentations are linked via the program.

Another LPIRC Workshop is being held as part of the International Conference on Computer Vision (ICCV) in Seoul, Korea, 28 October 2019. View the program.

 

Delivering the Future of High-Performance Computing

Dr. Lisa Su, President and CEO of Advanced Micro Devices

At the recent DARPA Electronics Resurgence Initiative Summit, a keynote talk was given by Dr. Lisa Su of AMD, focusing on how the semiconductor industry is meeting the growing demands of future high-performance computing as Moore’s Law is slowing down.

Dr. Su explained that although Moore’s Law scaling to 5 nm continues, the pace of that progress is slowing. Performance nonetheless continues to improve at a rapid rate, due to a combination of three factors:

  • Microarchitecture on chip
  • Multi-chip packaging of chiplets
  • Integration of heterogeneous processors

Optimum system performance requires co-design of silicon chips, system architecture, and software. She presented the example of the exascale computer system being developed at Oak Ridge National Lab, which should achieve 1.5 exaflops by 2021. This is a partnership of AMD and Cray, as described further in this HPCwire article.

While the highest-performance chips and systems will initially be limited to the most expensive machines, it is expected that similar technology will become available within a few years in data centers, edge computers, and even mobile devices.

Watch the video presentation by Dr. Su.
Other videos from the Summit are available on the DARPAtv YouTube Channel.

 

Overview of IRDS™ Chapter on Beyond CMOS and Emerging Research Materials

The International Nanodevices and Computing Conference (INC 2019) was recently held in Grenoble, France, 2-5 April 2019. This was co-sponsored by IRDS™, IEEE Rebooting Computing, and the European SiNANO Institute. See the conference program on the INC 2019 website.

INC 2019 included outbriefs of the key results from the new IRDS™ Roadmap, which is now available online. An overview of the latest IRDS™ report is available on HPCwire. An overview of the Beyond CMOS chapter is available on the IRDS™ website.

The talks from INC 2019 were recorded and are available on IEEE.tv.

Dr. Shamik Das of the Mitre Corporation presented an overview of the Beyond CMOS chapter of the IRDS™ Roadmap. The objective of Beyond CMOS is to promote future devices that can support the application drivers of future computation requirements, including exascale, Big Data, IoT, and AI. Dr. Das focused on technology updates in emerging memory devices and in logic and information processing devices, then closed with a summary of device-architecture-system interactions. Novel analog and optical components, as well as novel materials, will need to be integrated closely with silicon.

The video presentation by Dr. Das is available on IEEE.tv.

 

Overview of IRDS™ Chapter on Cryogenic Electronics and Quantum Information Processing

First roadmap report on new technologies of Cryogenic Semiconductors, Superconducting Electronics, and Quantum Computing

The International Nanodevices and Computing Conference (INC 2019) was recently held in Grenoble, France, 2-5 April 2019. This was co-sponsored by IRDS™, IEEE Rebooting Computing, and the European SiNANO Institute. See the conference program on the INC 2019 website.

This included outbriefs of the chapters of the new 2018 International Roadmap on Devices and Systems™, which are now available online. Note that these chapters are accessible to participants of the IEEE IRDS™ Technical Community, which is free to join.

Several of the talks from the INC 2019 were recorded and are available on IEEE.tv.

These included an overview talk by Dr. D. Scott Holmes of DARPA, chair of the International Focus Team (IFT) that prepared the chapter on Cryogenic Electronics and Quantum Information Processing (CE & QIP). These devices and systems presently have limited practical applications, and were previously covered in IRDS™ reports only under Emerging Research Devices. Because some of these technologies are developing rapidly, this year, for the first time, there is a separate IFT and roadmap chapter on CE & QIP. While not all QIP technologies are cryogenic, many are based on superconducting circuits that operate at deep cryogenic temperatures below 1 K, so the combination of CE and QIP is a natural fit.

Roadmaps are being developed for superconducting electronics, while practical QIP remains at an early stage.

The video presentation by Dr. Holmes is available on IEEE.tv.

 

Generating Stochastic Bits using Tunable Quantum Systems

Nanoscale quantum dots can generate a time series of truly random bits for stochastic computing.

The International Nanodevices and Computing Conference (INC 2019) was recently held in Grenoble, France, 2-5 April 2019. This was co-sponsored by IRDS™, IEEE Rebooting Computing, and the European SiNANO Institute. See the conference program on the INC 2019 website.

Several of the talks from the INC 2019 were recorded and are available on IEEE.tv.

These included a talk by Prof. Erik Blair of Baylor University, Waco, Texas. Dr. Blair spoke about stochastic computing, which requires a time series of uncorrelated random bits. While stochastic bits can be generated using a classical pseudo-random number generator, he proposed instead exploiting the quantum properties of a two-level system comprising two nanometer-scale quantum dots. Such quantum dots can be made lithographically; alternatively, the quantum properties of certain molecules can be used. Dr. Blair proposed that this nanoscale Quantum Stochastic Number Generator could be integrated with nanoscale CMOS or Quantum Cellular Automata circuits to implement a nanoscale stochastic computer.
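
As a rough illustration of why stochastic computing depends on uncorrelated bits, the sketch below multiplies two numbers encoded as bitstream densities by ANDing the streams. Python's pseudo-random generator stands in here for the quantum-dot source that Dr. Blair proposed, and the stream length is an arbitrary choice.

```python
import random

def bitstream(p, length, rng):
    """A stochastic bitstream: each bit is 1 with probability p, independently."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def sc_multiply(p_a, p_b, length=100_000, seed=None):
    """Stochastic multiplication: if the two streams are uncorrelated,
    AND-ing them yields a stream whose 1-density is p_a * p_b."""
    rng = random.Random(seed)
    a = bitstream(p_a, length, rng)
    b = bitstream(p_b, length, rng)
    product = [x & y for x, y in zip(a, b)]
    return sum(product) / length

print(sc_multiply(0.5, 0.8))   # ~0.40

# Correlation breaks the arithmetic: AND-ing a stream with itself
# gives p, not p*p.
rng = random.Random(0)
a = bitstream(0.5, 100_000, rng)
same = [x & y for x, y in zip(a, a)]
print(sum(same) / len(same))   # ~0.50, not 0.25
```

The second example shows the failure mode: once streams are correlated, the AND no longer computes a product, which is why a supply of truly independent random bits is the critical resource.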

The video presentation by Prof. Blair is available here.

 

The Computing Landscape of the 21st Century

The 4 tiers of future computing

At the recent International Workshop on Mobile Computing Systems and Applications (HotMobile 2019), held in Santa Cruz, California in February, Carnegie Mellon Prof. Mahadev Satyanarayanan gave a keynote presentation on how future computing will be organized.

He emphasized that although computing technology is changing, the computing landscape is likely to be organized around 4 tiers, each with its own characteristic scale and power budget. The top tier comprises cloud computing in data centers, followed by a second tier of edge computers linked to the network. The third tier comprises small mobile devices, including the Internet of Things (IoT), powered by batteries. The final tier comprises networks of sensors, either passive or powered by energy harvesting.

The video of Prof. Satyanarayanan’s talk is available here.

The published conference paper is available here.

Videos of other HotMobile talks are available here.

The Proceedings of HotMobile 2019 are available here.

 

Hardware-Software Co-Design for Analog-Digital Accelerator for Machine Learning

At the recent IEEE International Conference on Rebooting Computing, held in Washington DC in November as part of IEEE Rebooting Computing Week, one of the presentations was by Dr. Dejan Milojicic of Hewlett Packard Labs. Dr. Milojicic is also a co-chair of the IEEE Rebooting Computing Initiative.

Dr. Milojicic spoke about an R&D project to demonstrate a prototype accelerator for machine learning, including both the hybrid analog-digital hardware and the entire software stack. This is a collaboration between Hewlett Packard Enterprise and academic researchers at the University of Illinois and Purdue.

The video of Dr. Milojicic’s talk is available here. The published conference paper is available on IEEE Xplore here.

The core of the accelerator is a crossbar array of memristors, which are used for analog computation of matrix operations, with application to neural networks for machine learning. However, the system also includes key CMOS digital circuits closely integrated with the memristors. The talk emphasized how software co-design with the hardware is essential for developing applications. The software stack consists of an Open Neural Network Exchange (ONNX) converter, an application optimizer, a compiler, a driver, and emulators. While this system is not yet a commercial product, it is approaching the stage where it will become available for developing inference applications for machine learning.
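
The following sketch illustrates the core analog idea as a toy model, not the HPE design: weights are stored as crossbar conductances, row voltages drive the array, Kirchhoff's current law sums the column currents in a single step, and an ADC digitizes the result. The array size, noise level, and ADC resolution below are invented for illustration.

```python
import numpy as np

def crossbar_matvec(weights, v, g_max=1.0, noise_std=0.01, adc_bits=8, rng=None):
    """Idealized memristor-crossbar matrix-vector multiply.

    Weights are stored as conductances; applying voltages v to the rows
    produces column currents i = G.T @ v (Kirchhoff's current law), which
    an ADC then digitizes. Real designs also need differential pairs for
    signed weights, DACs, and calibration; all omitted here."""
    rng = rng or np.random.default_rng()
    # Map weights onto the available conductance range.
    scale = g_max / np.abs(weights).max()
    G = weights * scale
    # Analog summation of currents along each column, plus device noise.
    i = G.T @ v
    i = i + rng.normal(0.0, noise_std * np.abs(i).max(), size=i.shape)
    # ADC: quantize the analog currents back to digital values.
    levels = 2 ** adc_bits
    i_max = np.abs(i).max()
    q = np.round(i / i_max * (levels / 2 - 1)) / (levels / 2 - 1) * i_max
    return q / scale

rng = np.random.default_rng(42)
W = rng.normal(size=(4, 3))      # 4 inputs x 3 outputs
v = rng.normal(size=4)
print("analog :", crossbar_matvec(W, v, rng=rng))
print("digital:", W.T @ v)
```

Running the sketch shows the analog result tracking the exact digital product to within the noise and quantization error, which is why the closely integrated digital circuits and software-level calibration emphasized in the talk matter so much.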

Videos of other ICRC 2018 talks are available from IEEE.tv here.

The Proceedings of ICRC 2018 are available from IEEE Xplore here.

 

Reversible Computing for Energy Efficiency

At the recent IEEE International Conference on Rebooting Computing, held in Washington DC in November as part of IEEE Rebooting Computing Week, one of the invited talks was by Dr. Michael Frank of Sandia National Laboratory.

Dr. Frank spoke about “Reversible Computing as a Path Towards Unbounded Energy Efficiency”. The video of Dr. Frank’s talk is available here.

Reversible computing is an alternative computing paradigm in which intermediate data are not discarded or overwritten during a computation, but instead are saved. It has long been known that reversible computing offers the possibility of reducing energy dissipation by several orders of magnitude, but the approach received little attention while Moore’s Law was still delivering steady gains. Now that Moore’s Law is ending, it deserves a second look. Realizing it will require new devices, circuits, systems, and algorithms, but current research suggests that major improvements are possible with fairly modest investments in R&D. Some of this research has used low-dissipation technologies such as superconducting devices, but substantial improvements are possible even with more conventional CMOS devices.
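
For a sense of scale, the floor that reversible computing aims to get under is Landauer's bound of kT·ln 2 of dissipation per irreversibly erased bit. The short calculation below is standard thermodynamics rather than material from Dr. Frank's talk, and the ~1 fJ CMOS figure is a rough illustrative number, not a measurement.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact in SI units)

def landauer_bound(temperature_k):
    """Minimum dissipation per irreversibly erased bit: k_B * T * ln 2.
    Reversible computing avoids this floor by never erasing bits outright."""
    return k_B * temperature_k * math.log(2)

e_room = landauer_bound(300.0)   # ~2.9e-21 J per bit at room temperature
e_cmos = 1e-15                   # rough illustrative CMOS energy per operation, J

print(f"Landauer bound at 300 K: {e_room:.2e} J/bit")
print(f"Illustrative CMOS op   : {e_cmos:.1e} J")
print(f"Headroom               : {e_cmos / e_room:.1e}x")
```

Even granting that practical logic cannot reach the bound, the five-plus orders of magnitude of headroom in this comparison is the motivation behind the "several orders of magnitude" claim above.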

An earlier introductory article on reversible computing by Dr. Frank was published in IEEE Spectrum in 2017; it is available here.

Videos of other ICRC 2018 talks are available from IEEE.tv.

The Proceedings of ICRC 2018 are available from IEEE Xplore here.

 

The Era of AI Hardware

At the recent IEEE Industry Summit on the Future of Computing, held in Washington DC as part of IEEE Rebooting Computing Week, one of the keynote talks was by Dr. Mukesh Khare, Vice President of Semiconductor Research, IBM Research. A brief introductory video of IBM’s efforts in this field is available here. The video of Dr. Khare’s talk is available here.

The concurrent evolution of broad AI with purpose-built hardware will shift the traditional balances between cloud and edge, structured and unstructured data, and training and inference. Distributed deep learning approaches, coupled with heterogeneous system architectures, can effectively address the bandwidth, latency, and scalability requirements of complex AI models. Hardware purpose-built for AI holds the potential to unlock exponential gains in AI computation. IBM Research is making further strides in AI hardware through Digital AI Cores using approximate computing, non-von Neumann approaches with Analog AI Cores, and the emerging use of quantum computing for AI workloads.
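
As one concrete flavor of the approximate computing mentioned above, reduced-precision arithmetic trades a small accuracy loss for large energy and area savings. The sketch below is a generic illustration, not IBM's design: it quantizes a dot product to 8-bit integers, accumulates in 32-bit, and compares against full precision.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric 8-bit quantization: map floats onto integers in [-127, 127]."""
    scale = np.abs(x).max() / 127.0
    return np.round(x / scale).astype(np.int8), scale

rng = np.random.default_rng(0)
a = rng.normal(size=1024).astype(np.float32)
b = (0.5 * a + 0.5 * rng.normal(size=1024)).astype(np.float32)  # correlated with a

qa, sa = quantize_int8(a)
qb, sb = quantize_int8(b)

# Accumulate in int32, as low-precision AI cores do, then rescale to float.
approx = int(qa.astype(np.int32) @ qb.astype(np.int32)) * sa * sb
exact = float(a @ b)
print(f"exact : {exact:.3f}")
print(f"int8  : {approx:.3f} (relative error {abs(approx - exact) / abs(exact):.2%})")
```

For inference workloads, errors of this size are usually invisible in final model accuracy, while 8-bit multiply-accumulate units are far cheaper in energy and silicon area than floating-point ones.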

Videos of other Industry Summit 2018 invited talks are available from IEEE.tv.

 

Big Data Meets Big Compute

At the recent IEEE Industry Summit on the Future of Computing, held in Washington DC as part of IEEE Rebooting Computing Week, one of the keynote talks was by Alan Lee, Corporate Vice President, Head of Research, and Head of Deployed AI and Machine Learning Technologies at Advanced Micro Devices (AMD). The video of Mr. Lee’s talk is available here.

Mr. Lee spoke about “Big Data Meets Big Compute.” The volume of data being generated is rising exponentially, much faster than the growth in computing speed. Furthermore, the types of data are quite heterogeneous, as are the types of analysis, which will include artificial intelligence and machine learning (AI/ML). In order for data centers and supercomputers to handle this efficiently, they will need to incorporate a broad range of processors on the hardware level, as well as a complete range of algorithms and applications software.

While custom solutions are most efficient in principle, the custom development effort is generally impractical. Mr. Lee recommended a modular approach at multiple levels in the stack. This could include chip-level modularity, whereby chiplets incorporating different processors (CPUs, GPUs, and FPGAs) and memory could be integrated in a semi-custom way on the same multi-chip module. Similarly, one could incorporate open-source software modules that could interface efficiently with the range of hardware. In this way, one can expect to obtain many of the benefits of custom design while minimizing some of the difficulties in programming and testing.

The transition to this heterogeneous computing environment has already begun, and will likely continue for at least the next decade.

Videos of other Industry Summit 2018 invited talks are available from IEEE.tv.