Technology Spotlight

 

Overview of IRDS™ Chapter on Beyond CMOS and Emerging Research Materials

The International Nanodevices and Computing Conference (INC 2019) was recently held in Grenoble, France, 2-5 April 2019. This was co-sponsored by IRDS™, IEEE Rebooting Computing, and the European SiNANO Institute. See the conference program on the INC 2019 website.

INC 2019 included outbriefs of the key results from the new IRDS™ Roadmap, which is now available online. An overview of the latest IRDS™ report is available on HPCwire. An overview of the Beyond CMOS chapter is available on the IRDS™ website.

The talks from INC 2019 were recorded and are available on IEEE.tv.

Dr. Shamik Das of the MITRE Corporation presented an overview of the Beyond CMOS chapter of the IRDS™ Roadmap. The objective of Beyond CMOS is to identify and promote future devices that can support the application drivers of future computation, including exascale, Big Data, IoT, and AI. Dr. Das focused on technology updates in emerging memory devices and in logic and information processing devices, then closed with a summary of device-architecture-system interactions. Novel analog and optical components, as well as novel materials, will need to be integrated closely with silicon.

The video presentation by Dr. Das is available on IEEE.tv.

 

Overview of IRDS™ Chapter on Cryogenic Electronics and Quantum Information Processing

First roadmap report on new technologies of Cryogenic Semiconductors, Superconducting Electronics, and Quantum Computing

The International Nanodevices and Computing Conference (INC 2019) was recently held in Grenoble, France, 2-5 April 2019. This was co-sponsored by IRDS™, IEEE Rebooting Computing, and the European SiNANO Institute. See the conference program on the INC 2019 website.

This included outbriefs of the chapters of the new 2018 International Roadmap on Devices and Systems™, which are now available online. Note that these chapters are accessible to participants of the IEEE IRDS™ Technical Community, which is free to join.

Several of the talks from INC 2019 were recorded and are available on IEEE.tv.

These included an overview talk by Dr. D. Scott Holmes of DARPA, who chairs the International Focus Team (IFT) that prepared the chapter on Cryogenic Electronics and Quantum Information Processing (CE & QIP). These devices and systems presently have limited practical applications, and have previously been included in IRDS™ reports only under Emerging Research Devices. Some of these technologies are developing rapidly, so this year, for the first time, there is a separate IFT and roadmap chapter on CE & QIP. Note that while not all QIP technologies are cryogenic, many are based on superconducting circuits that operate at deep cryogenic temperatures below 1 K, so combining CE and QIP is a natural fit.

Roadmaps are being developed for superconducting electronics, while practical QIP is still early-stage.

The video presentation by Dr. Holmes is available on IEEE.tv.

 

Generating Stochastic Bits using Tunable Quantum Systems

Nanoscale quantum dots can generate a time series of truly random bits for stochastic computing.

The International Nanodevices and Computing Conference (INC 2019) was recently held in Grenoble, France, 2-5 April 2019. This was co-sponsored by IRDS™, IEEE Rebooting Computing, and the European SiNANO Institute. See here for the conference program.

Several of the talks from INC 2019 were recorded and are available on IEEE.tv.

These included a talk by Prof. Erik Blair of Baylor University, Waco, Texas. Dr. Blair spoke about stochastic computing, which requires a time series of uncorrelated random bits. While stochastic bits can be generated using a classical pseudo-random number generator, he proposed instead using the quantum properties of a two-level quantum system comprising two nanometer-scale quantum dots. These quantum dots can be made lithographically, or alternatively the quantum properties of certain molecules can be used. Dr. Blair proposed that this nanoscale Quantum Stochastic Number Generator could be integrated with nanoscale CMOS or Quantum Cellular Automata circuits to implement a nanoscale stochastic computer.
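As a rough illustration of the stochastic-computing principle behind this proposal (not of the quantum-dot device itself), the sketch below uses Python's pseudo-random generator as a stand-in entropy source: a probability is encoded as the fraction of 1s in a bitstream, and ANDing two independent streams multiplies their probabilities. The function names are illustrative, not from the talk.

```python
import random

def bitstream(p, n, rng):
    """Generate n stochastic bits, each 1 with probability p.
    (A stand-in for the quantum entropy source described in the talk.)"""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(p_a, p_b, n=100_000, seed=1):
    """Stochastic multiplication: AND two independent bitstreams.
    The fraction of 1s in the result approximates p_a * p_b."""
    rng = random.Random(seed)
    a = bitstream(p_a, n, rng)
    b = bitstream(p_b, n, rng)
    c = [x & y for x, y in zip(a, b)]
    return sum(c) / n

print(sc_multiply(0.5, 0.6))  # close to 0.5 * 0.6 = 0.30
```

The accuracy of the product improves with stream length, which is why a fast source of truly uncorrelated bits, such as the proposed quantum-dot generator, matters for this style of computing.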

The video presentation by Prof. Blair is available here.

 

The Computing Landscape of the 21st Century

The 4 tiers of future computing

At the recent International Workshop on Mobile Computing Systems and Applications (HotMobile 2019), held in Santa Cruz, California, in February, Carnegie Mellon Prof. Mahadev Satyanarayanan gave a keynote presentation on how future computing will be organized.

He emphasized that although computing technology is changing, the computing landscape is likely to be organized around 4 tiers, each with its own characteristic scale and power budget. The top tier comprises cloud computing in data centers, followed by a second tier of edge computers linked to the network. The third tier comprises small mobile devices, including the Internet of Things (IoT), powered by batteries. The final tier comprises networks of sensors, either passive or powered by energy harvesting.
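The tier structure above can be captured in a small data sketch. The power entries for the top two tiers are an assumption (the talk as summarized specifies battery power and energy harvesting only for the lower tiers), and the field names are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tier:
    name: str
    examples: str
    power: str  # characteristic power source for the tier

# The four-tier landscape from the keynote; "grid power" for the top
# two tiers is an assumption, not stated in the summary above.
TIERS = [
    Tier("Tier 1", "cloud data centers", "grid power"),
    Tier("Tier 2", "network-connected edge computers", "grid power"),
    Tier("Tier 3", "mobile devices and IoT", "battery"),
    Tier("Tier 4", "sensor networks", "passive or energy harvesting"),
]

for t in TIERS:
    print(f"{t.name}: {t.examples} ({t.power})")
```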

The video of Prof. Satyanarayanan’s talk is available here.

The published conference paper is available here.

Videos of other HotMobile talks are available here.

The Proceedings of HotMobile 2019 are available here.

 

Hardware-Software Co-Design for Analog-Digital Accelerator for Machine Learning

At the recent IEEE International Conference on Rebooting Computing, held in Washington, DC in November as part of IEEE Rebooting Computing Week, one of the presentations was by Dr. Dejan Milojicic of Hewlett Packard Labs. Dr. Milojicic is also a co-chair of the IEEE Rebooting Computing Initiative.

Dr. Milojicic spoke about an R&D project to demonstrate a prototype accelerator for machine learning, including both the hybrid analog-digital hardware and the entire software stack. This is a collaboration between Hewlett Packard Enterprise and academic researchers at the University of Illinois and Purdue.

The video of Dr. Milojicic’s talk is available here. The published conference paper is available on IEEE Xplore here.

The core of the accelerator is a crossbar array of memristors, which are used for analog computation of matrix operations, with application to neural networks for machine learning. However, the system also includes key CMOS digital circuits closely integrated with the memristors. The talk emphasized how software co-design with the hardware is essential for developing applications. The software stack consists of an Open Neural Network Exchange converter (ONNX), an application optimizer, a compiler, a driver, and emulators. While this system is not yet a commercial product, it is approaching the stage where it will become available for developing inference applications for machine learning.
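The analog matrix operation at the core of such a crossbar can be illustrated with an idealized model (ignoring wire resistance, device noise, and nonlinearity; the conductance values below are invented for illustration). Each memristor stores a matrix weight as a conductance; applying input voltages to one set of lines, Kirchhoff's current law sums the per-device currents on each output line, so a matrix-vector product is computed in a single analog step.

```python
import numpy as np

# Idealized crossbar: each cross-point memristor stores a conductance
# G[i, j] (siemens); values here are purely illustrative.
G = np.array([[0.2, 0.5, 0.1],
              [0.4, 0.3, 0.6]])

v = np.array([1.0, 0.5, 0.25])   # input voltages applied to the columns

# Ohm's law gives a current G[i, j] * v[j] through each device, and
# Kirchhoff's current law sums them along each row, so the row
# currents realize i = G @ v in one step.
i = G @ v
print(i)  # [0.475 0.7  ]
```

In a real accelerator these analog currents would be digitized and handed off to the closely integrated CMOS circuits mentioned above; this sketch shows only the mathematical role of the crossbar.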

Videos of other ICRC 2018 talks are available from IEEE.tv here.

The Proceedings of ICRC 2018 are available from IEEE Xplore here.

 

Reversible Computing for Energy Efficiency

At the recent IEEE International Conference on Rebooting Computing, held in Washington, DC in November as part of IEEE Rebooting Computing Week, one of the invited talks was by Dr. Michael Frank of Sandia National Laboratory.

Dr. Frank spoke about “Reversible Computing as a Path Towards Unbounded Energy Efficiency”. The video of Dr. Frank’s talk is available here.

Reversible computing is an alternative paradigm for computing, whereby intermediate data are not discarded or overwritten during a computation, but instead are saved. It has long been known that reversible computing offers the possibility of several orders of magnitude reduction in energy dissipation, but little attention was paid while Moore’s Law was active. Now that Moore’s Law is ending, this novel approach deserves a second look. This will require development of new devices, circuits, systems, and algorithms, but current research suggests that major improvements are possible with fairly modest investments in R&D. Some of this research has used low-dissipation technologies such as superconducting devices, but great improvements are possible even using more conventional CMOS devices.
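A minimal sketch of the reversibility principle (not of Dr. Frank's circuit-level proposals): the Toffoli gate is universal for classical logic yet bijective, so no input information is ever destroyed, which is what makes it possible in principle to evade the Landauer limit on erasure energy.

```python
def toffoli(a, b, c):
    """Toffoli (CCNOT) gate: flips c iff a and b are both 1.
    Universal for classical logic and fully reversible."""
    return a, b, c ^ (a & b)

# Reversibility: the gate is its own inverse, so applying it twice
# recovers the input on all 8 possible bit triples.
for bits in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    assert toffoli(*toffoli(*bits)) == bits

# AND embedded reversibly: with c = 0, the third output is a AND b,
# while the inputs a and b are retained rather than discarded.
print(toffoli(1, 1, 0))  # (1, 1, 1)
```

An irreversible AND gate maps four input pairs onto two outputs and must dissipate the lost information as heat; the reversible embedding above keeps every intermediate value, which is the saving-rather-than-discarding strategy described in the talk.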

An earlier introductory article on Reversible Computing by Dr. Frank was published in IEEE Spectrum in 2017 here.

Videos of other ICRC 2018 talks are available from IEEE.tv.

The Proceedings of ICRC 2018 are available from IEEE Xplore here.

 

The Era of AI Hardware

At the recent IEEE Industry Summit on the Future of Computing, held in Washington, DC as part of IEEE Rebooting Computing Week, one of the keynote talks was by Dr. Mukesh Khare, Vice President of Semiconductor Research, IBM Research. A brief introductory video of IBM’s efforts in this field is available here. Dr. Khare’s talk is available here.

The concurrent evolution of broad AI and purpose-built hardware will shift traditional balances between cloud and edge, structured and unstructured data, and training and inference. Distributed deep learning approaches, coupled with heterogeneous system architectures, effectively address the bandwidth, latency, and scalability requirements of complex AI models. Hardware purpose-built for AI holds the potential to unlock exponential gains in AI computation. IBM Research is making further strides in AI hardware through Digital AI Cores using approximate computing, non-von Neumann approaches with Analog AI Cores, and the emergence of quantum computing for AI workloads.

Videos of other Industry Summit 2018 invited talks are available from IEEE.tv.

 

Big Data Meets Big Compute

At the recent IEEE Industry Summit on the Future of Computing, held in Washington, DC as part of IEEE Rebooting Computing Week, one of the keynote talks was by Alan Lee, Corporate Vice President, Head of Research, and Head of Deployed AI and Machine Learning Technologies at Advanced Micro Devices (AMD). The video of Mr. Lee’s talk is available here.

Mr. Lee spoke about “Big Data Meets Big Compute.” The volume of data being generated is rising exponentially, much faster than the growth in computing speed. Furthermore, the types of data are quite heterogeneous, as are the types of analysis, which will include artificial intelligence and machine learning (AI/ML). In order for data centers and supercomputers to handle this efficiently, they will need to incorporate a broad range of processors on the hardware level, as well as a complete range of algorithms and applications software. While custom solutions are most efficient in principle, the custom development effort is generally impractical. Mr. Lee recommended a modular approach at multiple levels in the stack. This could include chip-level modularity, whereby chiplets incorporating different processors (CPUs, GPUs, and FPGAs) and memory could be integrated in a semi-custom way on the same multi-chip module. Similarly, one could incorporate open-source software modules that could interface efficiently with the range of hardware. In this way, one can expect to obtain many of the benefits of custom design while minimizing some of the difficulties in programming and testing. The transition to this heterogeneous computing environment has already begun, and will likely continue for at least the next decade.

Videos of other Industry Summit 2018 invited talks are available from IEEE.tv.