Technology Spotlight

 

Hardware-Software Co-Design for an Analog-Digital Accelerator for Machine Learning

At the recent IEEE International Conference on Rebooting Computing, held in Washington DC in November as part of IEEE Rebooting Computing Week, one of the presentations was by Dr. Dejan Milojicic of Hewlett Packard Labs. Dr. Milojicic is also a co-chair of the IEEE Rebooting Computing Initiative.

Dr. Milojicic spoke about an R&D project to demonstrate a prototype accelerator for machine learning, including both the hybrid analog-digital hardware and the entire software stack. This is a collaboration between Hewlett Packard Enterprise and academic researchers at the University of Illinois and Purdue University.

The video of Dr. Milojicic’s talk is available here. The published conference paper is available on IEEE Xplore here.

The core of the accelerator is a crossbar array of memristors, which are used for analog computation of matrix operations, with application to neural networks for machine learning. However, the system also includes key CMOS digital circuits closely integrated with the memristors. The talk emphasized that co-designing the software with the hardware is essential for developing applications. The software stack consists of an Open Neural Network Exchange (ONNX) converter, an application optimizer, a compiler, a driver, and emulators. While this system is not yet a commercial product, it is approaching the stage where it will become available for developing inference applications for machine learning.
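
To make the analog computation concrete, below is a minimal Python sketch of the matrix-vector multiplication a memristor crossbar performs: weights are stored as cell conductances, input activations are applied as voltages along the rows, and each column wire sums the resulting currents (Ohm's law plus Kirchhoff's current law). The conductance range, the shift-and-scale weight mapping, and the array sizes are illustrative assumptions for this sketch, not details of the HPE prototype; device noise and ADC quantization are ignored.

    import numpy as np

    G_MIN, G_MAX = 1e-6, 1e-4   # assumed device conductance range, in siemens

    def program_conductances(weights):
        """Map a real-valued weight matrix onto memristor conductances.

        A crossbar stores only positive, bounded conductances, so the weights
        are shifted and scaled into [G_MIN, G_MAX]; the transform is returned
        so the digital periphery can invert it after readout.
        """
        w_max = np.abs(weights).max()
        scale = (G_MAX - G_MIN) / (2.0 * w_max)
        conductances = G_MIN + scale * (weights + w_max)
        return conductances, scale, w_max

    def crossbar_matvec(conductances, scale, w_max, voltages):
        """Analog matrix-vector product: each column wire sums the per-cell
        currents I = G * V, then the digital periphery rescales the result."""
        column_currents = conductances.T @ voltages
        # Undo the shift-and-scale applied when programming the weights.
        offset = (G_MIN + scale * w_max) * voltages.sum()
        return (column_currents - offset) / scale

    # Toy check against an ordinary digital matrix-vector product.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 32))    # one layer's weights
    x = rng.standard_normal(64)          # input activations, applied as row voltages
    G, s, wm = program_conductances(W)
    print(np.allclose(crossbar_matvec(G, s, wm, x), W.T @ x))   # True (no noise modeled)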

Videos of other ICRC 2018 talks are available from IEEE.tv here.

The Proceedings of ICRC 2018 are available from IEEE Xplore here.

 

Reversible Computing for Energy Efficiency

At the recent IEEE International Conference on Rebooting Computing, held in Washington DC in November as part of IEEE Rebooting Computing Week, one of the invited talks was by Dr. Michael Frank of Sandia National Laboratory.

Dr. Frank spoke about “Reversible Computing as a Path Towards Unbounded Energy Efficiency”. The video of Dr. Frank’s talk is available here.

Reversible computing is an alternative paradigm for computing in which intermediate data are not discarded or overwritten during a computation, but instead are saved. It has long been known that reversible computing offers the possibility of several orders of magnitude reduction in energy dissipation, but the approach received little attention while Moore’s Law was still delivering steady gains. Now that Moore’s Law is ending, it deserves a second look. Realizing it will require the development of new devices, circuits, systems, and algorithms, but current research suggests that major improvements are possible with fairly modest investments in R&D. Some of this research has used low-dissipation technologies such as superconducting devices, but large improvements are possible even with more conventional CMOS devices.
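
The physical motivation comes from Landauer’s principle: erasing one bit of information dissipates at least kT ln 2 of energy (roughly 3 × 10⁻²¹ joules at room temperature), whereas a logically reversible operation has no such fundamental floor. The toy Python sketch below illustrates logical reversibility with the Toffoli (controlled-controlled-NOT) gate, which can compute AND without discarding its inputs and is its own inverse; it is only an illustration of the concept, not of the circuit techniques Dr. Frank discussed.

    def toffoli(a, b, c):
        """Toffoli (CCNOT) gate: flip the target bit c only when both controls are 1.
        The control bits pass through unchanged, so the inputs can always be
        recovered from the outputs; nothing is erased."""
        return a, b, c ^ (a & b)

    # Applying the gate twice returns the original bits for every 3-bit state,
    # so the mapping is a bijection: no two inputs collapse onto one output.
    for state in range(8):
        bits = ((state >> 2) & 1, (state >> 1) & 1, state & 1)
        assert toffoli(*toffoli(*bits)) == bits

    # With the target initialized to 0, the gate computes AND reversibly. An
    # ordinary AND gate maps four input states onto two outputs, and that lost
    # bit costs at least kT*ln(2) of dissipation (Landauer's bound).
    print(toffoli(1, 1, 0))   # (1, 1, 1)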

An earlier introductory article on Reversible Computing by Dr. Frank was presented in IEEE Spectrum in 2017 here.

Videos of other ICRC 2018 talks are available from IEEE.tv.

The Proceedings of ICRC 2018 are available from IEEE Xplore here.

 

The Era of AI Hardware

At the recent IEEE Industry Summit on the Future of Computing, held in Washington DC as part of IEEE Rebooting Computing Week, one of the keynote talks was by Dr. Mukesh Khare, Vice President of Semiconductor Research, IBM Research. A brief introductory video of IBM’s efforts in this field can be viewed here, and the video of Dr. Khare’s talk is available here.

The concurrent evolution of broad AI with purpose-built hardware will shift the traditional balances between cloud and edge, structured and unstructured data, and training and inference. Distributed deep learning approaches, coupled with heterogeneous system architectures, can effectively address the bandwidth, latency, and scalability requirements of complex AI models. Hardware purpose-built for AI holds the potential to unlock exponential gains in AI computation. IBM Research is making further strides in AI hardware through Digital AI Cores that use approximate computing, non-von Neumann approaches with Analog AI Cores, and the emergence of quantum computing for AI workloads.
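
As one illustration of what “approximate computing” in a digital AI core can mean, the hedged Python sketch below quantizes the operands of a matrix-vector product to 8-bit integers, performs the multiply-accumulates in cheap integer arithmetic, and compares the result with full precision. The quantization scheme and tensor sizes are assumptions chosen for the example, not a description of IBM’s hardware.

    import numpy as np

    def quantize_int8(x):
        """Symmetric linear quantization of a float tensor to 8-bit integers."""
        scale = np.abs(x).max() / 127.0
        return np.round(x / scale).astype(np.int8), scale

    # Toy layer: compare a full-precision matrix-vector product with its int8 approximation.
    rng = np.random.default_rng(1)
    W = rng.standard_normal((256, 128)).astype(np.float32)
    x = rng.standard_normal(256).astype(np.float32)

    Wq, sw = quantize_int8(W)
    xq, sx = quantize_int8(x)

    exact  = W.T @ x
    approx = (Wq.astype(np.int32).T @ xq.astype(np.int32)) * (sw * sx)  # integer MACs, one rescale

    rel_err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
    print(f"relative error of the int8 approximation: {rel_err:.3%}")   # typically well under 1%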

Videos of other Industry Summit 2018 invited talks are available from IEEE.tv.

 

Big Data Meets Big Compute

At the recent IEEE Industry Summit on the Future of Computing, held in Washington DC as part of IEEE Rebooting Computing Week, one of the keynote talks was by Alan Lee, Corporate Vice President, Head of Research, and Head of Deployed AI and Machine Learning Technologies at Advanced Micro Devices (AMD). The video of Mr. Lee’s talk is available here.

Mr. Lee spoke about “Big Data Meets Big Compute.” The volume of data being generated is rising exponentially, much faster than the growth in computing speed. Furthermore, the types of data are quite heterogeneous, as are the types of analysis, which will include artificial intelligence and machine learning (AI/ML). In order for data centers and supercomputers to handle this efficiently, they will need to incorporate a broad range of processors on the hardware level, as well as a complete range of algorithms and applications software. While custom solutions are most efficient in principle, the custom development effort is generally impractical. Mr. Lee recommended a modular approach at multiple levels in the stack. This could include chip-level modularity, whereby chiplets incorporating different processors (CPUs, GPUs, and FPGAs) and memory could be integrated in a semi-custom way on the same multi-chip module. Similarly, one could incorporate open-source software modules that could interface efficiently with the range of hardware. In this way, one can expect to obtain many of the benefits of custom design while minimizing some of the difficulties in programming and testing. The transition to this heterogeneous computing environment has already begun, and will likely continue for at least the next decade.
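
On the software side, the modular approach might look something like the hedged Python sketch below: application code calls a single dispatch interface, and whichever processor modules are present in a given semi-custom configuration are registered behind it. The class and function names here are purely illustrative and do not correspond to any AMD product or API.

    from typing import Callable, Dict

    class ModularRuntime:
        """Hypothetical registry mapping workload kinds to whichever processor
        module (CPU, GPU, or FPGA chiplet) is present in a given configuration."""

        def __init__(self):
            self._backends: Dict[str, Callable] = {}

        def register(self, kind: str, backend: Callable):
            """Plug in a backend for one kind of work (e.g. 'dense-matmul' -> GPU chiplet)."""
            self._backends[kind] = backend

        def run(self, kind: str, *args):
            """Dispatch to the best available module, falling back to a generic CPU path."""
            return self._backends.get(kind, self._backends["cpu-generic"])(*args)

    # Usage: the application only calls run(); the hardware mix behind it can change.
    rt = ModularRuntime()
    rt.register("cpu-generic", lambda a, b: [x + y for x, y in zip(a, b)])
    rt.register("dense-matmul", lambda a, b: sum(x * y for x, y in zip(a, b)))  # stand-in for a GPU kernel
    print(rt.run("dense-matmul", [1, 2, 3], [4, 5, 6]))   # 32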

Videos of other Industry Summit 2018 invited talks are available from IEEE.tv.