Feature Articles - 2019

 

Thermodynamic Computing Workshop Report

Thermodynamic Computing is a new concept for future energy-efficient computing, inspired by thermodynamics and biological self-organizing systems.

A Visioning Workshop on Thermodynamic Computing was held in January 2019, in Honolulu, Hawaii, sponsored by the Computing Community Consortium (CCC).

The report on this workshop is now available at the Computing Research Association (PDF, 3 MB).

An overview of the report is available at the CCC Blog.

The architecture of a thermodynamic computer is still being envisioned. The workshop brought together researchers from electrical and computer engineering, physics, and biology. Thermodynamic computing is related to other approaches such as reversible and stochastic computing, but also envisions “intelligent” systems that are self-organizing and self-programming. It may also represent a class of computers intermediate between classical von Neumann computers and the quantum computers currently in the research stage. The report recommended continued research into the theory, devices, and architectures of thermodynamic computers.

Two related podcasts on the subject of thermodynamic computing, with some of the workshop organizers, are available at SoundCloud. Listen to Part 1 and Part 2.

 

IRDS™ Roadmap Executive Summary

The International Roadmap for Devices and Systems™ issued its latest roadmap report, projecting the next 15-20 years of electronic devices, circuits, and systems.

Moore’s Law scaling has been based on packing more transistors onto a chip. While traditional 2D scaling is saturating, 3D scaling with vertical transistors should enable continued increases in transistor density and system performance for another 20 years. IRDS™ projects how this scaling will continue across the international electronics and computer industry, and what it implies for applications such as IoT, mobile, cloud, 5G, medical, and automotive systems. A new focus this year is Cryogenic Electronics and Quantum Information Processing; next year, a new focus will be artificial intelligence and machine learning (AI/ML).

IRDS™ is sponsored by IEEE Rebooting Computing and the IEEE Standards Association, as well as the SINANO Institute in Europe, the System Device Roadmap Committee of Japan (SDRJ), and the International Electronics Manufacturing Initiative (iNEMI).

Access the IRDS™ Roadmap. The complete report has many chapters, but an overview is available via the Executive Summary. Note that while the Roadmap documents are available free of charge, users must register for the IRDS™ Technical Community.

 

Stochastic Magnetic Circuits Rival Quantum Computing

Circuits based on the stochastic evolution of nanoscale magnets have been used to split large numbers into prime-number factors — a problem that only quantum computers were previously expected to solve efficiently.

One of the driving forces behind the development of quantum computers has been Shor’s algorithm, which could factor large integers efficiently and thereby undermine widely used encryption protocols. Current quantum computers cannot achieve this due to noise, and classical digital computers are far too slow. In contrast, Dr. Dmitri Nikonov of Intel presented an overview of new research from Purdue and Tohoku Universities that points toward practical factorization of large integers in the near future, without quantum computing.

This work is based on neither classical bits nor quantum bits (known as qubits), but on a distinct third foundation known as probabilistic bits or p-bits. Each p-bit fluctuates rapidly in real time between two configurations with a known probability, thus enabling probabilistic or stochastic computing.

The specific implementation is an integrated circuit of 8 nanomagnets, each fluctuating between an up and down state. Operating at room temperature, these are magnetic tunnel junctions similar to devices already being used for MRAM. The authors demonstrate an experimental circuit that can factor integers up to 945, and suggest that scaling to larger circuits that can factor much larger numbers is possible in the near term.
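The circuit details are in the paper, but the underlying strategy, recasting factorization as an energy-minimization problem explored by randomly fluctuating bits, can be imitated in ordinary software. In the Python sketch below, the cost function |N - x*y|, the bit widths, and the annealing schedule are illustrative choices of this sketch, not the authors’ implementation; the fluctuating bits merely stand in for the nanomagnets.

```python
import math
import random

def factor_stochastic(N, n_bits=6, steps=100_000, seed=1):
    """Toy stochastic search for factors of N.

    Two candidate factors x and y are encoded in 2 * n_bits randomly
    fluctuating bits (a software stand-in for the p-bits).  A proposed
    bit flip is kept when it lowers the mismatch |N - x*y|; uphill
    flips are accepted with a Boltzmann probability that shrinks as
    the effective temperature T is annealed toward zero.
    """
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(2 * n_bits)]

    def decode(b):
        x = sum(bit << i for i, bit in enumerate(b[:n_bits]))
        y = sum(bit << i for i, bit in enumerate(b[n_bits:]))
        return x, y

    def energy(b):
        x, y = decode(b)
        return abs(N - x * y)

    e = energy(bits)
    for step in range(steps):
        if e == 0:                       # x * y == N: factors found
            return decode(bits)
        T = max(1.0, 500.0 * (1.0 - step / steps))   # cooling schedule
        i = rng.randrange(len(bits))
        bits[i] ^= 1                     # propose one random bit flip
        e_new = energy(bits)
        if e_new <= e or rng.random() < math.exp((e - e_new) / T):
            e = e_new                    # accept the flip
        else:
            bits[i] ^= 1                 # reject: undo the flip
    return None                          # toy search: retry with another seed

# 945 = 27 * 35; with 6-bit factors the trivial 1 * 945 is out of range.
print(factor_stochastic(945))
```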

This may provide an example of “quantum-inspired” computing, whereby unconventional classical computer architectures may be used to address problems that were previously thought to require true quantum computing methods.

Read the news article by Dr. Nikonov, available without charge.

Access the research article, “Integer factorization using stochastic magnetic tunnel junctions”.

 

Beyond the Qubit: Quantum Computing, Practical Alternatives, and Memory-Driven Computing

Hewlett Packard Enterprise recently published a white paper (PDF, 770 KB) by Ray Beausoleil and Rebecca Lewington, arguing that for most computing applications in the near future, novel approaches to classical computing may offer much greater performance than quantum computing.

They suggest that quantum computers may be ideal for modeling quantum systems, and for special problems such as decryption once noise can be overcome. However, problems associated with Big Data analytics call for a completely different approach. The paper identifies the “Dot-Product Engine”, an analog neuromorphic processor built around a memristor array, as a more practical way to address the demands of Deep Learning on large databases.
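The crossbar idea behind a dot-product engine is compact enough to sketch numerically: each matrix weight is stored as a conductance, inputs are applied as row voltages, and Ohm’s law plus Kirchhoff’s current law deliver the vector-matrix product as column currents in one analog step. The Python sketch below shows only this ideal arithmetic, with illustrative conductance and voltage scales that are assumptions of the sketch, not HPE’s design parameters; real arrays also contend with noise, wire resistance, and limited conductance precision.

```python
import numpy as np

# Idealized memristor crossbar: weights live as conductances G (siemens),
# inputs become row voltages V, and the column currents I = V @ G are the
# vector-matrix product, computed "for free" by the physics.
rng = np.random.default_rng(0)
W = rng.uniform(-1.0, 1.0, size=(4, 3))        # signed weights

# Physical conductances are non-negative, so signed weights use a
# differential pair of columns (G+ and G-) per output.
g_scale = 1e-4                                 # map |w| <= 1 to <= 100 uS
G_pos = g_scale * np.clip(W, 0, None)
G_neg = g_scale * np.clip(-W, 0, None)

V = rng.uniform(0.0, 0.2, size=4)              # input voltages (volts)

I = V @ G_pos - V @ G_neg                      # differential column currents
print(np.allclose(I, g_scale * (V @ W)))       # True: currents encode V . W
```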

They further identify a class of optimization problems, such as the traveling salesman problem, that can be addressed by a novel Optical Computing technology known as the Coherent Ising Machine. They project performance superior to that of superconducting quantum annealers, which have been proposed for similar problems.
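An Ising machine, whether optical or superconducting, searches for low-energy configurations of coupled spins; the optimization problem is first compiled into the spin couplings so that the ground state encodes the answer. As a minimal illustration of the objective (not of the optics), the following sketch anneals a small random Ising instance in software; the instance size, couplings, and cooling schedule are arbitrary choices of this sketch.

```python
import math
import random

# What an Ising machine minimizes: the energy
#     E(s) = -sum_{i<j} J[i][j] * s[i] * s[j],   with s[i] in {-1, +1}.
rng = random.Random(0)
n = 12
J = {(i, j): rng.uniform(-1, 1) for i in range(n) for j in range(i + 1, n)}

def field(s, i):
    """Local field h_i = sum_j J_ij s_j felt by spin i."""
    return sum(J[(min(i, j), max(i, j))] * s[j] for j in range(n) if j != i)

s = [rng.choice((-1, 1)) for _ in range(n)]
for step in range(20_000):
    T = max(0.01, 2.0 * (1 - step / 20_000))  # annealing schedule
    i = rng.randrange(n)
    dE = 2.0 * s[i] * field(s, i)             # energy change if s[i] flips
    if dE <= 0 or rng.random() < math.exp(-dE / T):
        s[i] = -s[i]                          # accept the flip

E = -sum(J[(i, j)] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
print("spins:", s, "energy:", round(E, 3))
```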

Finally, they promote the paradigm of Memory-Driven Computing for Big-Data analytics, with close integration of memory chips and heterogeneous processors within a high-speed interconnect fabric.

A video addressing similar issues is also available on the HPE Discover website.

 

Superconducting Neurons Could Match the Power Efficiency of the Brain

A new research paper from MIT and Colgate University describes ultra-low-power electronic neurons based on superconducting nanowires. Read an overview of the research at MIT Technology Review.

Based on the switching properties of 100-nm-wide superconducting nanowires, the researchers have developed a superconducting neuron with a firing threshold, a refractory period, and a travel time that can be adjusted according to the circuit design. Furthermore, this superconducting neuron can also be used to trigger or inhibit other neurons via a synapse, with a significant fanout capability.
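The behavior described above maps onto the standard spiking-neuron abstraction: integrate input toward a firing threshold, emit a spike, then ignore input during a refractory period. The Python sketch below is a generic leaky integrate-and-fire model with arbitrary parameter values, offered only to make those terms concrete; it is not the authors’ nanowire circuit model.

```python
import numpy as np

# Generic leaky integrate-and-fire neuron with a firing threshold and a
# refractory period.  All values are in arbitrary units and are
# illustrative assumptions, not parameters from the nanowire paper.
dt, steps = 0.1, 1000            # time step and number of steps
tau, v_th, t_ref = 5.0, 1.0, 20  # leak time constant, threshold, refractory steps

rng = np.random.default_rng(0)
drive = 0.25 + 0.05 * rng.standard_normal(steps)  # noisy input current

v, ref_count, spikes = 0.0, 0, []
for t in range(steps):
    if ref_count > 0:
        ref_count -= 1           # refractory: clamp state, ignore input
        v = 0.0
        continue
    v += dt * (-v / tau + drive[t])  # leaky integration of the input
    if v >= v_th:                # threshold crossed: emit a spike
        spikes.append(t)
        v = 0.0
        ref_count = t_ref        # enter the refractory period

print(f"{len(spikes)} spikes; first few at steps {spikes[:5]}")
```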

Simulations show that a neuromorphic computing system based on these superconducting neurons and synapses should have a figure of merit (synaptic operations per second per watt) of more than 10^14, including an estimate of the cooling power required for cryogenic devices. This is comparable to the figure of merit of biological brains, and four orders of magnitude larger than the corresponding values for semiconducting neuromorphic computers, thanks to the very low power of superconducting devices and interconnects. Furthermore, the superconducting neurons are much faster than biological neurons, and faster than semiconducting neurons as well. While this initial research is mostly theoretical, it is promising for the development of a superconducting neuromorphic computer.

A preprint of the research article, “A Power-Efficient Artificial Neuron Using Superconducting Nanowires”, by Emily Toomey, Ken Segall, and Karl Berggren, is available at arXiv.org.

 

Nonvolatile Memory for Efficient Implementation of Neuromorphic Computing

Special Issue of IEEE Journal on Exploratory Solid-State Computational Devices and Circuits (JXCDC)

In the June issue of JXCDC, Prof. Shimeng Yu of Georgia Tech introduces a special issue of six articles addressing how certain nonvolatile memory (NVM) chips can carry out neural network computations faster and more efficiently than conventional CMOS. Neural networks for practical problems may have many hidden layers and require the storage of millions of parameters during data processing. This matters for applications of artificial intelligence and deep learning in edge systems, where power and circuit density are constrained.

The NVM technologies of interest here span a variety of devices, including floating-gate or charge-trap memories, resistive memories, phase-change memories, spintronic memories, and ferroelectric memories. The topics of the six papers range from neuron device design using emerging spintronic effects and new computational models implemented with spintronic devices, to neural network performance analysis under realistic device properties such as those of floating-gate and phase-change memory.

The introduction to the special issue is available on IEEE Xplore.

The full set of articles in this issue is also available on IEEE Xplore.

Note that JXCDC is an open-access journal, so that both the introduction and the articles are freely accessible to all readers.

 

Neural Algorithms and Computing Beyond Moore’s Law

A variety of novel algorithms can be obtained by observing the neural structure of different parts of the brain.

In the April issue of the Communications of the ACM, Dr. James Aimone of Sandia National Laboratories presents an overview of how neural structures in the brain are inspiring new architectures and algorithms for electronic computing. Many of these neural structures are only starting to be understood, and they extend well beyond the sensory neural networks that inspired much of the recent development of deep learning. Other networks and algorithms now being explored include temporal neural networks, Bayesian neural algorithms, dynamic memory algorithms, cognitive inference algorithms, and self-organizing algorithms. The author suggests that future neuroscience research will continue to inspire computing paradigms that are fast, efficient, compact, and scalable.

A video overview of the article is available here.

The complete article is available here.

 

Probabilistic Bits - p-bits

Bridging the gap between classical bits and quantum bits

Classical computing is based on a bit, a device that can be either a ‘0’ or a ‘1’, but not both at the same time, which switches only when an operation occurs. In contrast, quantum computing is based on a quantum bit (q-bit or qubit), a device that is represented as a quantum superposition of ‘0’ and ‘1’ at the same time. A third type of device, distinct from the other two, is a classical probabilistic bit or p-bit, which naturally fluctuates between ‘0’ and ‘1’. A research group at Purdue University, under the direction of Prof. Supriyo Datta, has shown how these types of p-bits can provide the basis for a type of probabilistic computing.
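The update rule commonly used to model a p-bit in the probabilistic spin logic literature is simple enough to simulate directly: the output is a random ±1 whose odds are biased by an input through a tanh. The Python sketch below follows that behavioral form; treat the exact normalization as an assumption of this sketch rather than a statement about the Purdue group’s hardware.

```python
import math
import random

# Behavioral p-bit: the output fluctuates between -1 and +1, with an
# input I biasing the odds through a tanh.  I = 0 gives an unbiased
# coin flip; a large |I| pins the bit, recovering a deterministic bit.

def p_bit(I, rng=random):
    """One stochastic update: P(m = +1) = (1 + tanh(I)) / 2."""
    return +1 if rng.uniform(-1, 1) < math.tanh(I) else -1

for I in (-2.0, 0.0, 2.0):
    frac_up = sum(p_bit(I) == +1 for _ in range(10_000)) / 10_000
    print(f"I = {I:+.1f}: fraction of +1 outputs = {frac_up:.3f}")
    # expect (1 + tanh(I)) / 2, i.e. about 0.018, 0.500, 0.982
```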

The authors suggest that these devices can be implemented using low-barrier magnetic memory cells similar to those in conventional memory technologies. They further suggest that the p-bit may represent a “poor man’s qubit”, and that systems of p-bits can address some problems that might otherwise seem to require quantum computing, such as quantum annealing. P-bits can also serve as binary stochastic neurons for stochastic machine learning.

The paper, “P-Bits for Probabilistic Spin Logic”, by Kerem Camsari, Brian Sutton, and Supriyo Datta, is available here.

A brief overview of this work is available here.

A video presentation by Prof. Datta on this topic is available here.

 

New Report on the Future of Heterogeneous Computing from US Dept. of Energy

Follows the 2018 Workshop on Extreme Heterogeneity led by Oak Ridge National Laboratory

In the past decade, the nature of high performance computing (HPC) has changed. Previously, HPC relied on CPUs whose performance was growing exponentially according to Moore’s Law. With that growth ending, continued gains must come from extremely heterogeneous computer architectures that incorporate increasing numbers of CPUs, GPUs, accelerators, and FPGAs, connected by a variety of memory systems and interconnects. This creates a series of challenges that HPC users must address in order to use these new resources effectively. The workshop, chaired by Dr. Jeffrey Vetter of Oak Ridge National Lab, identified areas of R&D to overcome these problems, and suggested that machine learning be applied to matching the diverse processors available to specific computations.

An overview of the report is presented here.

Information on the DoE Workshop is available here.

A complete copy of the report is available here.

 

In-Memory Computing Challenges Come Into Focus

Researchers are digging into ways around the von Neumann bottleneck.

Semiconductor Engineering online has a feature article on In-Memory Computing, available here.

The article surveys a variety of developing memory technologies and applications that perform logic within the memory itself, rather than shuttling data back and forth to a CPU. Data movement has become the major performance bottleneck in conventional von Neumann architectures. One class of in-memory computing consists of neural networks for pattern recognition, which have received great attention recently, and device technologies that can implement neural networks efficiently are being examined.

It describes research into new devices and architectures at HP, IBM, and IMEC, and at Stanford, Berkeley, Michigan, Minnesota, and Tsinghua universities. Both digital and analog solutions are being examined. Memory technologies include resistive RAMs (RRAMs), electrochemical RAMs (ECRAMs), and flash memories.

It is not yet clear which devices will be incorporated into next-generation computing systems, but demand for data analysis using neural network and other processors will extend from IoT and mobile devices all the way to data centers.

 

An Outlook for Quantum Computing

Proceedings of the IEEE recently published an overview of the present and future status of quantum computing, by Dmitri Maslov, Yunseong Nam, and Jungsang Kim, of the US National Science Foundation and IonQ, Inc.

Read the overview here.

This work presents a qubit technology based on trapped ions coupled by optical pulses. While this technology differs from the superconducting integrated circuit approach being pursued by other projects, it has the advantages of not requiring deep cryogenic temperatures for operation and of offering long coherence times. The ion traps themselves can be microfabricated on a chip.

Current quantum computing technologies are noisy intermediate-scale quantum (NISQ) systems, which cannot carry out the desired quantum algorithms without quantum error correction, a capability that is not yet available. The next major step is to demonstrate that a quantum computer can solve a problem of practical utility that cannot otherwise be addressed, such as various kinds of quantum simulation. The transition from proof-of-concept devices to useful computational systems faces a set of new technical challenges, ranging from improved and expanded qubit hardware, to control and operating systems, to innovations in algorithms and applications.

This issue of Proceedings of the IEEE also contains a set of other articles on alternative modes of computing. See here for the Table of Contents.

 

Artificial Synapses for AI

IEEE Spectrum describes recent progress in the development of nanoscale memory cells that can serve as variable artificial synapses for artificial neural networks; the article is available here.

The article describes work at IBM Research on an electrochemical random-access memory cell, or ECRAM, in which a gate drives lithium ions into or out of a tungsten trioxide channel, changing the channel resistance. Neural network applications require a resistance change that is precise, controlled by the drive voltage, rapid, and repeatedly reversible. The work was presented at the International Electron Devices Meeting (IEDM) in San Francisco in December. Other related work reported at IEDM included novel ferroelectric FETs (FeFETs) from Purdue University, the University of Notre Dame, and Samsung, which may also be applied to chips for neural networks.
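Why precision and reversibility matter can be seen in a toy model of an analog synapse being programmed by pulses: during training, each potentiation or depression pulse nudges a stored conductance, and if the step size drifts as the device approaches its limits, the updates become asymmetric and training accuracy suffers. The Python sketch below contrasts an ideal linear device with a saturating one; all device parameters here are illustrative assumptions, not measurements of the IBM ECRAM.

```python
# Toy analog-synapse programming: apply n_pulses potentiation pulses,
# then n_pulses depression pulses, and record the conductance trace.
g_min, g_max, n_pulses = 0.0, 1.0, 50

def pulse_train(update):
    """update(g, s) returns the conductance step for pulse sign s = +/-1."""
    g, trace = g_min, []
    for _ in range(n_pulses):                 # potentiation pulses
        g = min(g_max, g + update(g, +1))
        trace.append(g)
    for _ in range(n_pulses):                 # depression pulses
        g = max(g_min, g + update(g, -1))
        trace.append(g)
    return trace

# Ideal device: every pulse moves the conductance by the same step.
linear = pulse_train(lambda g, s: s * (g_max - g_min) / n_pulses)

# Saturating device: the step shrinks as g nears the rail it is being
# pushed toward, so up- and down-updates become asymmetric.
saturating = pulse_train(
    lambda g, s: 0.1 * (g_max - g) if s > 0 else -0.1 * (g - g_min))

print("linear     after up/down:",
      round(linear[n_pulses - 1], 3), round(linear[-1], 3))
print("saturating after up/down:",
      round(saturating[n_pulses - 1], 3), round(saturating[-1], 3))
```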