Feature Articles


Neural Algorithms and Computing Beyond Moore’s Law

A variety of novel algorithms can be derived by studying the neural structure of different parts of the brain.

In the April issue of the Communications of the ACM, Dr. James Aimone of Sandia National Laboratories presented an overview of how neural structures in the brain are inspiring new architectures and algorithms for electronic computing. Many of these neural structures are only beginning to be understood, and they are not limited to the sensory neural networks that inspired much of the recent development of deep learning. Other networks and algorithms now being explored include temporal neural networks, Bayesian neural algorithms, dynamic memory algorithms, cognitive inference algorithms, and self-organizing algorithms. The author suggests that neuroscience research may continue to inspire computing paradigms that are fast, efficient, compact, and scalable.
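
As a concrete example of the temporal class, spiking neurons encode information in the timing of discrete pulses rather than in continuous activations. The following is a minimal sketch of a leaky integrate-and-fire neuron, a standard model in this family; the parameter values are illustrative, not taken from the article.

    import numpy as np

    def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
        # Leaky integrate-and-fire: the membrane potential leaks toward rest
        # and integrates input; a threshold crossing emits a timed spike.
        v = 0.0
        spike_times = []
        for t, i_in in enumerate(input_current):
            v += dt * (-v / tau + i_in)
            if v >= v_thresh:
                spike_times.append(t * dt)
                v = v_reset
        return spike_times

    # A constant drive yields a regular spike train whose rate encodes the input.
    print(lif_neuron(np.full(100, 0.08)))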

A video overview of the article is available here.

The complete article is available here.


Probabilistic Bits - p-bits

Bridging the gap between classical bits and quantum bits

Classical computing is based on the bit, a device that can be either a ‘0’ or a ‘1’, but not both at the same time, and which switches only when an operation acts on it. In contrast, quantum computing is based on the quantum bit (qubit), a device that can exist in a quantum superposition of ‘0’ and ‘1’ at the same time. A third type of device, distinct from the other two, is the classical probabilistic bit, or p-bit, which fluctuates naturally between ‘0’ and ‘1’. A research group at Purdue University, under the direction of Prof. Supriyo Datta, has shown how p-bits can provide the basis for a form of probabilistic computing.

The authors suggest that these devices can be implemented using low-barrier magnetic memory cells similar to those in conventional memory technologies. They further indicate that the p-bit may represent a “poor man’s qubit”, and that systems of p-bits can address some problems that might otherwise seem to require quantum computing, such as quantum annealing. P-bits can also serve as binary stochastic neurons for stochastic machine learning.
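
The p-bit update rule in the paper is simple enough to state compactly: each p-bit takes the sign of the hyperbolic tangent of its input, shifted by a uniform random number, and the inputs couple the p-bits to one another. The sketch below simulates a small network under these equations; the coupling values are an illustrative choice, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def pbit_step(m, J, h, beta=1.0):
        # One sweep of the p-bit update rule:
        #   m_i = sgn( tanh(beta * I_i) - r ),  r uniform in [-1, 1],
        # with input I_i = h_i + sum_j J_ij * m_j.
        for i in rng.permutation(len(m)):
            I = h[i] + J[i] @ m
            m[i] = 1 if np.tanh(beta * I) > rng.uniform(-1, 1) else -1
        return m

    # Two ferromagnetically coupled p-bits: each fluctuates, but they mostly agree.
    J = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
    h = np.zeros(2)
    m = np.array([1, -1])
    samples = [tuple(pbit_step(m, J, h)) for _ in range(1000)]
    agree = sum(a == b for a, b in samples) / len(samples)
    print(f"fraction of samples with m0 == m1: {agree:.2f}")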

The paper, “P-Bits for Probabilistic Spin Logic”, by Kerem Camsari, Brian Sutton, and Supriyo Datta, is available here.

A brief overview of this work is available here.

A video presentation by Prof. Datta on this topic is available here.


New Report on the Future of Heterogeneous Computing from US Dept. of Energy

Follows 2018 Workshop on Extreme Heterogeneity led by Oak Ridge National Lab

In the past decade, the nature of high performance computing (HPC) has changed. Previously, HPC relied on CPUs whose performance grew exponentially in line with Moore’s Law. With that scaling ending, continued growth in performance must come from extremely heterogeneous computer architectures that combine increasing numbers of CPUs, GPUs, accelerators, and FPGAs, connected by a variety of memory systems and interconnects. This poses a series of challenges for HPC users, which must be addressed before these new resources can be used effectively. The workshop, chaired by Dr. Jeffrey Vetter of Oak Ridge National Lab, identified areas of R&D for overcoming these problems, and suggested, among other things, applying machine learning to map specific computations onto the diverse processors available.
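
To make that mapping problem concrete, here is a toy cost-based dispatcher; all kernel names and timings are hypothetical, and in the ML-guided approach the workshop envisions, a learned performance model would supply the predicted costs rather than a hard-coded table.

    # Toy illustration of mapping kernels onto a heterogeneous machine.
    # The cost table stands in for a learned performance model.
    PREDICTED_MS = {
        ("dense_matmul", "GPU"): 2.1,  ("dense_matmul", "CPU"): 35.0,
        ("sparse_solve", "GPU"): 18.0, ("sparse_solve", "CPU"): 9.5,
        ("fft",          "FPGA"): 1.2, ("fft",          "CPU"): 6.8,
    }

    def dispatch(kernel):
        # Pick the device with the lowest predicted runtime for this kernel.
        candidates = {dev: ms for (k, dev), ms in PREDICTED_MS.items() if k == kernel}
        return min(candidates, key=candidates.get)

    for kernel in ("dense_matmul", "sparse_solve", "fft"):
        print(kernel, "->", dispatch(kernel))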

An overview of the report is presented here.

Information on the DoE Workshop is available here.

A complete copy of the report is available here.


In-Memory Computing Challenges Come Into Focus

Researchers digging into ways around the von Neumann bottleneck.

Semiconductor Engineering online has a feature article on In-Memory Computing, available here.

It surveys a variety of developing memory technologies and applications that perform logic within the memory itself, rather than shuttling data back and forth to a CPU; data movement has become the major performance bottleneck in conventional von Neumann architectures. One class of in-memory computing implements neural networks for pattern recognition, which have received great attention recently, and device technologies that can realize such networks efficiently are being examined.
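
The analog flavor of in-memory computing typically stores a weight matrix as conductances in a resistive crossbar, so that a matrix-vector product falls out of Ohm’s and Kirchhoff’s laws: applying voltages to the rows produces column currents equal to the product of the conductance matrix and the voltage vector, in a single step. Below is a minimal idealized sketch, ignoring device noise, wire resistance, and quantization.

    import numpy as np

    # Weights stored as conductances G (siemens) at each crosspoint; a real
    # array would split positive/negative weights across two devices.
    G = np.array([[1.0, 0.2],
                  [0.5, 0.9],
                  [0.1, 0.4]]) * 1e-6   # 3 rows x 2 columns

    def crossbar_mvm(G, v_rows):
        # Column currents from row voltages: each column sums I = G * V
        # contributions, which is exactly a matrix-vector multiply.
        return G.T @ v_rows

    v = np.array([0.3, 0.7, 0.1])       # input vector encoded as row voltages
    print(crossbar_mvm(G, v))           # output currents, one per column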

The article discusses research into new devices and architectures at HP, IBM, and IMEC, and at Stanford, Berkeley, Michigan, Minnesota, and Tsinghua universities. Both digital and analog solutions are being examined. Memory technologies include resistive RAMs (RRAMs), electrochemical RAMs (ECRAMs), and flash memories.

It is not yet clear which devices will be incorporated into next-generation computing systems, but future demand for data analysis using neural-network and other processors will be widespread, from IoT and mobile devices all the way to data centers.


An Outlook for Quantum Computing

Proceedings of the IEEE recently published an overview of the present and future status of quantum computing by Dmitri Maslov, Yunseong Nam, and Jungsang Kim, of the US National Science Foundation and IonQ, Inc.

Read the overview here.

This work presents a qubit technology based on trapped ions coupled by optical pulses. While this approach differs from the superconducting integrated-circuit approach being pursued by other projects, it has the advantage of not requiring deep cryogenic temperatures for operation, and it also offers long coherence times. The ion traps themselves can be microfabricated on a chip.

Current quantum computing technologies are noisy intermediate-scale quantum (NISQ) systems, which cannot carry out desired quantum algorithms without quantum error correction, a capability that is not yet available. The next major step is to demonstrate that a quantum computer can solve a problem of practical utility that cannot otherwise be addressed, such as various kinds of quantum simulations. The transition from proof-of-concept devices to useful computational systems faces a set of new technical challenges, ranging from improving and expanding qubit hardware, to developing control and operating systems, to innovations in algorithms and applications.
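
For contrast with the p-bit discussed above, the defining feature of the qubit is amplitude superposition and entanglement. The toy statevector simulation below prepares a two-qubit Bell state with a Hadamard and a CNOT, the kind of primitive circuit NISQ machines (trapped-ion or superconducting) execute; it is purely illustrative and not tied to the paper.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    state = np.zeros(4); state[0] = 1.0            # start in |00>
    state = np.kron(H, np.eye(2)) @ state          # superpose qubit 0
    state = CNOT @ state                           # entangle: Bell state
    print(np.round(state, 3))                      # amplitudes of |00>,|01>,|10>,|11>
    print("P(00), P(11):", np.abs(state[[0, 3]])**2)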

This issue of Proceedings of the IEEE also contains a set of other articles on alternative modes of computing. See here for the Table of Contents.


Artificial Synapses for AI

IEEE Spectrum describes recent progress in the development of nanoscale memory cells that may be applied as variable artificial synapses for artificial neural networks, reported here.

The article describes work at IBM Research on an electrochemical random-access memory cell, or ECRAM, in which a gate drives lithium ions into or out of a tungsten trioxide channel, changing the channel resistance. Neural network applications require a resistance change that is precise, dependent on the drive voltage, rapid, and repeatedly reversible. This work was presented at the International Electron Devices Meeting (IEDM) in San Francisco in December. Other related work reported at IEDM included novel ferroelectric FETs (FeFETs) from Purdue University, the University of Notre Dame, and Samsung, which may also be applied to chips for neural networks.
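
The appeal of this device class for training is that each gate pulse shifts a bounded amount of charge, so conductance steps can be small, nearly symmetric, and repeatable, which is what incremental weight updates need. Below is a toy behavioral model of such a programmable synapse; the step size and bounds are illustrative, not measured values from IBM’s device.

    class EcramSynapse:
        # Toy synaptic weight: gate pulses nudge the conductance up or down
        # in nearly uniform steps, mimicking ions moving into/out of the channel.
        def __init__(self, g=0.5, step=0.01, g_min=0.0, g_max=1.0):
            self.g, self.step, self.g_min, self.g_max = g, step, g_min, g_max

        def pulse(self, n):
            # Apply n potentiating (n > 0) or depressing (n < 0) gate pulses.
            self.g = min(self.g_max, max(self.g_min, self.g + n * self.step))
            return self.g

    syn = EcramSynapse()
    print(syn.pulse(+10))   # potentiate: 0.5 -> 0.6
    print(syn.pulse(-25))   # depress:    0.6 -> 0.35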