Feature Articles - 2016

 

Brain-Inspired Machines: What exactly are we looking for?

This feature article in IEEE Pulse Magazine addresses the subject of brain-inspired, or neuromorphic, computing. This is a broad concept that covers many different device and architectural approaches to computing. Researchers are not actually trying to design a human brain, which in any case we do not fully understand. But unlike traditional computers, which follow the von Neumann digital architecture, brains combine analog and digital processing, merge logic with memory, and are massively parallel, energy efficient, tolerant of noise and defects, and self-programming. Furthermore, they are optimized for problems (such as image and language processing) that can be difficult for traditional computers. We can expect that brain and computer research will continue to inspire each other for decades to come.

Read the article here.

There are also many articles on brain-inspired or neuromorphic computing in the proceedings of the 1st IEEE International Conference on Rebooting Computing (ICRC 2016). See here for a list of papers.

 

Roadmap for Disruptive Computer Technologies

A new European Center of Excellence in High-Performance Computing (Eurolab-4-HPC) has been created to identify new research directions for future computing, particularly for continued progress in the post-exascale era of 2022-2030. An overview of recent reports from Eurolab is given here. Technologies addressed include CMOS scaling, die stacking and 3D chip technologies, non-volatile memory (NVM), photonics, resistive computing, neuromorphic computing, quantum computing, and nanotube, graphene, and diamond transistors.

One report is entitled “Preliminary Eurolab-4-HPC Roadmap.”
The other is “Report on Disruptive Technologies for the years 2022-2030.”
The lead author of both reports is Prof. Theo Ungerer of the University of Augsburg, Germany.

 

Impact of Future Technologies on Computer Architecture

This article in IEEE Micro provides the transcript of a panel discussion at last year’s 4th Workshop on Computer Architecture Research Directions (CARD 2015), held in Portland, Oregon, in conjunction with the International Symposium on Computer Architecture (ISCA 2015). Panelists Fred Chong and Igor Markov were asked to comment on the impact on future computer architectures of a variety of new technologies and approaches, including quantum computing, approximate computing, and new memory devices. Prof. Chong was relatively optimistic about all of these, while Prof. Markov was more pessimistic.

Read more

A video of the panel presentation is available here.

 

Neuromorphic Computing and the Brain

This article on HPCWire.com provides an overview of a presentation on Neuromorphic Computing and the Brain given by Prof. Karlheinz Meier of Heidelberg University at the International Supercomputing Conference (ISC-2016), held in Frankfurt, Germany, in June 2016. The article describes one of the key problems in neuromorphic computing: the training of the neural networks, which takes orders of magnitude longer than the switching of the elementary devices themselves.

Read more

A video of Prof. Meier’s ISC presentation is available online here.

An overview of ISC-2016 is available here.
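
To make the scale of this training bottleneck concrete, here is a rough back-of-the-envelope calculation in Python. All of the numbers are illustrative assumptions (they are not taken from Prof. Meier’s presentation); the point is simply that the wall-clock time of learning is set by the enormous number of weight updates, not by the speed of any individual device.

    # All numbers below are illustrative assumptions, not figures from the talk.
    device_switch_time_s = 10e-9       # assumed time for one synaptic device to switch
    synapses             = 1e9         # assumed number of synapses in the network
    training_examples    = 1e6         # assumed number of training presentations
    epochs               = 100         # assumed passes over the training data

    # One weight update per synapse per example presentation.
    total_updates = synapses * training_examples * epochs
    serial_training_time_s = total_updates * device_switch_time_s

    print(f"One device switching event: {device_switch_time_s:.0e} s")
    print(f"Total weight updates:       {total_updates:.1e}")
    print(f"Serial training time:       {serial_training_time_s:.1e} s "
          f"(about {serial_training_time_s / 3600:.1e} hours)")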

 

Ultra-Low-Power Chips

This article in IEEE Design & Test magazine focuses on ultra-low-power chips for the Internet of Things (IoT). These may include not only digital logic but also non-volatile memory, analog components, and sensors. Power consumption can be reduced to the point where energy harvesting techniques may be sufficient for operation. For IoT, low cost and low power may be more critical than the traditional metrics of speed and performance.

Read more
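
As a rough illustration of the energy-harvesting point, the short Python sketch below checks whether an aggressively duty-cycled chip could live within a harvested power budget. All of the power and duty-cycle figures are assumptions chosen for illustration, not values from the article.

    # All power and duty-cycle figures below are illustrative assumptions.
    harvested_power_w = 100e-6    # assumed, e.g. a small indoor photovoltaic cell
    active_power_w    = 1e-3      # assumed chip power while sensing/transmitting
    sleep_power_w     = 1e-6      # assumed deep-sleep power
    active_fraction   = 0.01      # assumed duty cycle: active 1% of the time

    # Time-weighted average of active and sleep power.
    average_power_w = (active_fraction * active_power_w
                       + (1 - active_fraction) * sleep_power_w)

    print(f"Average power draw: {average_power_w * 1e6:.1f} uW")
    print(f"Self-powered operation feasible: {average_power_w <= harvested_power_w}")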

 

Near-Threshold Computing

This online article in Semiconductor Engineering features “Near-Threshold Computing,” in which the transistors operate at a reduced supply voltage near the switching threshold. This can sharply reduce system power, which is critical for applications such as mobile computing and the Internet of Things. Device speed is reduced somewhat, but that may be acceptable for some applications. However, reduced voltage also increases device variability and degrades reliability, effects that must be properly modeled by the design tools. To maximize the power reduction, the reduced voltage must be applied to both logic and memory circuits. While these approaches have been known for some time, they are now becoming more widespread in a variety of applications.

Read more
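
A minimal sketch of why near-threshold operation saves so much power, based on the standard dynamic-power relation P = a * C * V^2 * f (a is the switching activity, C the switched capacitance, V the supply voltage, f the clock frequency): lowering the voltage cuts power quadratically, and the accompanying clock-frequency reduction cuts it further. The voltage, capacitance, and frequency values below are assumptions for illustration only, not figures from the article.

    def dynamic_power(alpha, c_farads, v_volts, f_hz):
        """Switching (dynamic) power of a CMOS circuit: alpha * C * V^2 * f."""
        return alpha * c_farads * v_volts**2 * f_hz

    activity = 0.1       # assumed switching activity factor
    cap      = 1e-9      # assumed total switched capacitance, 1 nF

    nominal        = dynamic_power(activity, cap, v_volts=1.0, f_hz=1e9)     # 1.0 V, 1 GHz clock
    near_threshold = dynamic_power(activity, cap, v_volts=0.5, f_hz=200e6)   # 0.5 V, slower clock

    print(f"Nominal-voltage power:  {nominal * 1e3:.0f} mW")
    print(f"Near-threshold power:   {near_threshold * 1e3:.0f} mW")
    print(f"Power reduction:        {nominal / near_threshold:.0f}x")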

 

Brain-Like Computing

This online article in Computing Now, the monthly online magazine of the IEEE Computer Society, features Neuromorphic Computing, which uses brain-inspired microarchitectures in an attempt to achieve brain-like performance in applications such as machine learning, image recognition, and cognitive computing. The article links to several other recent research articles in this field, at both the hardware and software levels, including “Architecting for Causal Intelligence at Nanoscale” by S. Khasanvis et al.

Read the article “Architecting for Causal Intelligence at Nanoscale” here.

 

Energy Limits of Future Computing

This online article in Semiconductor Engineering (SemiEngineering.com) reviews the scaling arguments of a recent analysis from the SIA and SRC (“Rebooting the IT Revolution: A Call to Action”). The analysis projects that the total energy dissipated by computing will reach unsustainable levels within several decades unless there are major fundamental changes in devices and architectures.

The full SIA/SRC report is available here.
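
The flavor of this scaling argument can be illustrated with a short Python projection: if the number of bit operations per year keeps growing exponentially while the energy per bit operation stays roughly fixed, total computing energy eventually collides with the available energy supply. The growth rate, energy per bit, and world energy figure used below are order-of-magnitude assumptions chosen for illustration; they are not the specific numbers used in the SIA/SRC report.

    # All numbers below are order-of-magnitude assumptions for illustration,
    # not the specific figures used in the SIA/SRC report.
    energy_per_bit_j = 1e-14     # assumed benchmark energy per bit operation (held constant)
    bits_2016        = 1e24      # assumed global bit operations per year in 2016
    doubling_years   = 1.5       # assumed doubling period of the computing workload
    world_energy_j   = 5e20      # rough order of magnitude of annual world energy production

    for year in range(2016, 2077, 10):
        bits = bits_2016 * 2 ** ((year - 2016) / doubling_years)
        energy_j = bits * energy_per_bit_j
        note = "  <-- exceeds assumed world energy supply" if energy_j > world_energy_j else ""
        print(f"{year}: {energy_j:.1e} J per year{note}")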

 

Nanomagnets and the Limits of Low-Power Computing

This article in IEEE Spectrum addresses the fundamental lower limit on the energy required to switch a bit, featuring a recent experimental demonstration using nanoscale magnets roughly 100 nm across. The measured switching energy closely approaches the theoretical Landauer limit of kT ln(2), which is orders of magnitude smaller than the switching energy of conventional transistors.

The research article from the University of California at Berkeley is available here.
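
For reference, the Landauer limit kT ln(2) is easy to evaluate at room temperature, as in the short Python snippet below. The ~1 fJ figure used for a conventional transistor switching event is an assumed order-of-magnitude value for comparison, not a number taken from the article.

    import math

    k_boltzmann   = 1.380649e-23     # Boltzmann constant, J/K
    temperature_k = 300.0            # room temperature
    landauer_limit_j = k_boltzmann * temperature_k * math.log(2)

    typical_cmos_switch_j = 1e-15    # assumed ~1 fJ per conventional transistor switch

    print(f"Landauer limit kT*ln(2) at 300 K: {landauer_limit_j:.2e} J")
    print(f"Assumed 1 fJ CMOS switch is {typical_cmos_switch_j / landauer_limit_j:.1e} "
          f"times larger")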

 

Artificial Neural Networks and Deep Learning

This article in IEEE Spectrum reviews recent progress and future prospects of artificial neural networks for solving computationally difficult problems as diverse as pattern recognition, language translation, and medical diagnostics. These electronic systems are inspired by biological systems but do not necessarily emulate them in detail. The article describes progress in “recurrent networks,” which incorporate feedback in a way that provides short-term memory. This enables powerful and efficient Deep Learning without the need for explicit programming. Further description of recurrent networks is given here.
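
A minimal sketch of the feedback idea, assuming a generic “vanilla” recurrent cell rather than the specific networks discussed in the article: the hidden state computed at one time step is fed back as an input to the next step, which is what gives the network its short-term memory.

    import numpy as np

    # Minimal "vanilla" recurrent cell: the hidden state h is fed back at every
    # time step, giving the network a short-term memory of earlier inputs.
    # Sizes and weights are arbitrary illustrations, not taken from the article.
    rng = np.random.default_rng(0)
    input_size, hidden_size = 4, 8

    W_xh = 0.1 * rng.standard_normal((hidden_size, input_size))    # input  -> hidden
    W_hh = 0.1 * rng.standard_normal((hidden_size, hidden_size))   # hidden -> hidden (feedback)
    b_h  = np.zeros(hidden_size)

    def rnn_step(x_t, h_prev):
        """One time step: mix the new input with the fed-back hidden state."""
        return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

    h = np.zeros(hidden_size)                        # initial (empty) memory
    for t, x_t in enumerate(rng.standard_normal((5, input_size))):
        h = rnn_step(x_t, h)                         # h now summarizes inputs up to step t
        print(f"step {t}: hidden-state norm = {np.linalg.norm(h):.3f}")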

 

Decoupled Control and Data Processing for Approximate Near-Threshold-Voltage Computing

As the supply voltage is reduced in order to lower system power, circuit reliability may be degraded. This article by Akturk, Kim, and Karpuzcu in IEEE Micro magazine proposes to design a processor so that error-tolerant data processing runs on low-power, near-threshold cores, while high-reliability cores are reserved for the less error-tolerant control tasks. This provides an example of Heterogeneous Computing, which is also the topic of other articles in this special issue. See here for the rest of the issue.
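
The division of labor can be pictured with the schematic Python sketch below. It is not the authors’ implementation, only an illustration of the dispatch policy: error-tolerant data-processing tasks go to the low-voltage cores, while error-sensitive control tasks stay on the high-reliability cores. The task names are hypothetical examples.

    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        error_tolerant: bool    # True for approximate data processing, False for control

    def dispatch(task: Task) -> str:
        """Choose a core type based on the task's error tolerance."""
        if task.error_tolerant:
            return "near-threshold core (low power, occasional errors acceptable)"
        return "nominal-voltage core (higher power, high reliability)"

    workload = [
        Task("pixel-filtering kernel", error_tolerant=True),
        Task("loop and branch control", error_tolerant=False),
        Task("neural-network inference", error_tolerant=True),
        Task("memory-management bookkeeping", error_tolerant=False),
    ]

    for task in workload:
        print(f"{task.name:<32} -> {dispatch(task)}")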