Feature Articles - 2017


Can AI Be Taught to Explain Itself?

Artificial Intelligence and Machine Learning have recently received a great deal of attention in the general press as well as in technical media, with coverage focusing on the successes of techniques such as Deep Learning in analyzing large data sets. A recent article in the New York Times Magazine addresses a key issue with such techniques: in many cases, users do not understand the basis for the decisions an AI system makes, and the system provides no explanation. The emerging field of Explainable AI (XAI) aims to address this by providing insight into the decision-making process. Without such insight, people may not have confidence that Deep Learning can be trusted to make unbiased decisions in applications as varied as autonomous vehicles and medical diagnostics.
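
As a concrete illustration of the kind of insight XAI methods aim to provide, the sketch below implements permutation feature importance, one simple, model-agnostic explanation technique: each input feature is scored by how much a model's accuracy drops when that feature's values are shuffled. This is a minimal sketch on synthetic data, not the method of any system described in the article.

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, rng=None):
    """Score each feature by the accuracy drop when its column is shuffled."""
    rng = rng or np.random.default_rng(0)
    baseline = np.mean(model(X) == y)            # accuracy on intact inputs
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])                # destroy feature j's signal
            drops.append(baseline - np.mean(model(Xp) == y))
        scores[j] = np.mean(drops)
    return scores                                 # bigger drop => more important

# Toy demo: the labels depend only on feature 0, so it should rank highest.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)
black_box = lambda X: (X[:, 0] > 0).astype(int)  # stand-in for a trained model
print(permutation_importance(black_box, X, y, rng=rng))
```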

Another recent New York Times article, “Building A.I. That Can Build A.I.,” addresses efforts by Google, Microsoft, and others to accelerate applications of AI by automating the machine learning process itself.


Survey of Neuromorphic Computing and Neural Networks in Hardware

A research group at Oak Ridge National Laboratory has provided an overview of various approaches to neuromorphic (or brain-inspired) computing and of their implications for future generations of computers.

The survey reviews artificial neurons and neural networks, learning algorithms, implementations in a variety of novel device technologies, software and design tools, and applications. Hardware approaches may be analog, digital, or a hybrid of the two, and embody non-von Neumann architectures that are massively parallel and energy-efficient. Learning may be supervised or unsupervised. Promising applications include a variety of pattern recognition tasks, language processing, and robotic control. There have been major advances in recent years, but the field is still relatively immature: there is not yet agreement on dominant trends. The future will likely include the development of neuromorphic coprocessors, with a range of new applications still to be developed.
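
For readers unfamiliar with the neuron models used in this field, the sketch below simulates a leaky integrate-and-fire (LIF) neuron, one of the simplest spiking-neuron abstractions commonly realized in neuromorphic hardware. The parameter values are illustrative choices of our own, not drawn from the survey.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3,
               v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron; returns spike times in seconds."""
    v, spike_times = v_rest, []
    for step, i_in in enumerate(input_current):
        # Euler step: leak toward the resting potential, plus injected current.
        v += (dt / tau) * (v_rest - v + i_in)
        if v >= v_thresh:              # threshold crossing emits a spike
            spike_times.append(step * dt)
            v = v_reset                # membrane potential resets
    return spike_times

# A constant supra-threshold drive yields a regular spike train.
drive = np.full(1000, 1.5)             # 1 s of constant input at dt = 1 ms
print(lif_neuron(drive))
```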

For further information, see here.


The Future of Computing Depends on Making It Reversible

All current transistor logic gates dissipate significant amounts of power, so much so that extracting the excess heat is becoming a major issue limiting computer performance. These gates are irreversible, both logically and physically: one cannot operate them in the reverse direction. However, there has long been academic interest in reversible computing, where power dissipation can be orders of magnitude smaller. Now that conventional electronics may be approaching its limits, research in reversible computing is being revived by groups worldwide, spanning a range of device technologies. In the September issue of IEEE Spectrum, Dr. Michael Frank of Sandia National Laboratories reviews the field of reversible computing, which may provide a future way around current limitations on computing performance.
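
To make logical reversibility concrete, the sketch below implements the Toffoli (controlled-controlled-NOT) gate, a standard universal reversible gate. Because the gate is a bijection on its three inputs, applying it twice recovers the original bits: no information is erased, so in principle the Landauer minimum of kT ln 2 of heat per erased bit need never be paid. This example is illustrative and is not taken from the Spectrum article.

```python
from itertools import product

def toffoli(a, b, c):
    """Toffoli gate: flip c iff both controls a and b are 1. Reversible."""
    return a, b, c ^ (a & b)

# Reversibility: the gate is its own inverse, so its truth table is a
# permutation of the 8 input states and no information is destroyed.
for bits in product((0, 1), repeat=3):
    assert toffoli(*toffoli(*bits)) == bits

# Universality example: with c = 0, the Toffoli computes AND(a, b) into c
# while preserving both inputs.
print(toffoli(1, 1, 0))  # -> (1, 1, 1)
```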

For further information, see here.

Another overview of Reversible Computing was recently presented in IEEE Computer here.


The Future of Transistors: What's After FinFETs?

Current state-of-the-art transistors at the 10 nm scale are based on the FinFET, a 3D transistor geometry. Looking ahead to 5 nm and smaller, future options may include complementary FETs, tunnel FETs, and vertical nanowires. Both lithography and materials pose significant challenges. Most major chip manufacturers and research organizations are exploring these options, including Samsung, IBM, Intel, GlobalFoundries, IMEC, and Applied Materials.

For further information, see here.


Exponential Laws of Computing Growth

Peter Denning and Ted Lewis recently described a generalized set of exponential laws for the growth of performance in information technology, extending beyond the chip-level scaling of Moore’s Law. They argue that repeated technology jumps at the device, system, and application levels have enabled this exponential growth, and that similar growth is likely to continue for decades to come. For the complete paper, see the feature article in the Jan. 2017 issue of the Communications of the ACM here.
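
The quantitative core of such exponential laws is compact: a quantity that doubles every T years grows by a factor of 2^(t/T) after t years. The short sketch below, using illustrative numbers of our own choosing rather than figures from the paper, shows how a two-year doubling time compounds over three decades.

```python
def growth_factor(t_years, doubling_time_years):
    """Growth of a quantity that doubles every `doubling_time_years` years."""
    return 2 ** (t_years / doubling_time_years)

# A Moore's-Law-style doubling every 2 years, compounded over 30 years:
print(f"{growth_factor(30, 2):,.0f}x")  # -> 32,768x
```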

The authors also summarize their analysis in the following video.

For other views on the past and future of Moore’s Law and related growth curves, see the 2015 Special Report on 50 Years of Moore’s Law in IEEE Spectrum here.


Modeling Vertical Tunnel FET

The tunnel FET, or TFET, has been identified as a possible future elementary transistor for fast, low-power applications, potentially surpassing CMOS performance. Min and Asbeck of the University of California, San Diego have simulated such a device in a novel vertical configuration and report the results in the IEEE Journal on Exploratory Solid-State Computational Devices and Circuits (JXCDC): “Compact Modeling of Distributed Effects in 2-D Vertical Tunnel FETs and Their Impact on DC and RF Performances.” Their results indicate a cutoff frequency of 800 GHz for a 20 nm channel length, even when parasitics are included. This is highly encouraging for high-frequency analog and digital applications.
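
For context, the cutoff frequency quoted above is the standard figure of merit at which a transistor's small-signal current gain falls to unity. To first order it is set by the transconductance and the gate capacitances; this is the textbook relation, not a result taken from the paper:

```latex
% First-order unity-current-gain (cutoff) frequency of a FET, where g_m is
% the transconductance and C_{gs}, C_{gd} are the gate-source and
% gate-drain capacitances:
f_T = \frac{g_m}{2\pi \left( C_{gs} + C_{gd} \right)}
```

Parasitic capacitances add directly to the denominator, which is why a projection of 800 GHz that already includes parasitics is notable.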


Memory-Driven Computing

Hewlett Packard Enterprise (HPE) has been developing a new paradigm for the high-performance computing associated with Big Data, in which an enormous multi-terabyte (TB) pool of memory is shared by many processors. This approach is known as Memory-Driven Computing (MDC), and the project is called “The Machine”. In the latest version of the prototype, 160 TB of fast non-volatile memory is connected to forty 32-core processors via photonic interconnects.
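
Some quick arithmetic (ours, not HPE's) conveys the scale of the shared pool in the prototype described above:

```python
total_memory_gb = 160_000            # 160 TB shared memory pool, in GB
processors, cores_per_processor = 40, 32

cores = processors * cores_per_processor
print(f"{cores} cores share one {total_memory_gb // 1000} TB pool, "
      f"about {total_memory_gb / cores:.0f} GB per core.")
# -> 1280 cores share one 160 TB pool, about 125 GB per core.
```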

An overview of the prototype is provided here.

Some potential Big-Data applications are discussed here.

For more information from HPE on Memory-Driven Computing, see here.


IEEE Rebooting Computing Initiative Prepares for the End of Moore’s Law

Tom Conte, professor of Computer Science at Georgia Institute of Technology and co-chair of the IEEE Rebooting Computing (IEEE RC) Initiative, discusses how the International Roadmap for Devices and Systems (IRDS™), supported by IEEE RC, intends to guide the computing industry beyond the limitations of Moore’s Law.


Machine Learning Overview

Semiconductor Engineering presents an overview of recent trends in machine learning, based in part on a recent market survey. The field is undergoing a renaissance, with a variety of diverse approaches at both the hardware and software levels. Systems may include heterogeneous mixtures of CPUs, GPUs, DSPs, FPGAs, and ASICs. Optimized approaches may differ between the initial learning (training) phase and the subsequent interpretation phase (inference and estimation). Major current applications include autonomous vehicles and cloud-based artificial intelligence, but many other applications are starting to develop.
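
The split between the two phases mentioned above is visible even in the smallest model: training makes many passes over the data while updating weights, whereas inference is a single fixed forward pass with frozen weights, which is why the two phases often map to different hardware. Below is a minimal logistic-regression sketch of our own on synthetic data, not an example from the article.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, lr=0.1, epochs=200):
    """Learning phase: many passes over the data, updating weights."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)   # logistic-loss gradient
        w -= lr * grad
    return w

def infer(w, X):
    """Inference phase: one cheap forward pass with frozen weights."""
    return (sigmoid(X @ w) > 0.5).astype(int)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X @ np.array([2.0, -1.0]) > 0).astype(int)      # synthetic labels
w = train(X, y)                                      # expensive, done once
print(np.mean(infer(w, X) == y))                     # cheap, done per query
```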

Read more.


Improving Energy Efficiency and Exploiting Parallelism with Processing in Memory (PIM) and Near-Data Processing (NDP)

Computing Now, Guest Editors’ Introduction by Kevin Rudd and Richard Murphy
The authors point out that moving data between logic and memory modules is now the major source of delay and power consumption. Given new memory technologies, alternative architectures such as PIM and NDP offer opportunities for substantial enhancement in performance as well as energy efficiency for memory-intensive problems, such as those associated with Big Data.
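
To see why data movement dominates, consider rough, order-of-magnitude energy figures of the kind often quoted in the architecture literature. The values below are illustrative choices of our own, not numbers from the article, and published estimates vary widely with process node and memory technology.

```python
# Illustrative, order-of-magnitude energy costs in picojoules.
energy_pj = {
    "64-bit arithmetic op (on-chip)": 20,
    "64-bit word from off-chip DRAM": 1300,
}

op_pj, dram_pj = energy_pj.values()
print(f"Fetching a word from DRAM costs roughly {dram_pj / op_pj:.0f}x "
      "the arithmetic that uses it, so moving compute to the data "
      "(PIM/NDP) attacks the dominant cost.")
```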

For the article and a list of further articles and resources, see here.


Forget Scaling. Moore's Law Panel Talks Power Consumption.

A panel of semiconductor experts at the South by Southwest festival sees a future that favors power-efficient computers instead of ones with smaller transistors.

Read the article here.


Beyond Moore's Law

In this Engadget article, Tom Conte examines Moore’s Law, the evolution of computer circuitry, and the switch to multicore. He argues that in order to keep delivering computing advances, a fundamentally different approach to computing is required. The IEEE Rebooting Computing Initiative was created to study these next-generation alternatives.

Read the article here.


What’s Next for Transistors and Systems

The online magazine Semiconductor Engineering interviewed several leaders in chip fabrication, focusing on future trends in nanodevices, alternative technologies, neuromorphic architectures, and advanced packaging. The upshot is that a variety of techniques will be needed for different applications over the next decade.

Read the article here.


Rebooting Computing: Developing a Roadmap for the Future of the Computer Industry

Members of the RC Steering Committee contributed an article to a special issue of Mondo Digitale, a trade magazine sponsored by the Italian Association for Informatics and Automatic Computation. The article reviewed how both the RC Initiative and the new IRDS™ Roadmap are working to reinvent the field of computing to maintain exponential enhancement in performance beyond the end of Moore’s Law scaling. Most of the article is in English.

Read the article here.

Other articles in this issue are available here.