What's New

Featured Article - July 2021

Advances in Machine Learning and Deep Neural Networks

The Proceedings of the IEEE published a special issue (May 2021) on Machine Learning (ML).  The Guest Editors (R. Chellappa, S. Theodoridis, A. van Schaik) present an overview of the articles in this special issue.

ML has made great strides in the past decade and is now one of the major applications of computing.  In most cases, this progress has been achieved with deep neural networks, whose interconnection weights are learned automatically through extensive training on large quantities of real data.  That training is extremely compute-intensive, and the cost may limit the future evolution of ML.
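
To make the idea of training concrete, here is a minimal sketch (not taken from the special issue) of a tiny fully connected network whose weights are adjusted by gradient descent on example data, written in Python with PyTorch; the network size, data, and hyperparameters are arbitrary placeholders.

    # Minimal sketch of learning the interconnections by training: gradient descent
    # repeatedly adjusts the network's weights to reduce a loss on example data.
    # All sizes and data here are illustrative placeholders.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    x = torch.randn(256, 16)                 # 256 training examples, 16 features each
    y = torch.randn(256, 1)                  # target values to fit

    model = nn.Sequential(                   # a small fully connected network
        nn.Linear(16, 32), nn.ReLU(),
        nn.Linear(32, 1),
    )
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    for epoch in range(100):                 # repeated passes over the data
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()                      # gradients with respect to every weight
        opt.step()                           # nudge the weights downhill
    print(f"final loss: {loss.item():.4f}")

Real deep networks differ mainly in scale, with billions of weights and training examples, which is what makes the training so compute-intensive.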

The editors have selected 14 articles on a wide range of research approaches to improve the performance of ML systems.  These cover theory, applications, and hardware implementations.  Topics include causal inference, anomaly detection, neuromorphic chips, and application to medical imaging.  The Table of Contents for the entire issue is available here.

11 Ways to Reduce AI Energy Consumption

Bryon Moyer, Semiconductor Engineering

Specialized AI chips are becoming major components of computer systems, from large data centers to small edge devices.  Reducing energy consumption is critically important at both extremes.

This article reviews 11 approaches to energy reduction, spanning devices, circuits, architectures, and software.  Most AI workloads are deep-learning algorithms running on neural networks, but there is a wide range of ways to execute them efficiently; a brief sketch of two of the approaches (reduced-precision data formats and pruning for sparsity) follows the list below.

  1. Smaller models.
  2. Moving less data.
  3. Less computing.
  4. Batching helps.
  5. Data formats matter.
  6. Sparsity can help.
  7. Use compression.
  8. Focus on events.
  9. Use analog circuitry.
  10. Use photons instead of electrons.
  11. Optimize for your hardware and software.
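
As a concrete illustration of items 5 and 6 (this code is not from the article), the sketch below quantizes a random weight matrix from 32-bit floats to 8-bit integers and prunes near-zero weights; the matrix size, weight distribution, and pruning threshold are arbitrary assumptions.

    # Illustrative sketch of two of the listed techniques, using made-up numbers:
    #   5. Data formats matter  -> quantize float32 weights to int8 (4x smaller)
    #   6. Sparsity can help    -> prune near-zero weights so they can be skipped
    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.normal(0.0, 0.2, size=(256, 256)).astype(np.float32)

    # Simple symmetric linear quantization to int8.
    scale = np.abs(weights).max() / 127.0
    w_int8 = np.round(weights / scale).astype(np.int8)
    print("memory:", weights.nbytes, "->", w_int8.nbytes, "bytes")

    # Magnitude pruning: small weights contribute little and can be dropped,
    # letting sparse-aware hardware or kernels skip those multiplications.
    pruned = np.where(np.abs(weights) < 0.1, 0.0, weights)
    density = np.count_nonzero(pruned) / pruned.size
    print(f"nonzero weights kept: {density:.0%}")

    # The quantized weights still approximate the originals.
    max_err = np.abs(w_int8.astype(np.float32) * scale - weights).max()
    print(f"max quantization error: {max_err:.4f}")

Both transformations shrink the data that must be stored and moved, which is where much of an accelerator's energy budget goes.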

Obviously, one cannot use all of these in the same AI processor.  But for a specific edge device running a specific AI application, several of these approaches may be combined to achieve orders-of-magnitude reductions in power consumption without sacrificing performance.

Technology Spotlight

The Hard Tech Revolutionizing Computing:  A Guided Journey of IBM Research

Dr. Dario Gil, Director of IBM Research

IBM Research has one of the world’s largest programs in R&D of advanced computing.  In May, its Director, Dr. Dario Gil, led an online presentation featuring interviews with leading IBM researchers in several areas of computing research, which is now available as a YouTube video.

This video is segmented into several parts:

1)  Opening remarks

2)  Overview of Advanced Semiconductor Fabrication

3)  Quantum Computing

4)  AI Language Processing Applied to Coding

5)  The Hybrid Cloud

Regarding semiconductor fabrication, Dr. Gil and colleagues described the recent 7 nm and 5 nm processes, leading up to the newly announced 2 nm process.  They also featured IBM Z servers for data centers and cloud computing.

Quantum computing at IBM is based on the developing technology of superconducting qubits, cooled to ultralow temperatures of about 15 millikelvin.  These quantum processors are interfaced with and controlled by classical computers, which in turn can be accessed remotely by researchers around the world.  Such hybrid classical/quantum computing systems promise tremendous performance enhancements in future computing.
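
To make the hybrid classical/quantum workflow concrete (this example is not from the video), here is a minimal sketch using the open-source Qiskit SDK, assuming its 0.x-era API: a classical Python program builds a circuit and submits it to a backend, with a local simulator standing in for a remote superconducting machine.

    # A classical program describes a two-qubit circuit and submits it for execution.
    # Uses Qiskit's 0.x-era API; the Aer simulator stands in for real hardware,
    # which IBM Quantum account holders can reach over the cloud instead.
    from qiskit import QuantumCircuit, Aer, execute

    qc = QuantumCircuit(2, 2)
    qc.h(0)                       # put qubit 0 into superposition
    qc.cx(0, 1)                   # entangle qubit 0 with qubit 1
    qc.measure([0, 1], [0, 1])    # read both qubits into classical bits

    backend = Aer.get_backend("qasm_simulator")
    counts = execute(qc, backend, shots=1024).result().get_counts()
    print(counts)                 # roughly equal counts of '00' and '11' expected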

The use of AI for natural language translation is well known.  IBM has a research project examining legacy computer code, of which millions of lines exist.  Similar language-translation techniques are being used to translate, categorize, and reorganize this code, enabling it to be updated without massive programmer involvement.  The same techniques are also being applied to chemistry databases, which are full of non-text symbols and diagrams.
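
The underlying idea, treating source code as just another language of tokens, can be sketched as follows; this toy example (not IBM's system) tokenizes a legacy-style statement and feeds it to a generic, untrained encoder-decoder Transformer in PyTorch, with the statement, vocabulary, and model sizes all invented for illustration.

    # Toy illustration of code-as-language: split a legacy-style statement into
    # tokens, map them to ids, and pass the sequence through a generic
    # encoder-decoder Transformer of the kind used for machine translation.
    # Everything here (statement, vocabulary, model size) is made up; a real
    # system would be trained on large paired corpora of source code.
    import torch
    import torch.nn as nn

    legacy_line = "MOVE TOTAL-AMOUNT TO GRAND-TOTAL"
    tokens = legacy_line.split()                            # toy whitespace tokenizer
    vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
    src_ids = torch.tensor([[vocab[t]] for t in tokens])    # shape (seq_len, 1)

    d_model = 32
    embed = nn.Embedding(len(vocab), d_model)
    model = nn.Transformer(d_model=d_model, nhead=4,
                           num_encoder_layers=2, num_decoder_layers=2)

    src = embed(src_ids)                 # (src_len, 1, d_model)
    tgt = torch.zeros(6, 1, d_model)     # placeholder target sequence for the decoder
    out = model(src, tgt)                # (tgt_len, 1, d_model); after training this
    print(out.shape)                     # would be projected onto target-language tokens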

Finally, IBM researchers are developing universal high-level techniques for interacting with the “hybrid cloud” as if it were a single infinite computer.