What's New

Feature Article

 

11 Ways to Reduce AI Energy Consumption

Bryon Moyer, Semiconductor Engineering

Specialized AI chips are becoming major components of computer systems, from large data centers to small edge devices. Reducing energy consumption is critically important at both extremes.

This article reviews 11 approaches to energy reduction, spanning devices, circuits, architectures, and software. Most target deep learning on neural networks, but they represent a wide range of ways to make it more efficient.

  1. Smaller models.
  2. Moving less data.
  3. Less computing.
  4. Batching helps.
  5. Data formats matter.
  6. Sparsity can help.
  7. Use compression.
  8. Focus on events.
  9. Use analog circuitry.
  10. Use photons instead of electrons.
  11. Optimize for your hardware and software.

Obviously, not all of these can be combined in the same AI processor. But for a specific edge device running a specific AI application, several of them together may achieve orders-of-magnitude reductions in power consumption without sacrificing performance, as the sketch below illustrates for one of them.
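
To make one of these concrete, here is a minimal sketch of item 5 (data formats), using symmetric post-training int8 quantization. It is written in Python with NumPy purely for illustration; the layer size, the per-tensor scaling scheme, and the random data are assumptions, not the article's implementation.

  import numpy as np

  # Minimal sketch: symmetric post-training quantization of one fp32
  # layer to int8. Shapes and data are made up for illustration.
  rng = np.random.default_rng(0)
  w = rng.standard_normal((256, 256)).astype(np.float32)   # weights
  x = rng.standard_normal(256).astype(np.float32)          # activations

  # One scale per tensor maps the largest magnitude onto [-127, 127].
  sw = np.abs(w).max() / 127.0
  sx = np.abs(x).max() / 127.0
  w_q = np.round(w / sw).astype(np.int8)
  x_q = np.round(x / sx).astype(np.int8)

  # The multiply-accumulate runs in cheap integer arithmetic; the two
  # scales are folded back in once at the end.
  y_q = (w_q.astype(np.int32) @ x_q.astype(np.int32)) * (sw * sx)
  y_fp = w @ x
  print("max abs error:", np.abs(y_q - y_fp).max())

Storing and moving 8-bit integers instead of 32-bit floats cuts memory traffic roughly 4x (item 2), and integer multiply-accumulates cost substantially less energy than floating-point ones.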

 

Final Report of US National Security Commission on Artificial Intelligence (NSCAI)

In 2018, the US Congress established an independent commission, the NSCAI, to evaluate the impact of AI technologies on national security. Its chair is Dr. Eric Schmidt, former CEO and Executive Chairman of Google.

The commission recently issued an extensive final report of some 750 pages, covering a wide range of topics related to AI, defense, and cybersecurity. View the Table of Contents, which includes links to the various chapters, and view summarized parts of the report here.

The report makes a number of recommendations to the US government regarding increased funding for R&D and education in AI and microelectronics, which it states are necessary if the US is to remain a world leader in these fields. Given its focus on national security, it points out the importance of these fields for national defense and intelligence, as well as for commercial electronic technologies. Each chapter includes a “Blueprint for Action” with specific suggestions and timetables, covering not only government actions but also recommendations for industry and academia.

 

Technology Spotlight

High-Performance Computing After Moore’s Law

Tom Conte, Georgia Tech

The virtual Supercomputing Conference SC20, co-sponsored by the IEEE Computer Society, was held Nov. 16-19, 2020. Many of the invited talks at SC20 are now available on YouTube.

One of these talks was presented by Prof. Tom Conte of Georgia Tech, and is available here. Prof. Conte is the director of the Georgia Tech Center for Research into Novel Computing Hierarchies (CRNCH). He was also a founding co-chair of the IEEE Rebooting Computing Initiative and the International Roadmap for Devices and Systems (IRDS), as well as a former president of the IEEE Computer Society.

Prof. Conte presented an overview of the trends in future computing, building on topics addressed by Rebooting Computing, IRDS, and the Computing Community Consortium (CCC).

More specifically, he indicated that while the traditional Moore’s Law era was driven by straightforward shrinking of transistor dimensions, future advances in computing technology will come from four distinct levels. The first level, “More Moore,” will use 3D integration to keep increasing the number of transistors on a chip even as 2D scaling slows. The second level will incorporate new devices and circuits in ways largely hidden from the end user; these might include superconducting circuits, reversible circuits, new memory technologies, or circuits tolerant of noisy devices. The third level involves significant architectural changes, such as a variety of accelerator chips (GPU, TPU, FPGA, etc.), which require support up through the software stack to coordinate properly with the CPU.

Finally, the fourth level, with the greatest changes, would incorporate non-von Neumann computing paradigms for certain classes of problems. These include not only a revival of analog computing, but also the future introduction of quantum computing and thermodynamic computing.

This dynamic landscape will enable continued improvements in computer performance for the next decade and beyond.

 

US Dept. of Energy Exascale Computing Project (ECP)

Exascale computers are the next generation of supercomputers being developed for advanced simulations and scientific computing. These computers exploit massive computational parallelism to achieve 10^18 floating-point operations per second (FLOPS), where exa- is the metric prefix for 10^18. They have 1000x the computational capacity of the earlier generation of petaFLOPS machines, and about 50x that of current state-of-the-art systems. Major projects are being developed by government/industry collaborations in the US, China, Japan, and the European Union, with system delivery projected in the next few years.
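
As a back-of-the-envelope illustration of how massive parallelism reaches exascale, consider the following Python sketch; the node count, accelerators per node, and per-accelerator throughput are invented round numbers, not the configuration of any actual system.

  # Illustrative arithmetic only; all numbers below are assumptions.
  flops_per_gpu = 25e12         # ~25 teraFLOPS of FP64 per accelerator
  gpus_per_node = 4             # accelerators per compute node
  nodes = 10_000                # compute nodes in the system

  system_flops = flops_per_gpu * gpus_per_node * nodes
  print(f"{system_flops:.1e} FLOPS = {system_flops / 1e18:.1f} exaFLOPS")
  # -> 1.0e+18 FLOPS = 1.0 exaFLOPS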

One such national project is the Exascale Computing Project (ECP) coordinated by the US Dept. of Energy. A recent overview of this project was presented by Dr. Lori Diachin of Lawrence Livermore National Laboratory (LLNL), the Deputy Director of the ECP. The 17-minute video is available here.

Dr. Diachin described the system being constructed for LLNL, known as El Capitan, with deployment projected in 2023 and performance of 1.5 exaFLOPS. It will be highly heterogeneous at both the hardware and software levels, with thousands of CPU and GPU chips from AMD, integrated by Cray (HPE). Designing for fast, high-bandwidth data exchange is critical to achieving the required performance.

These exascale computers will be applied to a variety of computationally difficult problems, including climate change, nuclear weapons, materials development, and big data analytics.