- New fast non-volatile 2D heterostructure memory - Nanosecond read/write times based on charge storage in graphene layers.
- Classical control chips for quantum computing - Intel, Google, and Microsoft develop cryogenic transistors for even colder qubits.
- New wafer-scale chip for AI - 300 mm wafer with 7 nm technology holds more than 2 trillion transistors and 40 GB of memory.
- Deep learning at the speed of light - New integrated optical chip serves as an analog neural network for AI.
- European supercomputer consortium lays out future roadmap - Exascale systems targeted for 2023-2026.
- Los Alamos develops binary-to-DNA translator - New software package for long-term data storage and retrieval.
- IBM reports improved performance of high-speed, high-density memories - Spin-transfer-torque MRAM embedded in 14 nm circuits demonstrated with nanosecond switching.
- 3D integration of heterogeneous chips for exascale processors - CEA (France) develops chiplet-level packaging with increased bandwidth and density.
- Graphene-based memristors for improved neural networks - Penn State researchers demonstrate artificial synapses with multiple synaptic weights.
- Intel develops new cryptography protocols - Designed to resist attacks by future quantum computers.
- New memristor device acts like a biological neuron - May provide a basic element for neuromorphic computing.
- Record speeds for AI inferencing - Nvidia GPU exceeds prior benchmark performance.
- Analog optical computing chip for neural networks - Carries out multiply-accumulate computations in a silicon photonic chip for AI.
11 Ways to Reduce AI Energy Consumption
Bryon Moyer, Semiconductor Engineering
Specialized AI chips are becoming major components of computer systems, from large data centers to small edge devices. Reducing energy consumption is critically important at both extremes.
This article reviews 11 approaches to energy reduction, spanning the range from devices to circuits to architectures to software. Most of these target deep-learning algorithms running on neural networks, for which there is a wide range of ways to improve efficiency:
- Smaller models.
- Moving less data.
- Less computing.
- Batching helps.
- Data formats matter.
- Sparsity can help.
- Use compression.
- Focus on events.
- Use analog circuitry.
- Use photons instead of electrons.
- Optimize for your hardware and software.
Obviously, one cannot use all of these in the same AI processor. But for a specific edge device running a specific AI application, several of these techniques may be combined to reduce power consumption by orders of magnitude without sacrificing performance or speed. Two of the items above, reduced-precision data formats and sparsity, are illustrated in the sketch below.
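As a concrete illustration of those two items (a minimal sketch, not code from the article), the following numpy snippet quantizes float32 weights to int8 and prunes small weights to zero; the tensor size, scale factor, and pruning threshold are arbitrary choices for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.2, size=(256, 256)).astype(np.float32)

# "Data formats matter": quantize float32 weights to int8 with a
# per-tensor scale, cutting memory and data movement by 4x.
scale = np.abs(weights).max() / 127.0
w_int8 = np.round(weights / scale).astype(np.int8)
w_dequant = w_int8.astype(np.float32) * scale

# "Sparsity can help": prune weights below a magnitude threshold
# (0.05 is an arbitrary demo value), so hardware that skips zeros
# performs fewer multiply-accumulate operations.
w_sparse = np.where(np.abs(w_dequant) < 0.05, 0.0, w_dequant)

print(f"memory: {weights.nbytes} B -> {w_int8.nbytes} B")
print(f"mean quantization error: {np.abs(weights - w_dequant).mean():.5f}")
print(f"fraction pruned to zero: {(w_sparse == 0).mean():.1%}")
```

In a real deployment the quantization scale would be calibrated on representative data, and a pruned model would typically be fine-tuned to recover accuracy.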
Final Report of US National Security Commission on Artificial Intelligence (NSCAI)
Last year, the US government appointed an independent commission, the NSCAI, to evaluate the impact of AI technologies on national security. The Chair is Dr. Eric Schmidt, formerly the CEO and Chairman of Google.
The commission recently issued an extensive 750-page final report covering a wide range of topics related to AI, defense, and cybersecurity. View the Table of Contents, which includes links to the various chapters. Summarized parts of the report are available here.
The report makes a number of recommendations to the US government, calling for increased funding for R&D and education in AI and microelectronics, which it states is necessary if the US is to remain a world leader in these fields. Given its focus on national security, it points out the importance of these fields for national defense and intelligence, as well as for commercial electronic technologies. Each chapter includes a “Blueprint for Action” with specific suggestions and timetables, covering not only government actions but also recommendations for industry and academia.
High-Performance Computing After Moore’s Law
One recent talk on this topic was presented by Prof. Tom Conte of Georgia Tech and is available here. Prof. Conte is the director of the Georgia Tech Center for Research into Novel Computing Hierarchies (CRNCH). He was also a founding co-chair of the Rebooting Computing Initiative and of the International Roadmap for Devices and Systems (IRDS), and is a former president of the IEEE Computer Society.
Prof. Conte presented an overview of the trends in future computing, building on topics addressed by Rebooting Computing, IRDS, and the Computing Community Consortium (CCC).
More specifically, he indicated that while the traditional Moore’s Law was dominated by simple reduction in transistor dimensions, future developments in computing technology will be driven by four distinct approaches. The first level, “More Moore”, will use 3D integration to maintain increasing numbers of transistors on a chip, even as their shrinkage slows. The second level will incorporate new devices and circuits in ways that would likely be hidden from the end user. These might include superconducting circuits, reversible circuits, new memory technologies, or circuits tolerant of noisy devices. The third level would involve significant architectural changes, such as a variety of accelerator chips (GPU, TPU, FPGA, etc.), which require coordination with the CPU up through the software level.
Finally, the greatest changes would be incorporating non-von Neumann computing paradigms for certain types of computing problems. These include not only a revival of analog computing, but also the future introduction of quantum computing and thermodynamic computing.
This dynamic landscape will enable continued improvements in computer performance for the next decade and beyond.
US Dept. of Energy Exascale Computing Project (ECP)
Exascale computers are the next generation of supercomputers being developed for advanced simulations and scientific computing. These computers take advantage of massive computational parallelism to achieve 10^18 floating-point operations per second (FLOPS), where exa- is the metric prefix for 10^18. They have 1000x more computational capacity than the earlier generation of petaflops computers, and about 50x more than current state-of-the-art systems. Major projects are being developed by government/industry collaborations in the US, China, Japan, and the European Union, with system delivery projected in the next few years.
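To make these scale factors concrete, here is a small back-of-the-envelope sketch; the 10^21-operation workload is a hypothetical example, not a figure from any of these projects:

```python
# Time-to-solution for a fixed workload at different machine scales.
PETA = 1e15  # 1 petaFLOPS, in floating-point operations per second
EXA = 1e18   # 1 exaFLOPS = 1000x petaFLOPS

total_ops = 1e21  # hypothetical job requiring 10^21 operations

for name, flops in [("1 petaFLOPS", PETA), ("1 exaFLOPS", EXA)]:
    seconds = total_ops / flops
    print(f"{name}: {seconds:,.0f} s ({seconds / 3600:,.1f} h)")

# 1 petaFLOPS: 1,000,000 s (about 11.6 days)
# 1 exaFLOPS:  1,000 s (about 17 minutes)
```

The same 1000x ratio separates a multi-week run from a coffee-break run, which is what makes exascale systems qualitatively different for simulation workflows.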
One such national project is the Exascale Computing Project (ECP) coordinated by the US Dept. of Energy. A recent overview of this project was presented by Dr. Lori Diachin of Lawrence Livermore National Laboratory (LLNL), the Deputy Director of the ECP. The 17-minute video is available here.
Dr. Diachin described the system being constructed for LLNL, known as El Capitan, with deployment projected in 2023 and performance of 1.5 exaFLOPS. The system will be highly heterogeneous at both the hardware and software levels, with thousands of CPU and GPU chips from AMD, integrated by Cray (HPE). Design for fast, high-bandwidth data exchange among these processors is critical to achieving the required performance.
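Why data exchange dominates can be seen with a simple roofline-style estimate; the peak-compute and bandwidth figures below are illustrative assumptions, not El Capitan specifications:

```python
# A kernel is memory-bound when its arithmetic intensity (FLOPs per
# byte moved) falls below the machine balance. Illustrative numbers
# only -- not El Capitan specifications.
peak_flops = 50e12    # assumed accelerator peak: 50 TFLOPS
bandwidth = 2e12      # assumed memory bandwidth: 2 TB/s

balance = peak_flops / bandwidth  # FLOPs/byte needed to stay compute-bound
print(f"machine balance: {balance:.0f} FLOPs/byte")

# Example: single-precision AXPY (y = a*x + y) does 2 FLOPs per
# 12 bytes moved (read x, read y, write y, 4 bytes each).
axpy_intensity = 2 / 12
attainable = min(peak_flops, axpy_intensity * bandwidth)
print(f"AXPY attainable: {attainable / 1e12:.2f} TFLOPS "
      f"of {peak_flops / 1e12:.0f} TFLOPS peak")
```

Under these assumptions a bandwidth-bound kernel reaches well under 1% of peak compute, so improving memory and interconnect bandwidth, rather than raw FLOPS, is what unlocks performance for data-intensive code.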
These exascale computers will be applied to a variety of computationally difficult problems, including climate change, nuclear weapons, materials development, and big data analytics.
- 2020 CCC Workshop on Physics and Engineering Issues in Reversible/Adiabatic Classical Computing
- Rebooting Computing Video Overview
- IEEE Future Directions
- IEEE Future Directions Blog
- Computing in Science and Engineering on the End of Moore's Law
- IEEE Journal of Exploratory Solid-State Computational Devices and Circuits (JXCDC)
- Arch2030 Workshop Report (PDF, 948 KB)
- Workshop on Neuromorphic Computing
- Workshop on Beyond CMOS Technology
- Update on National Strategic Computing Initiative (NSCI)
- RC White Paper on Nanocomputers
- IEEE Computer Magazine on Rebooting Computing