Next-Gen Chips Will Be Powered From Below - Redesign of power lines in state-of-the-art silicon chips will enable increased energy efficiency, permitting Moore's Law to continue a bit longer.
Cerebras' Tech Trains “Brain-Scale” AIs - A dedicated AI training system, containing a wafer-scale chip, further redesigns the memory architecture to enhance the scale and speed of machine learning.
Can Software Performance Engineering Save Us From the End of Moore's Law? - Although custom optimization of software for modern platforms can be difficult, performance improvements can be quite substantial.
The Future of Deep Learning is Photonic - Integrated optical neural networks may perform analog matrix multiplications with much less energy than electronics.
Can Cryptocurrencies Be More Energy-Efficient? - Blockchain computations are notoriously wasteful of energy, but alternatives are being developed.
New fast non-volatile 2D heterostructure memory - Nanosecond read or write time based on charge storage in graphene layers.
Classical control chips for quantum computing - Intel, Google, and Microsoft develop cryogenic transistors for even colder qubits.
New Wafer-Scale Chip for AI - 300 mm wafer with 7nm technology has more than 2 trillion transistors and 40 GBytes of memory.
Deep Learning at the Speed of Light - New integrated optical chip as analog neural network for AI.
European Supercomputer Consortium Lays out Future Roadmap - Exascale systems targeted for 2023-2026.
Los Alamos Develops Binary-to-DNA Translator - New software package for long-term data storage and retrieval.
Advances in Machine Learning and Deep Neural Networks
The Proceedings of the IEEE has a special issue (May 2021) on Machine Learning (ML). The Guest Editors (R. Chellappa, S. Theodoridis, A. van Schaik) have presented an overview of the articles in this special issue.
ML has made great strides in the past decade and is now one of the major applications of computing. In most cases, this has been achieved using deep neural networks, where the interconnections between the neurons are obtained automatically by extensive training using large quantities of real data. This is extremely compute-intensive, and may be limiting the future evolution of ML.
The editors have selected 14 articles on a wide range of research approaches to improve the performance of ML systems. These cover theory, applications, and hardware implementations. Topics include causal inference, anomaly detection, neuromorphic chips, and applications to medical imaging. The full Table of Contents for the issue is available online.
Bryon Moyer, Semiconductor Engineering
Specialized AI chips are becoming major components of computer systems, from large data centers to small edge devices. Reducing energy consumption is critically important at both extremes.
This article reviews 11 approaches to energy reduction, spanning devices, circuits, architectures, and software. Most target deep learning algorithms running on neural networks, but there is a wide range of ways to execute them efficiently:
- Smaller models.
- Moving less data.
- Less computing.
- Batching helps.
- Data formats matter.
- Sparsity can help.
- Use compression.
- Focus on events.
- Use analog circuitry.
- Use photons instead of electrons.
- Optimize for your hardware and software.
No single AI processor can employ all of these techniques at once. But for a specific edge device running a specific AI application, combining several of them may yield orders-of-magnitude reductions in power consumption without sacrificing performance or speed.
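Two of the techniques above, "data formats matter" and "sparsity can help", can be sketched concretely. The following is a minimal illustrative example (not taken from the article; all names and numbers are hypothetical) showing how int8 quantization shrinks the data that must be moved, and how pruning small weights reduces the number of multiplies a sparse-aware accelerator would perform.

```python
import numpy as np

# Hypothetical dense-layer weights; shapes and values are illustrative only.
rng = np.random.default_rng(0)
weights = rng.normal(size=(64, 64)).astype(np.float32)

# "Data formats matter": quantize float32 weights to int8 with a per-tensor scale.
scale = np.abs(weights).max() / 127.0
w_int8 = np.round(weights / scale).astype(np.int8)
# int8 storage is 4x smaller than float32, so 4x less data is moved per inference.
assert w_int8.nbytes * 4 == weights.nbytes

# "Sparsity can help": prune the smallest 90% of weights to zero; a
# sparsity-aware engine can then skip those multiply-accumulates entirely.
threshold = np.quantile(np.abs(weights), 0.9)
w_sparse = np.where(np.abs(weights) >= threshold, weights, 0.0)
kept = np.count_nonzero(w_sparse)
print(f"multiplies needed: {kept} of {weights.size}")
```

Whether such pruning preserves accuracy depends on the model and usually requires retraining, which is why the article treats these as a menu of options rather than a single recipe.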
The Hard Tech Revolutionizing Computing: A Guided Journey of IBM Research
Dr. Dario Gil, Director of IBM Research
IBM Research has one of the world's largest programs in R&D of advanced computing. In May, its director, Dr. Dario Gil, led an online presentation featuring interviews with leading IBM researchers in several areas of computing research, which is now available as a YouTube video.
This video is segmented into several parts:
1) Opening remarks
2) Overview of Advanced Semiconductor Fabrication
3) Quantum Computing
4) AI Language Processing Applied to Coding
5) The Hybrid Cloud
Regarding semiconductor fabrication, Dr. Gil and colleagues described recent 7 nm and 5 nm processes, leading to the newly developing 2 nm process. They also featured IBM System Z servers for data centers and cloud computing.
Quantum computing at IBM is based on a developing technology of superconducting qubits, cooled to ultralow temperatures of about 15 millikelvin (0.015 K). These systems are interfaced with and controlled by classical computers, which in turn can be accessed remotely by researchers around the world. Such hybrid classical/quantum computing systems promise tremendous performance enhancements in future computing.
The use of AI for natural language translation is well known. IBM has a research project applying similar language-translation techniques to legacy computer code, of which millions of lines exist, to translate, categorize, and reorganize it so that it can be updated without massive programmer involvement. Similar techniques are also being applied to chemistry databases, which are full of non-text symbols and diagrams.
Finally, IBM researchers are working on developing universal high-level techniques to interact with the “hybrid cloud” as if it were a single infinite computer.
- 2020 CCC Workshop on Physics and Engineering Issues in Reversible/Adiabatic Classical Computing
- Rebooting Computing Video Overview
- IEEE Future Directions
- IEEE Future Directions Blog
- Computing in Science and Engineering on the End of Moore's Law
- IEEE Journal of Exploratory Solid-State Computational Devices and Circuits (JXCDC)
- Arch2030 Workshop Report (PDF, 948 KB)
- Workshop on Neuromorphic Computing
- Workshop on Beyond CMOS Technology
- Update on National Strategic Computing Initiative (NSCI)
- RC White Paper on Nanocomputers
- IEEE Computer Magazine on Rebooting Computing