Feature Articles - 2018

 

2018 Low-Power Image Recognition Challenge

Report on LPIRC 2018, held in Salt Lake City, Utah, 18 June 2018.

S. Alyamkin et al. (41 co-authors)

A preprint of the paper is available here (PDF, 812 KB).

Computer vision is widely used in battery-powered systems, and the need for low-power computer vision is becoming increasingly important. The international LPIRC competition has been held annually since 2015, co-sponsored by IEEE Rebooting Computing. For a video introduction to LPIRC, see here.

The winning entries in the competition identify the best technologies that can classify and detect objects in images both efficiently (short execution time and low energy consumption) and accurately (high precision). Over the past four years, the winners’ scores have improved by more than a factor of 24, with further improvement expected in the future.
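As a rough illustration of how accuracy and energy can be combined into a single figure of merit (the exact LPIRC scoring formula is defined by the organizers; the accuracy-divided-by-energy form and the numbers below are only assumptions for illustration), consider this short Python sketch:

    # Illustrative only: a simple accuracy-per-energy score, assuming the
    # competition rewards detection accuracy (e.g., mean average precision)
    # divided by the energy consumed while processing the test images.
    def accuracy_per_energy(mean_average_precision, energy_wh):
        """Higher is better: accurate detections at low energy cost."""
        return mean_average_precision / energy_wh

    # Hypothetical entries: a 24x score improvement can come from better
    # accuracy, lower energy, or both.
    baseline = accuracy_per_energy(mean_average_precision=0.20, energy_wh=2.0)  # 0.10
    winner = accuracy_per_energy(mean_average_precision=0.48, energy_wh=0.2)    # 2.40
    print(winner / baseline)  # 24.0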

The 2018 competition was co-sponsored by Google and Facebook, as well as Purdue, Duke, and UNC. This paper reviews LPIRC 2018 by describing the three different tracks and the winners’ solutions. Winners included teams from Qualcomm, Seoul National University, and ETRI/KPST (S. Korea).

 

A Domain-Specific Architecture for Deep Neural Networks

N. Jouppi, et al., Google

Communications of the ACM, September 2018

Google engineers describe how their new Tensor Processing Unit (TPU) chips accelerate neural network applications, running up to 30 times faster and up to 80 times more energy efficiently than competing chips.

Read the article here.

These improvements have been applied to real-time data center applications of deep neural networks, such as image recognition, language translation, and search. The applications are designed using open-source TensorFlow software.
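As a minimal sketch of how such neural network workloads are expressed in open-source TensorFlow (this toy image classifier is only an illustration, not one of the production models described in the paper):

    # Minimal TensorFlow/Keras image classifier, for illustration only;
    # the data center models described in the paper are far larger.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # The same model definition can be trained or served on CPUs, GPUs,
    # or TPU accelerators without changing the network description.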

A brief video of some of the authors discussing this work is also available here.

 

IRDS™ Roadmap: Beyond CMOS

While CMOS will continue to be central to mainstream electronics in the next decade, the International Roadmap for Devices and Systems has a chapter on new devices and systems that will permit continued performance improvements in future decades.

The latest release of the IRDS™ Roadmap includes a major chapter on novel devices and systems that go beyond what conventional CMOS circuits can achieve. These emerging research devices include a variety of memory devices, novel transistor devices from carbon nanotubes to spintronics, and cryogenic and superconducting electronics. Novel systems are also previewed, including analog, neuromorphic, reversible, probabilistic, coupled oscillators, and quantum circuits and architectures. While roadmap projections are difficult for emerging technologies, alternative technologies are compared on the basis of power and speed for key applications. The chapter ends with a complete listing of more than 1000 recent citations.

 

Big Changes for Mainstream Chip Architectures

Using AI and non-von-Neumann architectures to continue exponential improvement in system performance

This special report from Semiconductor Engineering presents an overview of innovations in processor and memory architectures from several companies, intended to sustain improvements in system performance over the next decade. Some of these innovations were presented at the Hot Chips Symposium held recently in Silicon Valley.

The new focus is not so much on process development as on redesigning chip architectures and improving chip packaging in order to optimize performance for particular applications. For example, edge devices will need to pre-process massive amounts of data rather than sending all of it to the cloud. Furthermore, artificial intelligence built on neural networks will be distributed across processors and memories, enabling the system to learn how to optimize scheduling and assignment among multiple devices on and off the chips. This further deviates from the classic separation of logic and memory in the von Neumann architecture.
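As a toy illustration of edge pre-processing (our own sketch, with hypothetical sensor values), the device keeps raw data local and forwards only the events of interest to the cloud:

    # Illustrative edge filter: upload only readings above a threshold,
    # rather than streaming every raw sample to the cloud.
    readings = [0.2, 0.1, 7.9, 0.3, 6.4, 0.2]   # hypothetical sensor samples
    THRESHOLD = 5.0

    to_cloud = [r for r in readings if r > THRESHOLD]
    print(f"uploading {len(to_cloud)} of {len(readings)} samples:", to_cloud)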

In this way, chip manufacturers anticipate that they will be able to double system performance every two years, even as the classic process-based Moore’s Law enhancements have largely saturated.

 

IRDS™ Executive Summary

The IEEE International Roadmap for Devices and Systems (IRDS™) recently released its latest roadmap, which was announced in a press release. This Roadmap consists of 15 extensive chapters on a wide variety of topics relevant to the semiconductor and computer industries, from devices to lithography to circuits to applications, but perhaps the place to start is the 30-page overview provided by the Executive Summary (PDF, 3 MB).

The Executive Summary reviews how the success of Moore’s Law in the semiconductor industry over several decades led to the current situation, where new types of scaling and new architectures are needed to maintain performance improvement over the next two decades. These include 3D power scaling and close integration of memory and logic, as well as a focus on the needs of particular application sectors, including mobile, the internet of things (IoT), the cloud, and cyber-physical systems. While conventional CMOS processors will continue to dominate the industry, new computing modalities will gain traction as accelerators in some applications, including neuromorphic, approximate, cryogenic, and quantum computing. Although all of these present major challenges, continued exponential improvement in system performance is projected to be achievable for at least the next 15 years.

 

Intel’s New Path to Quantum Computing

In this article in IEEE Spectrum, Jim Clarke, Intel’s Director of Quantum Hardware, speaks about Intel’s two different technological approaches to quantum computing hardware. Despite all of the hype and promises, quantum computing is still an immature technology, and the ultimate technological approach for practical systems is still to be determined.

One approach uses superconducting quantum bits, or qubits, designed to operate at temperatures as low as 0.01 K. This is similar to an approach being pursued by Google and D-Wave Systems, among others. A 49-qubit system (code-named Tangle Lake) has been packaged and tested, and is shown in the photograph as the small object with gold connectors.

The other approach is based on Si quantum dots, where the qubits are essentially single-electron transistors, and the information is encoded in the spin of the electron. These are compatible with CMOS processing, and full wafers of chips with up to 26 qubits (shown in the photograph) have been fabricated and tested. These chips still need cryogenic temperatures, but may operate at slightly warmer temperatures than the superconductor approach, up to about 1 K. They may also be more compatible with integrated semiconductor control circuitry.

Intel also has a free online simulator for small quantum systems.
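For readers curious about what simulating a small quantum system involves, here is a minimal state-vector sketch in Python with NumPy (our own illustration, not Intel's simulator), which entangles two qubits into a Bell state:

    # Minimal two-qubit state-vector simulation (illustration only).
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # Hadamard gate
    CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                     [0, 0, 0, 1], [0, 0, 1, 0]])       # controlled-NOT gate

    state = np.zeros(4)
    state[0] = 1.0                                       # start in |00>
    state = np.kron(H, np.eye(2)) @ state                # Hadamard on the first qubit
    state = CNOT @ state                                 # entangle the pair
    print(np.abs(state) ** 2)                            # [0.5, 0, 0, 0.5]: a Bell state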

Another recent article in Semiconductor Engineering provides an overview of quantum computing R&D, including contributions from IBM, Google, Microsoft, LETI, and D-Wave Systems, as well as Intel.

 

Roadmapping Cryogenic Electronics and Quantum Information Processing

The IEEE International Roadmap for Devices and Systems (IRDS™) has just released its 2017 roadmap, which focuses primarily on extensions of conventional electronic technologies but also covers newly developing technologies in its chapter on Beyond CMOS (PDF, 3 MB).

One group of new technologies is forming its own International Focus Group on Cryogenic Electronics and Quantum Information Processing. Committee members Scott Holmes and Erik DeBenedictis have summarized the case for roadmapping these technologies here (PDF, 378 KB).

These technologies include superconducting electronic circuits that require cryogenic temperatures below 100 K, and often below 10 K. In addition, cryogenic semiconducting circuits have been developed for certain specialized applications. Both are generally mature technologies with integration at the intermediate scale, but not yet at the very large scale that would be needed for direct competition with room-temperature CMOS.

A distinct set of technologies is associated with the new field of quantum information processing, which may require temperatures below 1 K for proper operation. Some of these are also based on superconducting devices, but operate in the regime of ultra-low power dissipation necessary for quantum operation.

For both superconducting and quantum circuits, the industry and the market are still small, but are likely to grow rapidly in the next 20 years. Roadmapping and standards are particularly important if these technologies are to achieve the ambitious goals that have been projected.

 

Artificial Intelligence and Machine Learning Applied to Cybersecurity: New IEEE Trend Paper Based on RC-Sponsored Confluence Summit

Cybersecurity is a critical issue in information technology throughout the world. IEEE has identified Artificial Intelligence and Machine Learning (AI/ML) as key technologies that will impact cybersecurity in both positive and negative ways. The IEEE Rebooting Computing Initiative, together with the IEEE Industry Engagement Committee, sponsored a Confluence Summit last October in Philadelphia, and 19 distinguished experts were charged with developing a Trend Paper on this topic. The Trend Paper has just been issued, and is available online here. Comments on the report are welcome.

The co-chairs of this Confluence Summit and Report were Dejan Milojicic and Barry Shoop. Dr. Milojicic is a Distinguished Technologist at Hewlett Packard Labs, past president of the IEEE Computer Society, and chair of the IEEE Industry Engagement Committee. Dr. Shoop is a professor and head of the Department of Electrical Engineering and Computer Science at the U.S. Military Academy, West Point, and served as 2016 IEEE president.

In addition to addressing issues in Hardware, Software, and Data, the report also discusses legal issues, human factors, and implementation, in the context of industry, academia, government, standards bodies, and the general public. Key recommendations include the following:

  • The future needs of cybersecurity will require advances in technology, legal and human factors, and mathematically verified trust.
  • Coordinated business efforts will be required to establish market-accepted products, certified by established regulatory authorities.
  • AI/ML-fueled cybersecurity must be based on standardized and audited operations.
  • Regulators will need to protect research and operations and establish internationally recognized cooperative organizations.
  • Data, models, and fault warehouses will be essential for tracking progress and documenting threats, defenses, and solutions.

A brief video of Dr. Milojicic discussing the Confluence Summit is also available here. IEEE Spectrum also has an article featuring this Trend Paper here.

 

Quantum Computers Strive to Break Out of the Lab

A new feature article in IEEE Spectrum reviews the past, present, and future of quantum computing, which has received much attention in the last year. The main conclusion is that while small quantum computing circuits made of fewer than 100 quantum bits (“qubits”) have been demonstrated, their practical near-term utility is severely limited, and this is likely to remain the case for at least the next few years. In the near future, these quantum systems may be used to model other small quantum systems, such as small clusters of atoms and molecules.

A key problem is that these quantum systems are extremely sensitive to thermal and electrical noise, and will require a very large overhead of quantum error correction circuits, which are themselves composed of noise-sensitive qubits. Furthermore, most experts in the field view the eventual larger quantum computing systems as special purpose accelerators to be used together with classical computers, rather than as general-purpose replacements for classical computers.

The article also presents some examples of current quantum computing circuits, using superconducting and trapped-ion technologies. These are being developed by such computing giants as IBM, Google, Microsoft, and Intel, as well as smaller companies such as IonQ, Rigetti, and D-Wave, and university and government laboratory teams.

For further details, see the article in IEEE Spectrum here.

 

Transistor Options Beyond 3 nm

Fabrication of next-generation transistor devices is becoming more challenging, and several technologies are being explored to maintain improved performance at device nodes beyond 3 nm into the next decade. Some of these are variants of today’s advanced CMOS devices, including gate-all-around (GAA) FETs, in which the gate wraps entirely around nanowire Si channels. Other variants include complementary FETs (CFETs) and negative-capacitance FETs (NC-FETs), some of which may incorporate novel materials such as ferroelectric gate layers based on hafnium oxide. These devices will also require innovations in lithography and interconnects. They are projected for about 2025, but are unlikely to displace the larger nodes for many applications.

For further details, please see the article in Semiconductor Engineering here.

 

Beyond CMOS Computing: The Interconnect Challenge Workshop held in Annapolis, Maryland, Nov. 29, 2017

This workshop addressed the fact that data transfer between logic and memory has increasingly become the major bottleneck in computer speed and energy. Ways to deal with this problem include alternative architectures and computing paradigms (neuromorphic, approximate, 3D, analog, quantum) and alternative interconnect technologies (optical, superconducting, graphene). The agenda is available here, and slide presentations for many of the talks are posted as well.

The keynote address was given by Irene Qualters, the Director of the Division of Advanced Cyber-infrastructure at the US National Science Foundation. Her presentation is available here (PDF, 3 MB). She emphasizes that the interconnect challenge is at the heart of modern computing, with no single solution likely. A variety of complementary approaches in research and development are needed throughout the computing stack, requiring contributions and coordination among government, industry, and academia.

 

EDA Challenges Machine Learning

An article in Semiconductor Engineering describes the growing importance of Machine Learning (ML) in Electronic Design Automation (EDA). Optimization of circuit layout has long been automated for many cases, but other tasks often require extensive efforts by teams of experienced design engineers. Can machine learning capture the expertise of these engineers? Part of the difficulty is that the large training databases available for other machine learning tasks typically do not exist for complex custom circuit design. An alternative may be iterative reinforcement learning, in which the ML system and engineers work together to train the EDA system toward accurate, efficient, and verifiable designs. Such improved automation tools will be necessary to accelerate development of the next generation of heterogeneous chips for computing systems and the internet of things.
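As a toy illustration of such an iterative, reward-driven loop (our own sketch; real ML-based EDA flows use learned policies and far richer cost models than total wirelength), the snippet below repeatedly proposes random placement moves and keeps those that reduce a simple wirelength cost:

    # Toy reward-guided placement improvement (illustration only).
    import random

    cells = {"A": (0, 0), "B": (3, 4), "C": (1, 5), "D": (4, 1)}   # hypothetical cells
    nets = [("A", "B"), ("B", "C"), ("C", "D")]                     # hypothetical nets

    def wirelength(placement):
        # Manhattan wirelength over all nets; lower is better.
        return sum(abs(placement[u][0] - placement[v][0]) +
                   abs(placement[u][1] - placement[v][1]) for u, v in nets)

    best = dict(cells)
    for _ in range(1000):
        trial = dict(best)
        cell = random.choice(list(trial))
        trial[cell] = (random.randint(0, 5), random.randint(0, 5))  # propose a move
        if wirelength(trial) < wirelength(best):                     # reward: shorter wires
            best = trial                                             # keep the improvement
    print(best, wirelength(best))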