Computing in the Cambrian Age
At the recent IEEE International Conference on Rebooting Computing (ICRC 2018), one of the keynote talks was by Dr. Paolo Faraboschi of Hewlett Packard Enterprise.
Dr. Faraboschi spoke about “Computing in the Cambrian Era.” This is a reference to a period in Earth’s history some 500 million years ago, when the diversity of life forms in the oceans suddenly exploded. By analogy, the end of Moore’s Law and the explosion of Big Data are leading to a proliferation of diverse computing technologies, each optimized for a different technological niche. These include GPUs, TPUs, analog, neuromorphic, optical, superconducting, and quantum computing. This proliferation is also creating a crisis in interconnections and communications at all levels, and in the software needed to manage it all. While not all of the proposed technologies will ultimately succeed, this is an exciting time for the computer industry. The video of Dr. Faraboschi’s talk is available here.
The Accelerator Age
At the recent DARPA ERI Summit, Bill Dally of NVIDIA spoke about machine learning applications of new GPU chips, which accelerate performance through computer architecture rather than faster devices. The video of his presentation is available here.
Slides for the talk are available here (PDF, 2 MB).
He provided an example in which co-design of algorithms and hardware (memory distribution) on a Volta GPU chip enabled a speedup of 15,000×. He also described how DARPA sponsorship and collaboration with university researchers are promoting the further development of this technology.
Microsoft Quantum Development Kit
Watch the demonstration video from Microsoft.
Microsoft recently introduced a software platform for simulating quantum computers, the Microsoft Quantum Development Kit, based on their new programming language Q# (Q-Sharp). This was initially implemented on Windows, and now versions are available for Mac and Linux operating systems. It is now also compatible with the Python language.
This platform can simulate up to about 30 logical qubits on a typical laptop computer, and will help users develop and debug applications that can later be tested on quantum computing hardware.
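The roughly-30-qubit limit follows from the memory cost of full state-vector simulation: a register of n qubits requires 2^n complex amplitudes. A back-of-envelope sketch (the 16-bytes-per-amplitude figure assumes double-precision complex numbers, a detail not stated in the article):

```python
# Estimate the memory needed to hold the full state vector of an
# n-qubit register: 2**n complex amplitudes, 16 bytes each
# (two 64-bit floats per amplitude).

def statevector_bytes(n_qubits: int) -> int:
    """Memory in bytes for a full n-qubit state vector."""
    return (2 ** n_qubits) * 16

for n in (20, 30, 40):
    print(f"{n} qubits -> {statevector_bytes(n) / 2**30:g} GiB")
```

At 30 qubits the state vector already occupies 16 GiB, near the limit of a typical laptop; each additional qubit doubles the requirement, which is why hardware is needed beyond this scale.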
This development kit is available as a free download from Microsoft here.
For an overview of developing applications, see here.
The Future of Computing: A Conversation with John Hennessy
Dr. John Hennessy is a leading computer scientist who was the President of Stanford University and is now the Chairman of Alphabet. In May 2018, Dr. Hennessy presented a keynote talk at the Google I/O developer conference. He spoke about the future of computing in the context of the ending of Moore’s Law and the onset of artificial intelligence. Energy efficiency has become critical, and is limiting the performance of advanced processors. He focused on the need for “domain-specific architectures” and “domain-specific languages” that will enable the design and programming of special-purpose processors optimized for different applications. This differs from the general-purpose processors of earlier generations, and will enable accelerated performance without faster transistors. This will require closer integration of hardware and software with applications throughout the design phase, preferably including consideration of cybersecurity. Artificial intelligence and machine learning (AI/ML) based on neural networks are finally making major impacts on system performance.
Current CMOS technology should be sufficient for the next decade, but other technologies (neuromorphic, quantum) may be needed for further progress.
For other videos from Google I/O 2018, see here.
Low Power Image Recognition Challenge (LPIRC 2018)
LPIRC 2018 was held in Salt Lake City, Utah, on June 18, co-located with the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Prof. Yung-Hsiang Lu of Purdue University, co-chair of LPIRC, presented an overview of the 2018 competition and previous years; the talk was recorded by IEEE.tv and is available here.
A press release from LPIRC 2018 is also available (PDF, 551 KB).
This is the 4th year of competition, and the performance of the winners continues to improve substantially, with greater speed, accuracy, and energy efficiency.
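The precise scoring rules are defined per track by the organizers; as a purely hypothetical illustration of how accuracy and energy efficiency can be combined into a single ranking, consider an accuracy-per-energy figure of merit (all entry names and numbers below are invented):

```python
# Hypothetical accuracy-per-energy figure of merit, in the spirit of
# LPIRC's combined accuracy/speed/energy ranking. The entries and
# numbers are made up; the real competition defines its own
# per-track scoring rules.

def score(accuracy: float, energy_wh: float) -> float:
    """Higher accuracy and lower energy consumption both raise the score."""
    return accuracy / energy_wh

entries = {
    "entry_a": (0.62, 1.5),  # (accuracy, watt-hours consumed)
    "entry_b": (0.55, 0.8),
    "entry_c": (0.70, 2.5),
}
ranked = sorted(entries, key=lambda k: score(*entries[k]), reverse=True)
print(ranked)  # entry_b wins despite lower raw accuracy
```

Under such a metric, a modestly accurate but frugal entry can beat a more accurate but power-hungry one, which is exactly the trade-off the competition is designed to explore.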
LPIRC 2018 was co-sponsored by Google and Facebook, and representatives from both companies presented talks on AI and machine learning, which are also available via IEEE.tv.
Check out the LPIRC Web portal for the upcoming announcement of LPIRC 2019, which is now being planned.
HPE Progress in Memory Driven Computing and AI
The HPE Discover Conference was held in Las Vegas, Nevada in June 2018. This video includes interviews with speakers Kirk Bresniker, HPE Chief Architect, and Beena Ammanath, HPE Global VP for AI.
Bresniker spoke about Memory-Driven Computing, a new generation of systems designed to deal with Big Data sets, and how this technology is being co-developed by HPE Labs and several potential customers. A “development sandbox” has been created in the cloud, so that customers can determine how to use this technology most effectively with their own data sets.
Ammanath spoke about the present and future development of AI, and that it is important to distinguish between the reality and the hype. The present reality deals with what might be called narrow AI, where computer systems can learn a narrow subject area and can sometimes beat out human experts. Efforts to develop general AI, where computer systems try to duplicate general human intelligence in much broader areas, have been much less successful. Super AI, which could in principle out-compete any and all humans, should not really be a concern for the foreseeable future.
For the video interviews, see here.
IEEE Low-Power Image Recognition Challenge
LPIRC 2018, sponsored by IEEE Rebooting Computing, is being held June 18, 2018, in Salt Lake City, Utah, USA. For a video overview of LPIRC, see here.
This is the 4th in a series of annual open competitions, with prizes for the best combination of accuracy, speed, and low power in image recognition. It is co-located with the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018). For further details on LPIRC, see here.
LPIRC 2018 is being co-sponsored by Google and Facebook, which have helped to define dedicated tracks. LPIRC 2019 is in the planning stages. If you or your company would like to participate in the planning or sponsor competitive tracks next year, please contact Terence Martinez (firstname.lastname@example.org).
An Interview with Prof. Thomas Sterling
Prof. Thomas Sterling of Indiana University is the Chief Scientist and Associate Director of the Center for Research in Extreme Scale Technologies (CREST), and a longtime authority in high-performance computing. Dr. Sterling was interviewed in March 2018 at the Supercomputing Frontiers Europe Conference in Poland, where he was a keynote speaker.
In the interview, Dr. Sterling discusses the US National Strategic Computing Initiative, development of Exascale Computing in various countries, a new class of intelligent computing, and limitations of classic von Neumann architectures. The interview with Dr. Sterling is available here.
Dr. Yeric of ARM on the Future of Semiconductor Technology
Dr. Yeric, ARM Fellow, was one of the invited speakers at the 2017 IEEE Industry Summit on the Future of Computing, held in Tysons Corner, VA, on Nov. 10, 2017, as part of Rebooting Computing Week. Dr. Yeric focused on projections for the next 20-30 years of semiconductor device technology. For the next dozen years, traditional Moore’s Law scaling can continue with 3D stacking and the integration of new memories with logic. Looking further ahead, however, new materials and device technologies will be required that can operate at much lower voltages and power levels. These may include 2D materials, such as graphene and molybdenum disulfide, and new low-power switches and wiring technologies. The key challenge is how to integrate these radically new technologies with silicon, so that they are available when needed, before 2030. The semiconductor industry needs to identify these future technologies now, in order to develop the manufacturing techniques and the circuit and system design tools of the future.
The talk by Dr. Yeric is available here.
Other talks from the 2017 Industry Summit and the co-located ICRC are available here.
Dr. Gargini on the Evolution of Moore’s Law
Dr. Gargini is the Chairman of IRDS™ and formerly chaired its predecessor organization, the ITRS. Prior to this, he was a senior executive at Intel. In his talk, he reviewed the history of semiconductor roadmaps, and how this led to IRDS™ becoming affiliated with IEEE Rebooting Computing as part of the Industry Connections Program of the IEEE Standards Association. According to Dr. Gargini, Moore’s Law is not really ending; rather, it is evolving toward 3D Power Scaling, which will enable continued exponential improvement for the foreseeable future.
The talk by Dr. Gargini is available here.
Dr. Gargini spoke at the IEEE Industry Summit on the Future of Computing, held on Nov. 10, 2017 as part of Rebooting Computing Week, which included invited talks from a wide range of leaders in industry, academia, and government. See here for the agenda.
Other talks from the Industry Summit and the co-located ICRC 2017 are available here.
“Removing the Handcuffs: Computing at the End of Moore’s Scaling,” by Dr. R. Stanley Williams of HPE.
The IEEE Industry Summit on the Future of Computing was held on Nov. 10, 2017 as part of Rebooting Computing Week, and included invited talks from a wide range of leaders in industry, academia, and government. See here for the agenda.
Dr. Williams is a Senior Fellow at HPE and Director of the Information and Quantum Systems Lab. He argues that the end of Moore’s Law scaling makes room for creative solutions that go beyond the traditional von Neumann processor. These include brain-inspired architectures, memristors, analog accelerators, and nonlinear dynamics. For example, a memristor array known as the “Dot-Product Engine” (DPE) performs analog vector-matrix multiplication much faster and at much lower power than conventional digital systems. The talk by Dr. Williams is available here.
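The idea behind such a crossbar is that physics does the arithmetic: input voltages are applied to the rows, each crosspoint holds a programmable conductance, and by Ohm’s and Kirchhoff’s laws the current collected on each column is a dot product of the voltage vector with that column of conductances. A minimal numerical sketch of this idealized model (the conductance and voltage values are illustrative, not taken from the talk):

```python
import numpy as np

# Idealized memristor crossbar ("Dot-Product Engine") model:
# voltages V[i] drive the rows, G[i][j] is the programmed
# conductance at crosspoint (i, j), and the current on column j
# is sum_i V[i] * G[i][j] -- a vector-matrix product computed
# in a single analog step by Ohm's and Kirchhoff's laws.

G = np.array([[1.0, 0.5],
              [0.2, 0.8],
              [0.0, 1.0]])      # conductances in siemens, one column per output

V = np.array([0.3, 0.1, 0.2])   # volts applied to the three row lines

I = V @ G                       # column currents, in amps
print(I)
```

In hardware the entire product arrives in one step regardless of matrix size, which is the source of the speed and energy advantage over digital multiply-accumulate loops; real devices add noise and limited precision that this sketch ignores.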
Other talks from the Industry Summit and the co-located ICRC 2017 are available here.
Best Student Paper at ICRC 2017: Thermodynamic Computing
Mr. Ganesh is a Ph.D. student at the University of Massachusetts Amherst. His paper addresses a new type of thermodynamic computer, which may be biologically inspired. His presentation received a Best Student Paper award at ICRC 2017. For an overview of his talk, see here.
For the video of the talk by Mr. Ganesh, see here.
For the conference publication of the paper in IEEE Xplore, see here.
For an overview of some other talks at ICRC 2017, see this article in IEEE Spectrum.
Other talks from ICRC 2017 are available here.
Other papers from ICRC 2017 are available in IEEE Xplore.