What's New

Feature Article

The Edge-to-Cloud Continuum

Virtual Roundtable with Experts from Industry and Academia

In the November issue of IEEE Computer, Dejan Milojicic of Hewlett Packard Labs interviewed several experts on computer system architectures about the future of edge computing, cloud computing, and how the two will work together (the full interview is available on IEEE Xplore).

The panelists included Tom Bradicich of HP, Adam Drobot of OpenTechWorks, and Ada Gavrilovska of Georgia Tech.

Cloud computing refers to computing in large-scale data centers, while edge computing takes place at least partly on cell phones, laptops, desktops, and Internet of Things devices.

While cloud computing is often more computationally efficient, latency and bandwidth constraints generally require some data processing at the edge. In many cases the edge devices are mobile, so wireless protocols such as 5G and beyond are essential. There are also important issues of security, privacy, and reliability at both levels, as well as in the communication between the two. In most cases there will be a variety of tradeoffs, depending on the type of application and on business considerations.

These issues are likely to continue to generate a dynamic computing environment for the foreseeable future.

Technology Spotlight

Types of Deep Learning Hardware
Interview with Bradley Geden of Synopsys

Artificial intelligence based on “deep learning” is rapidly being implemented in a wide variety of edge systems, mostly using commercial chips that fall into one of several types. Ed Sperling, Editor of Semiconductor Engineering, interviewed Bradley Geden of Synopsys about these different types and the applications suited to each.

These include systolic arrays, 2D coarse-grained reconfigurable arrays, and parallel pipelines. In each case, a 2D array of artificial neurons carries out matrix operations (multiply-accumulates, or MACs) on the neuron values. These are all digital computational arrays, in contrast to the analog memristor arrays under development elsewhere.

Systolic arrays are hard-wired for MAC operations and are used for basic image-recognition neural networks. The reconfigurable arrays are more like FPGAs, offering greater flexibility for a wider range of algorithms, but they are more complex to program. Parallel pipelines are optimized for high-speed throughput, performing complex calculations on real-time data.
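To make the MAC operation concrete, here is a minimal software sketch (not vendor code, and names are illustrative) of the multiply-accumulate pattern that a systolic array hard-wires: each output value is built up one multiply-accumulate step at a time, as a hardware processing element would do when applying a weight matrix to a layer of neuron values.

```python
def mac_matrix_multiply(inputs, weights):
    """Compute inputs x weights using explicit MAC operations.

    inputs:  m x k matrix (list of lists), e.g. neuron activations
    weights: k x n matrix (list of lists), e.g. layer weights
    returns: m x n result matrix
    """
    m, k, n = len(inputs), len(weights), len(weights[0])
    out = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            acc = 0.0
            for t in range(k):
                # One MAC per step: multiply, then accumulate.
                acc += inputs[i][t] * weights[t][j]
            out[i][j] = acc
    return out

# Example: a 2x2 activation matrix applied to a 2x2 weight matrix.
print(mac_matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19.0, 22.0], [43.0, 50.0]]
```

In a systolic array, the inner accumulation loop is unrolled across a grid of processing elements through which data flows rhythmically, so all of these MACs proceed in parallel rather than sequentially as in this sketch.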

Mr. Geden also emphasized design synthesis approaches for programming these chips. Given the repetitive nature of these structures, a hierarchical approach may often be more flexible and easier to alter than a “flat” approach. Further information on Synopsys design tools for AI systems is available at the Synopsys website.

For further details, watch the video.