Deep learning processor


A deep learning processor (DLP), or deep learning accelerator, is an electronic circuit designed specifically for deep learning algorithms, usually with separate data memory and a dedicated instruction set architecture. Deep learning processors form part of a wide range of today's commercial infrastructure, from mobile devices to cloud servers.
The goal of DLPs is to provide higher efficiency and performance than existing processing devices, i.e., general-purpose CPUs and GPUs, when processing deep learning algorithms. Just as GPUs do for graphics processing, DLPs leverage domain-specific knowledge in the design of architectures for deep learning processing. Commonly, DLPs employ a large number of computing components to exploit the high data-level parallelism, a relatively large on-chip buffer/memory to exploit data reuse patterns, and limited data-width operators to exploit the error resilience of deep learning.
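As an illustration of the limited data-width principle, the following sketch (a hypothetical Python example, not drawn from any particular DLP) compares a dot product computed in floating point with the same dot product computed on 8-bit quantized operands; the small quantization error is the kind that deep learning models typically tolerate.

```python
# Illustrative sketch: limited data-width (quantized) arithmetic vs. floating point.
# Deep learning is error-resilient, so DLPs can afford narrow operators.

def quantize(values, bits=8):
    """Map floats in [-1, 1) to signed fixed-point integers with `bits` bits."""
    scale = 2 ** (bits - 1)
    return [max(-scale, min(scale - 1, round(v * scale))) for v in values], scale

def dot_float(xs, ws):
    return sum(x * w for x, w in zip(xs, ws))

def dot_quantized(xs, ws, bits=8):
    qx, sx = quantize(xs, bits)
    qw, sw = quantize(ws, bits)
    acc = sum(a * b for a, b in zip(qx, qw))  # integer MACs feeding a wide accumulator
    return acc / (sx * sw)                    # rescale back to the real-valued range

xs = [0.11, -0.52, 0.73, 0.30]
ws = [0.25, 0.40, 0.10, 0.85]
print(dot_float(xs, ws))         # reference result
print(dot_quantized(xs, ws, 8))  # close to the reference despite 8-bit operands
```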

History

The use of CPUs/GPUs

At the very beginning, general-purpose CPUs were adopted to perform deep learning algorithms. Later, GPUs were introduced to the domain of deep learning. For example, in 2012, Alex Krizhevsky adopted two GPUs to train a deep learning network, AlexNet, which won the ILSVRC-2012 competition. As interest in deep learning algorithms and DLPs kept increasing, GPU manufacturers started to add deep learning related features in both hardware and software. For example, Nvidia released the Turing Tensor Core, a DLP, to accelerate deep learning processing.

The first DLP

To provide higher efficiency in performance and energy, domain-specific design started to draw great attention. In 2014, Chen et al. proposed DianNao, the first DLP, designed especially to accelerate deep neural networks. DianNao provides 452 Gop/s of peak performance within a small footprint of 3.02 mm² and 485 mW. Its successors were later proposed by the same group, forming the DianNao family.

The blooming DLPs

Inspired by the pioneering work of the DianNao family, many DLPs have been proposed in both academia and industry, with designs optimized to leverage the features of deep neural networks for high efficiency. At ISCA 2016 alone, three sessions (15% of the accepted papers) were devoted to architecture designs for deep learning. Such efforts include Eyeriss, EIE, Minerva, and Stripes in academia, and the TPU and MLU in industry. Several representative works are listed in Table 1.

DLP architecture

With the rapid evolution of deep learning algorithms and DLPs, many architectures have been explored. Roughly, DLPs can be classified into three categories based on their implementation: digital circuits, analog circuits, and hybrid circuits. As purely analog DLPs are rarely seen, only digital DLPs and hybrid DLPs are introduced here.

Digital DLPs

The major components of a DLP architecture usually include a computation component, an on-chip memory hierarchy, and control logic that manages data communication and computing flows.
Regarding the computation component, as most operations in deep learning can be aggregated into vector operations, the most common way to build computation components in digital DLPs is a MAC-based (multiply-accumulate) organization, either with vector MACs or scalar MACs. Rather than SIMD or SIMT as in general processing devices, deep learning domain-specific parallelism is better exploited on these MAC-based organizations.

Regarding the memory hierarchy, as deep learning algorithms require high bandwidth to provide the computation component with sufficient data, DLPs usually employ a relatively large on-chip buffer together with dedicated on-chip data reuse and data exchange strategies to alleviate the burden on memory bandwidth. For example, DianNao, with 16 16-input vector MAC units, requires 16 × 16 × 2 = 512 16-bit operands per cycle, i.e., almost 1024 GB/s of bandwidth between the computation components and the buffers. With on-chip reuse, such bandwidth requirements are reduced drastically. Instead of the caches widely used in general processing devices, DLPs usually use scratchpad memory, as it provides higher data reuse opportunities by leveraging the relatively regular data access patterns of deep learning algorithms.

Regarding the control logic, as deep learning algorithms keep evolving at a dramatic speed, DLPs have started to leverage dedicated ISAs to support the deep learning domain flexibly. At first, DianNao used a VLIW-style instruction set where each instruction could finish a layer in a DNN. Cambricon introduced the first deep learning domain-specific ISA, which could support more than ten different deep learning algorithms. The TPU also exposes five key instructions in its CISC-style ISA.
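As a back-of-envelope illustration of the bandwidth figures above, the following sketch recomputes the DianNao example; the 1 GHz clock and the input-broadcast reuse model are assumptions made for illustration, not a description of the actual chip.

```python
# Back-of-envelope sketch of the on-chip bandwidth need of a DianNao-style
# unit: 16 vector MACs, each consuming 16 inputs and 16 weights per cycle.
# The 1 GHz clock and the input-broadcast reuse model are illustrative
# assumptions.

MAC_UNITS = 16        # number of vector MAC units
INPUTS_PER_UNIT = 16  # operands per vector MAC
WORD_BYTES = 2        # 16-bit data
CLOCK_HZ = 1e9        # assumed 1 GHz clock

# Naive case: every MAC fetches its own inputs and weights each cycle.
operands_per_cycle = MAC_UNITS * INPUTS_PER_UNIT * 2           # 512 operands
bytes_per_cycle = operands_per_cycle * WORD_BYTES              # 1024 bytes
print(bytes_per_cycle * CLOCK_HZ / 1e9, "GB/s without reuse")  # ~1024 GB/s

# With on-chip reuse, e.g., broadcasting the same 16 inputs to all 16
# units, only the weights remain unique per unit each cycle.
reused_operands = MAC_UNITS * INPUTS_PER_UNIT + INPUTS_PER_UNIT
reused_bytes = reused_operands * WORD_BYTES
print(reused_bytes * CLOCK_HZ / 1e9, "GB/s with input broadcast")
```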

Hybrid DLPs

Hybrid DLPs have emerged for DNN inference and training acceleration because of their high efficiency. Two approaches are common: 1) processing-in-memory, which moves computation components into memory cells, controllers, or memory chips to alleviate the memory wall issue; such architectures significantly shorten data paths and leverage much higher internal bandwidth, resulting in attractive performance improvements; 2) building highly efficient DNN engines out of computational devices. In 2013, HP Lab demonstrated the astonishing capability of the ReRAM crossbar structure for computing. Inspired by this work, a tremendous amount of work has been proposed to explore new architectures and system designs based on ReRAM, phase-change memory, and so on.
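The attraction of the ReRAM crossbar is that it performs a matrix-vector multiplication in place: weights are stored as cell conductances, inputs are applied as row voltages, and by Ohm's law and Kirchhoff's current law each column current equals the weighted sum of the inputs. The sketch below is a purely numerical model of that behavior (an idealized illustration, not a simulation of any particular device).

```python
# Idealized numerical model of a ReRAM crossbar: weights are stored as
# cell conductances G[i][j]; applying voltages V[i] to the rows yields
# column currents I[j] = sum_i G[i][j] * V[i] (Ohm's law plus Kirchhoff's
# current law), i.e., an analog matrix-vector multiplication.

def crossbar_mvm(conductances, voltages):
    rows = len(conductances)
    cols = len(conductances[0])
    currents = [0.0] * cols
    for i in range(rows):
        for j in range(cols):
            currents[j] += conductances[i][j] * voltages[i]
    return currents

# A 3x2 crossbar storing a small weight matrix (conductances in siemens)
# driven by an input vector encoded as row voltages (in volts).
G = [[1e-6, 2e-6],
     [3e-6, 0.5e-6],
     [2e-6, 1e-6]]
V = [0.2, 0.1, 0.3]

print(crossbar_mvm(G, V))  # column currents, proportional to the MVM result
```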

GPUs and FPGAs

Besides DLPs, GPUs and FPGAs are also used as accelerators to speed up the execution of deep learning algorithms. For example, Summit, a supercomputer from IBM for Oak Ridge National Laboratory, contains 27,648 Nvidia Tesla V100 cards, which can be used to accelerate deep learning algorithms. Microsoft builds its deep learning platform with large numbers of FPGAs in its Azure cloud to support real-time deep learning services. Table 2 compares DLPs against GPUs and FPGAs in terms of target, performance, energy efficiency, and flexibility.
      | Target             | Performance | Energy Efficiency | Flexibility
DLPs  | deep learning      | high        | high              | domain-specific
FPGAs | all                | low         | moderate          | general
GPUs  | matrix computation | moderate    | low               | matrix applications

Benchmarks

Benchmarking has long served as the foundation for designing new hardware architectures, allowing both architects and practitioners to compare various architectures, identify their bottlenecks, and conduct corresponding system/architectural optimizations. Table 3 lists several typical benchmarks for DLPs, dating from 2012, in chronological order.
Year | NN Benchmark | Affiliations                     | # of micro benchmarks | # of component benchmarks | # of application benchmarks
2012 | BenchNN      | ICT, CAS                         | N/A | 12  | N/A
2016 | Fathom       | Harvard                          | N/A | 8   | N/A
2017 | BenchIP      | ICT, CAS                         | 12  | 11  | N/A
2017 | DAWNBench    | Stanford                         | 8   | N/A | N/A
2017 | DeepBench    | Baidu                            | 4   | N/A | N/A
2018 | MLPerf       | Harvard, Intel, and Google, etc. | N/A | 7   | N/A
2019 | AIBench      | ICT, CAS and Alibaba, etc.       | 12  | 16  | 2
2019 | NNBench-X    | UCSB                             | N/A | 10  | N/A