Paper Title
DYNAMAP: Dynamic Algorithm Mapping Framework for Low Latency CNN Inference
Paper Authors
Paper Abstract
Most existing work on FPGA acceleration of Convolutional Neural Networks (CNNs) focuses on employing a single strategy (algorithm, dataflow, etc.) across all the layers. Such an approach does not achieve optimal latency on complex and deep CNNs. Emerging CNNs have diverse per-layer computation characteristics, including parallelism, arithmetic intensity, locality, and memory footprint. Per-layer strategy selection and fine-grained tuning are required to achieve low end-to-end latency. However, specialized hardware modules dedicated to each layer limit per-layer utilization and adversely affect end-to-end latency. In this paper, we address these problems with an algorithm-architecture co-optimization framework, DYNAMAP, consisting of (1) a unified hardware overlay that can be reused across layers, supporting dynamic mapping of all three families of popular convolution algorithms, and further allowing flexible dataflow switching to maximize hardware utilization for each layer; and (2) a novel software Design Space Exploration (DSE) flow that customizes the hardware overlay and chooses the optimal strategy mapping. We show that the algorithm mapping space grows exponentially with network depth, and while the optimal algorithm selection problem is NP-hard in general, by exploiting the series-parallel structure of CNN models we demonstrate a polynomial-time solution for optimal algorithm mapping. DYNAMAP is optimized for any CNN, including those with diverse computation and memory requirements across the layers. We demonstrate DYNAMAP using two state-of-the-art CNNs: GoogleNet and Inception-V4. The generated accelerators achieve up to $2.8\times$ and $1.4\times$ speedups, respectively, in inference latency compared with state-of-the-art FPGA implementations.
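To illustrate why per-layer algorithm selection need not require exhaustive enumeration, the sketch below models a chain (purely series) of convolution layers as a shortest-path/dynamic-programming problem: each layer picks one of the three convolution-algorithm families, and switching algorithms between consecutive layers incurs a reconfiguration cost. This is a minimal illustration only; the algorithm names, latency numbers, and uniform switching cost are assumptions for demonstration, not DYNAMAP's actual cost model (the paper's solution also handles the parallel branches of series-parallel CNN graphs, which this chain-only sketch omits).

```python
# Per-layer convolution-algorithm selection as dynamic programming over a
# chain of CNN layers. Evaluates O(L * |ALGOS|^2) transitions instead of
# enumerating all |ALGOS|^L mappings.

ALGOS = ["im2col_gemm", "winograd", "fft"]  # three popular convolution families

# Assumed per-layer compute latency for each algorithm (illustrative numbers).
comp_cost = [
    {"im2col_gemm": 5.0, "winograd": 3.0, "fft": 6.0},
    {"im2col_gemm": 4.0, "winograd": 4.5, "fft": 2.5},
    {"im2col_gemm": 6.0, "winograd": 2.0, "fft": 5.0},
]

def switch_cost(a, b):
    """Assumed overhead of switching the overlay's dataflow between layers."""
    return 0.0 if a == b else 1.0

def best_mapping(comp_cost):
    """Return (minimal end-to-end latency, per-layer algorithm choice)."""
    # dp[algo] = (best total latency for layers so far, ending with `algo`,
    #             and the mapping that achieves it)
    dp = {a: (comp_cost[0][a], [a]) for a in ALGOS}
    for layer in comp_cost[1:]:
        dp = {
            b: min(
                (cost + switch_cost(a, b) + layer[b], path + [b])
                for a, (cost, path) in dp.items()
            )
            for b in ALGOS
        }
    return min(dp.values())

total, mapping = best_mapping(comp_cost)
print(total, mapping)
```

With these illustrative costs, the optimal mapping mixes algorithms across layers rather than committing to a single one, which is exactly the behavior the single-strategy accelerators criticized above cannot exploit.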