Paper Title
Distortion-Aware Network Pruning and Feature Reuse for Real-time Video Segmentation
Paper Authors
Paper Abstract
Real-time video segmentation is a crucial task for many real-world applications such as autonomous driving and robot control. Since state-of-the-art semantic segmentation models, despite their impressive performance, are often too heavy for real-time applications, researchers have proposed lightweight architectures with speed-accuracy trade-offs, achieving real-time speed at the expense of reduced accuracy. In this paper, we propose a novel framework to speed up any architecture with skip connections for real-time vision tasks by exploiting the temporal locality in videos. Specifically, at the arrival of each frame, we transform the features from the previous frame to reuse them at specific spatial bins. We then perform partial computation of the backbone network on the regions of the current frame that capture temporal differences between the current and previous frames. This is done by dynamically dropping out residual blocks using a gating mechanism that decides which blocks to drop based on inter-frame distortion. We validate our Spatial-Temporal Mask Generator (STMG) on video semantic segmentation benchmarks with multiple backbone networks, and show that our method substantially speeds up inference with minimal loss of accuracy.
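The abstract only sketches the gating idea at a high level. Below is a minimal, illustrative PyTorch sketch of distortion-based gating with feature reuse, not the authors' implementation: the class name `GatedResidualBlock`, the bin size, and the threshold are assumptions, and for readability the block is computed densely and then masked, whereas the actual method would skip computation entirely in reused regions to realize the speed-up.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedResidualBlock(nn.Module):
    """Residual block whose output can be reused per spatial bin.

    For bins with low inter-frame distortion, the cached output from the
    previous frame is reused; for the remaining bins, the block is applied
    to the current frame's features.
    """

    def __init__(self, channels: int, bins: int = 4):
        super().__init__()
        self.bins = bins  # feature map is gated over a bins x bins grid
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.prev_out = None  # cached output from the previous frame

    def forward(self, x: torch.Tensor, distortion: torch.Tensor, threshold: float = 0.1):
        # distortion: (B, 1, H, W) map of inter-frame change (e.g., residual magnitude).
        # Pool it to a bins x bins grid and open the gate only where change is large.
        gate = (F.adaptive_avg_pool2d(distortion, self.bins) > threshold).float()
        gate = F.interpolate(gate, size=x.shape[-2:], mode="nearest")  # 1 = recompute

        out = F.relu(x + self.body(x))  # dense compute here, for illustration only
        if self.prev_out is not None and self.prev_out.shape == out.shape:
            # Reuse cached features where the gate is closed (low distortion).
            out = gate * out + (1.0 - gate) * self.prev_out
        self.prev_out = out.detach()
        return out


if __name__ == "__main__":
    block = GatedResidualBlock(channels=64)
    feats_t = torch.randn(1, 64, 32, 64)
    distortion = torch.rand(1, 1, 32, 64)        # hypothetical inter-frame distortion map
    out_t = block(feats_t, distortion)           # frame t: everything computed and cached
    out_t1 = block(feats_t + 0.01, distortion)   # frame t+1: low-distortion bins reuse out_t
    print(out_t1.shape)
```

In this sketch the gating decision is purely threshold-based per bin; the paper's gating mechanism is learned and decides which residual blocks to drop, so the example should be read only as a schematic of spatially binned feature reuse.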