Paper Title
TVNet: Temporal Voting Network for Action Localization
Paper Authors
Paper Abstract
We propose a Temporal Voting Network (TVNet) for action localization in untrimmed videos. It incorporates a novel Voting Evidence Module to locate temporal boundaries more accurately, accumulating temporal contextual evidence to predict frame-level probabilities of start and end action boundaries. Our action-independent evidence module is incorporated within a pipeline to calculate confidence scores and action classes. We achieve an average mAP of 34.6% on ActivityNet-1.3, particularly outperforming previous methods at the highest IoU of 0.95. On THUMOS14 at 0.5 IoU, TVNet also achieves an mAP of 56.0% when combined with PGCN and 59.1% with MUSES, outperforming prior work at all thresholds. Our code is available at https://github.com/hanielwang/TVNet.
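The abstract only sketches the voting idea at a high level. As a rough illustration of accumulating temporal contextual evidence into frame-level boundary probabilities, here is a minimal Python/NumPy sketch; it is not the authors' implementation, and the function name, voting offsets, and vote weights are hypothetical placeholders.

```python
# Minimal illustrative sketch (not the TVNet code): each frame gathers weighted
# "votes" from nearby frames and the accumulated evidence is squashed into a
# frame-level boundary probability. Offsets and weights below are made up.
import numpy as np

def accumulate_boundary_votes(frame_scores: np.ndarray,
                              offsets: np.ndarray,
                              vote_weights: np.ndarray) -> np.ndarray:
    """For each frame t, the contextual frame t + o casts a vote weighted by w;
    votes are summed over all offsets and mapped to [0, 1] with a sigmoid."""
    T = frame_scores.shape[0]
    votes = np.zeros(T)
    for o, w in zip(offsets, vote_weights):
        shifted = np.roll(frame_scores, -o)   # shifted[t] = frame_scores[t + o]
        if o > 0:
            shifted[-o:] = 0.0                # drop votes that wrapped past the end
        elif o < 0:
            shifted[:-o] = 0.0                # drop votes that wrapped before the start
        votes += w * shifted
    return 1.0 / (1.0 + np.exp(-votes))       # frame-level boundary probability

# Toy usage: 10 frames, each collecting evidence from offsets -2..2.
scores = np.random.rand(10)                   # placeholder per-frame scores
offsets = np.array([-2, -1, 0, 1, 2])
weights = np.array([0.1, 0.3, 1.0, 0.3, 0.1])
start_prob = accumulate_boundary_votes(scores, offsets, weights)
print(start_prob.shape)                       # (10,)
```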