Paper Title
Beyond the Camera: Neural Networks in World Coordinates
Paper Authors
Paper Abstract
Eye movement and strategic placement of the visual field onto the retina give animals increased resolution of the scene and suppress distracting information. This fundamental system has been missing from video understanding with deep networks, which are typically limited to 224 by 224 pixels of content locked to the camera frame. We propose a simple idea, WorldFeatures, where each feature at every layer has a spatial transformation, and the feature map is only transformed as needed. We show that a network built with these WorldFeatures can be used to model eye movements, such as saccades, fixation, and smooth pursuit, even in a batch setting on pre-recorded video. That is, the network can, for example, use all 224 by 224 pixels to look at a small detail one moment, and the whole scene the next. We show that typical building blocks, such as convolutions and pooling, can be adapted to support WorldFeatures using available tools. Experiments are presented on the Charades, Olympic Sports, and Caltech-UCSD Birds-200-2011 datasets, exploring action recognition, fine-grained recognition, and video stabilization.
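The abstract does not give implementation details, but the core idea of sampling a fixed-resolution view under a per-feature spatial transform can be sketched as follows. This is a minimal, hypothetical illustration (the function name `world_feature_view`, the nearest-neighbor sampling, and the affine-matrix convention mapping output pixels to world coordinates are all assumptions, not the paper's actual method):

```python
import numpy as np

def world_feature_view(frame, transform, out_size=224):
    """Sample an out_size x out_size view of `frame` under an affine
    `transform` (2x3) that maps output pixel coords to world (frame)
    coords. Nearest-neighbor sampling keeps the sketch simple; the
    same budget of 224x224 pixels can cover a small detail or the
    whole scene depending on the transform."""
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    coords = np.stack([xs, ys, np.ones_like(xs)])        # homogeneous (3, H, W)
    world = np.einsum('ij,jhw->ihw', transform, coords)  # (2, H, W)
    wx = np.clip(np.round(world[0]).astype(int), 0, frame.shape[1] - 1)
    wy = np.clip(np.round(world[1]).astype(int), 0, frame.shape[0] - 1)
    return frame[wy, wx]

frame = np.random.rand(448, 448)

# "Fixation" on a detail: zoom into a 112x112 region at full 224x224 resolution
zoom = np.array([[0.5, 0.0, 100.0],
                 [0.0, 0.5, 100.0]])
detail = world_feature_view(frame, zoom)

# "Whole scene": the entire 448x448 frame downsampled into the same 224x224 budget
wide = np.array([[2.0, 0.0, 0.0],
                 [0.0, 2.0, 0.0]])
scene = world_feature_view(frame, wide)
```

Under this convention, the same network input size serves both views; only the transform attached to the feature changes between one moment and the next.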