Title
Performance of object recognition in wearable videos
Authors
Abstract
Wearable technologies are enabling many new applications of computer vision, from life logging to health assistance. Many of them are required to recognize the elements of interest in the scene captured by the camera. This work studies the problem of object detection and localization in videos captured by this type of camera. Wearable videos are a much more challenging scenario for object detection than standard images or even other types of video, due to lower-quality images (e.g. poor focus) and the high clutter and occlusion common in wearable recordings. Existing work typically focuses on detecting the objects of focus or those being manipulated by the user wearing the camera. We perform a more general evaluation of the task of object detection in this type of video, because numerous applications, such as marketing studies, also need to detect objects which are not in the user's focus. This work presents a thorough study of the well-known YOLO architecture, which offers an excellent trade-off between accuracy and speed, for the particular case of object detection in wearable video. We focus our study on the public ADL Dataset, but we also use additional public data for complementary evaluations. We run an exhaustive set of experiments with different variations of the original architecture and its training strategy. Our experiments lead to several conclusions about the most promising directions for our goal and point us to further research steps to improve detection in wearable videos.
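The evaluations described above score a detector by matching predicted boxes against ground-truth boxes. A minimal sketch of the intersection-over-union (IoU) criterion commonly used for this matching is shown below; the function name and box format `(x1, y1, x2, y2)` are illustrative choices, not part of the paper's code.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection is usually counted as correct when its IoU with a
# ground-truth box exceeds a threshold (0.5 in the PASCAL VOC protocol).
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```

Per-class precision-recall curves built from these matches are then summarized as average precision, the standard detection metric on datasets such as ADL.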