Paper Title
OpenEDS2020: Open Eyes Dataset
Paper Authors
Paper Abstract
We present the second edition of the OpenEDS dataset, OpenEDS2020, a novel dataset of eye-image sequences captured at a frame rate of 100 Hz under controlled illumination, using a virtual-reality head-mounted display equipped with two synchronized eye-facing cameras. The dataset, which is anonymized to remove any personally identifiable information on participants, consists of 80 participants of varied appearance performing several gaze-elicited tasks, and is divided into two subsets: 1) the Gaze Prediction Dataset, with up to 66,560 sequences containing 550,400 eye images and their respective gaze vectors, created to foster research in spatio-temporal gaze estimation and prediction approaches; and 2) the Eye Segmentation Dataset, consisting of 200 sequences sampled at 5 Hz, with up to 29,500 images, of which 5% contain a semantic segmentation label, devised to encourage the use of temporal information to propagate labels to contiguous frames. Baseline experiments have been evaluated on OpenEDS2020, one for each task, yielding an average angular error of 5.37 degrees when predicting gaze 1 to 5 frames into the future, and a mean intersection-over-union score of 84.1% for semantic segmentation. As with its predecessor, the OpenEDS dataset, we anticipate that this new dataset will continue to create opportunities for researchers in the eye tracking, machine learning, and computer vision communities to advance the state of the art for virtual reality applications. The dataset is available for download upon request at http://research.fb.com/programs/openeds-2020-challenge/.
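The two baseline metrics quoted above, angular error for gaze prediction and mean intersection over union (mIoU) for segmentation, are standard and can be computed as follows. This is an illustrative sketch (the function names and the per-class averaging convention are assumptions, not the authors' evaluation code):

```python
import numpy as np

def angular_error_deg(pred, gt):
    """Angle in degrees between a predicted and a ground-truth 3D gaze vector."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    pred = pred / np.linalg.norm(pred)
    gt = gt / np.linalg.norm(gt)
    # Clip to guard against floating-point values slightly outside [-1, 1].
    cos_sim = np.clip(np.dot(pred, gt), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_sim)))

def mean_iou(pred_mask, gt_mask, num_classes):
    """Mean intersection over union across semantic classes.

    Classes absent from both masks are skipped rather than counted as IoU 1.
    """
    ious = []
    for c in range(num_classes):
        p, g = (pred_mask == c), (gt_mask == c)
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue  # class not present in either mask
        ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))
```

For example, orthogonal gaze vectors give an angular error of 90 degrees, and identical segmentation masks give an mIoU of 1.0.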