Paper Title
VISEM-Tracking, a human spermatozoa tracking dataset
Paper Authors
Paper Abstract
A manual assessment of sperm motility requires microscopy observation, which is challenging due to the fast-moving spermatozoa in the field of view. To obtain correct results, manual evaluation requires extensive training. Therefore, computer-assisted sperm analysis (CASA) has become increasingly used in clinics. Despite this, more data are needed to train supervised machine learning approaches in order to improve the accuracy and reliability of sperm motility and kinematics assessment. In this regard, we provide a dataset called VISEM-Tracking with 20 video recordings of 30 seconds each (comprising 29,196 frames) of wet sperm preparations, with manually annotated bounding-box coordinates and a set of sperm characteristics analyzed by domain experts. In addition to the annotated data, we provide unlabeled video clips to facilitate easy access to and analysis of the data via methods such as self- or unsupervised learning. As part of this paper, we present baseline sperm detection performance using the YOLOv5 deep learning (DL) model trained on the VISEM-Tracking dataset. As a result, we show that the dataset can be used to train complex DL models to analyze spermatozoa.
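As a rough illustration of the kind of baseline described in the abstract, the sketch below loads a YOLOv5 model via torch.hub and runs detection on a single extracted video frame. It is a minimal sketch, not the paper's training pipeline: the checkpoint name "visem_yolov5s.pt" and the frame filename are hypothetical placeholders, and any fine-tuning on the VISEM-Tracking bounding-box annotations would be done beforehand (for example with the YOLOv5 repository's standard training script and a dataset configuration pointing at the extracted frames and labels).

```python
# Minimal sketch: applying a YOLOv5 detector to one extracted video frame.
# Paths below are hypothetical placeholders, not files shipped with the dataset.
import torch

# Load a YOLOv5 model through torch.hub (fetches the ultralytics/yolov5 repo).
# If weights fine-tuned on VISEM-Tracking were available, they could be loaded with:
# model = torch.hub.load("ultralytics/yolov5", "custom", path="visem_yolov5s.pt")
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Run detection on a single frame image (an image path or a NumPy array works).
results = model("frame_000001.png")

# Predicted bounding boxes for the first image, as (x1, y1, x2, y2, confidence, class).
print(results.xyxy[0])
```

The snippet only shows the inference side; the baseline results reported in the paper come from training YOLOv5 on the annotated VISEM-Tracking frames.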