Paper Title

DropTrack -- automatic droplet tracking using deep learning for microfluidic applications

Authors

Mihir Durve, Adriano Tiribocchi, Fabio Bonaccorso, Andrea Montessori, Marco Lauricella, Michal Bogdan, Jan Guzowski, Sauro Succi

Abstract

Deep neural networks are rapidly emerging as data analysis tools, often outperforming the conventional techniques used in complex microfluidic systems. One fundamental analysis frequently required in microfluidic experiments is counting and tracking droplets. Droplet tracking in dense emulsions is particularly challenging because the droplets move in tightly packed configurations; individual droplets in these dense clusters can be hard to resolve even for a human observer. Here, two cutting-edge deep learning algorithms for object detection (YOLO) and object tracking (DeepSORT) are combined into a single image analysis tool, DropTrack, to track droplets in microfluidic experiments. DropTrack analyzes input videos, extracts droplet trajectories, and infers other observables of interest, such as the number of droplets. Training an object detector network for droplet recognition with manually annotated images is a labor-intensive task and a persistent bottleneck. This work partly resolves this problem by training object detector networks (YOLOv5) with hybrid datasets containing real and synthetic images. We present an analysis of a double emulsion experiment as a case study to measure DropTrack's performance. For our test case, the YOLO network trained with 60% synthetic images shows droplet-counting performance similar to the one trained with 100% real images, while reducing the image annotation work by 60%. DropTrack's performance is measured in terms of mean average precision (mAP), mean squared error in droplet counts, and inference speed. The fastest configuration of DropTrack runs inference at about 30 frames per second, well within the standards for real-time image analysis.
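
The abstract describes a two-stage pipeline: a YOLO detector localizes droplets in each video frame, and DeepSORT associates those detections across frames to build per-droplet trajectories. The sketch below only illustrates that general pattern and is not DropTrack's actual implementation; it assumes the public ultralytics/yolov5 hub model, the third-party deep_sort_realtime package, and a hypothetical input video droplets.mp4.

```python
# Illustrative detection-plus-tracking loop, roughly mirroring the pipeline
# described in the abstract (YOLO per-frame detection -> DeepSORT association).
# Assumptions: ultralytics/yolov5 hub weights, deep_sort_realtime tracker,
# and a local video file 'droplets.mp4'.
import cv2
import torch
from deep_sort_realtime.deepsort_tracker import DeepSort

# Generic pretrained weights for illustration; a droplet-specific detector would
# load custom weights, e.g. torch.hub.load('ultralytics/yolov5', 'custom', path=...).
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
tracker = DeepSort(max_age=30)  # drop a track after 30 consecutive missed frames

cap = cv2.VideoCapture('droplets.mp4')  # hypothetical input video
trajectories = {}  # track_id -> list of (frame_index, x_center, y_center)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # YOLOv5 inference; results.xyxy[0] rows are [x1, y1, x2, y2, conf, cls].
    results = model(frame[:, :, ::-1])  # BGR -> RGB for the YOLOv5 AutoShape wrapper
    detections = []
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        # DeepSort expects ([left, top, width, height], confidence, class).
        detections.append(([x1, y1, x2 - x1, y2 - y1], conf, int(cls)))

    # Associate detections with existing tracks (Kalman filter + appearance features).
    tracks = tracker.update_tracks(detections, frame=frame)
    for track in tracks:
        if not track.is_confirmed():
            continue
        left, top, right, bottom = track.to_ltrb()
        trajectories.setdefault(track.track_id, []).append(
            (frame_idx, (left + right) / 2.0, (top + bottom) / 2.0)
        )

    frame_idx += 1

cap.release()
print(f'Tracked {len(trajectories)} droplets across {frame_idx} frames')
```

From the accumulated trajectories one can derive the observables mentioned in the abstract, such as per-frame droplet counts and droplet paths; the paper's own tool additionally reports mAP, counting error, and inference speed for different training mixes of real and synthetic images.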
