Paper Title


Forming Local Intersections of Projections for Classifying and Searching Histopathology Images

Paper Authors

Aditya Sriram, Shivam Kalra, Morteza Babaie, Brady Kieffer, Waddah Al Drobi, Shahryar Rahnamayan, Hany Kashani, Hamid R. Tizhoosh

Paper Abstract


In this paper, we propose a novel image descriptor called Forming Local Intersections of Projections (FLIP) and its multi-resolution version (mFLIP) for representing histopathology images. The descriptor is based on the Radon transform, wherein we apply parallel projections in small local neighborhoods of gray-level images. Using equidistant projection directions in each window, we extract unique and invariant characteristics of the neighborhood by taking the intersection of adjacent projections. Thereafter, we construct a histogram for each image, which we call the FLIP histogram. Various resolutions provide different FLIP histograms, which are then concatenated to form the mFLIP descriptor. Our experiments included training common networks from scratch and fine-tuning pre-trained networks to benchmark our proposed descriptor. Experiments were conducted on the publicly available datasets KIMIA Path24 and KIMIA Path960. For both datasets, the FLIP and mFLIP descriptors show promising results in all experiments. Using the KIMIA Path24 data, FLIP outperformed non-fine-tuned Inception-v3 and fine-tuned VGG16, and mFLIP outperformed fine-tuned Inception-v3 in feature extraction.
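The abstract outlines the FLIP pipeline: parallel projections along equidistant directions inside small windows, an "intersection" of adjacent projections, and a per-image histogram, concatenated across resolutions for mFLIP. A minimal pure-Python sketch of that pipeline follows; the window size, bin count, the choice of four axis-aligned/diagonal directions, and the reading of "intersection" as an elementwise minimum are illustrative assumptions, not the paper's exact parameters.

```python
def normalize(p):
    """Scale a projection vector so its entries sum to 1 (if nonzero)."""
    s = sum(p)
    return [x / s for x in p] if s else list(p)

def projections(win):
    """Parallel projections of a small square gray-level window along four
    equidistant directions (0°, 45°, 90°, 135°), each normalized."""
    n = len(win)
    p0 = [sum(win[r][c] for r in range(n)) for c in range(n)]             # vertical rays
    p45 = [sum(win[r][c] for r in range(n) for c in range(n) if r + c == d)
           for d in range(2 * n - 1)]                                     # anti-diagonal rays
    p90 = [sum(row) for row in win]                                       # horizontal rays
    p135 = [sum(win[r][c] for r in range(n) for c in range(n) if r - c == d)
            for d in range(-(n - 1), n)]                                  # diagonal rays
    return [normalize(p) for p in (p0, p45, p90, p135)]

def resample(p, m):
    """Linearly interpolate projection p onto m points so adjacent
    projections of different lengths can be compared elementwise."""
    if len(p) == 1:
        return [p[0]] * m
    out = []
    for i in range(m):
        x = i * (len(p) - 1) / (m - 1)
        lo = int(x)
        hi = min(lo + 1, len(p) - 1)
        out.append(p[lo] + (x - lo) * (p[hi] - p[lo]))
    return out

def flip_histogram(image, win=8, bins=16):
    """FLIP-style histogram: slide a window over the image (pixel values
    assumed in [0, 1]), intersect adjacent per-window projections (here:
    elementwise minimum, an assumption), and pool all resulting values
    into one normalized histogram per image."""
    hist = [0] * bins
    rows, cols = len(image), len(image[0])
    for r in range(0, rows - win + 1, win):
        for c in range(0, cols - win + 1, win):
            block = [row[c:c + win] for row in image[r:r + win]]
            ps = [resample(p, win) for p in projections(block)]
            for a, b in zip(ps, ps[1:]):          # adjacent projection pairs
                for v in (min(x, y) for x, y in zip(a, b)):
                    hist[min(int(v * bins), bins - 1)] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

def downscale(image):
    """Halve the resolution by 2x2 average pooling."""
    rows, cols = len(image) // 2 * 2, len(image[0]) // 2 * 2
    return [[(image[r][c] + image[r][c + 1] + image[r + 1][c] + image[r + 1][c + 1]) / 4.0
             for c in range(0, cols, 2)] for r in range(0, rows, 2)]

def mflip(image, levels=2, win=8, bins=16):
    """mFLIP sketch: concatenate FLIP histograms from successive resolutions."""
    feats = []
    for _ in range(levels):
        feats.extend(flip_histogram(image, win, bins))
        image = downscale(image)
    return feats
```

The resulting fixed-length vector can then be fed to a classifier or compared with nearest-neighbor search, matching the classification and retrieval use described in the abstract.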
