Title

Multi-view Information Bottleneck Without Variational Approximation

Authors

Qi Zhang, Shujian Yu, Jingmin Xin, Badong Chen

Abstract

By "intelligently" fusing the complementary information across different views, multi-view learning is able to improve the performance of classification tasks. In this work, we extend the information bottleneck principle to a supervised multi-view learning scenario and use the recently proposed matrix-based Rényi's $α$-order entropy functional to optimize the resulting objective directly, without the necessity of variational approximation or adversarial training. Empirical results on both synthetic and real-world datasets suggest that our method enjoys improved robustness to noise and redundant information in each view, especially given limited training samples. Code is available at~\url{https://github.com/archy666/MEIB}.
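The matrix-based Rényi's $α$-order entropy mentioned above estimates entropy directly from the eigenspectrum of a normalized Gram matrix, with no density estimation or variational bound. The sketch below (not the authors' released code; the function name, Gaussian kernel choice, and bandwidth `sigma` are illustrative assumptions) shows the core computation: build a kernel matrix, normalize it to unit trace, and evaluate $S_α(A) = \frac{1}{1-α}\log_2 \sum_i λ_i^α$.

```python
import numpy as np

def matrix_renyi_entropy(X, alpha=2.0, sigma=1.0):
    """Matrix-based Rényi α-order entropy estimate of samples X (n, d).

    Illustrative sketch: Gaussian kernel and bandwidth are assumptions,
    not the hyperparameters used in the paper.
    """
    # Pairwise squared distances and Gaussian Gram matrix
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    # Normalize to unit trace so the eigenvalues sum to 1
    A = K / np.trace(K)
    lam = np.clip(np.linalg.eigvalsh(A), 0.0, None)
    return (1.0 / (1.0 - alpha)) * np.log2(np.sum(lam ** alpha))
```

The estimate behaves like an entropy: n identical samples give 0 bits, while n well-separated samples approach the maximum of $\log_2 n$ bits.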
