Paper Title
Self-Alignment Pretraining for Biomedical Entity Representations
Paper Authors
Paper Abstract
Despite the widespread success of self-supervised learning via masked language models (MLM), accurately capturing fine-grained semantic relationships in the biomedical domain remains a challenge. This is of paramount importance for entity-level tasks such as entity linking, where the ability to model entity relations (especially synonymy) is pivotal. To address this challenge, we propose SapBERT, a pretraining scheme that self-aligns the representation space of biomedical entities. We design a scalable metric learning framework that can leverage UMLS, a massive collection of biomedical ontologies with 4M+ concepts. In contrast with previous pipeline-based hybrid systems, SapBERT offers an elegant one-model-for-all solution to the problem of medical entity linking (MEL), achieving a new state-of-the-art (SOTA) on six MEL benchmarking datasets. In the scientific domain, we achieve SOTA even without task-specific supervision. With substantial improvement over various domain-specific pretrained MLMs such as BioBERT, SciBERT and PubMedBERT, our pretraining scheme proves to be both effective and robust.
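The sketch below illustrates the self-alignment idea stated in the abstract: surface forms that are synonyms of the same concept are pulled together in the encoder's representation space with a metric-learning-style objective. This is only a minimal illustration under assumptions not given in the abstract: the encoder name, the toy synonym pairs, and the simple in-batch contrastive (InfoNCE) loss are placeholders standing in for the paper's scalable metric learning framework trained over 4M+ UMLS concepts.

```python
# Minimal self-alignment sketch (illustrative assumptions, not the paper's exact loss):
# pull synonymous entity names toward each other in the encoder's embedding space.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL = "bert-base-uncased"  # stand-in; the paper builds on biomedical MLMs such as PubMedBERT
tokenizer = AutoTokenizer.from_pretrained(MODEL)
encoder = AutoModel.from_pretrained(MODEL)

# Toy positive pairs: two names for the same concept (hypothetical examples,
# standing in for UMLS synonym sets keyed by concept ID).
pairs = [
    ("myocardial infarction", "heart attack"),
    ("hypertension", "high blood pressure"),
]

def encode(names):
    """Encode entity names with the [CLS] vector, L2-normalised."""
    batch = tokenizer(names, padding=True, truncation=True, return_tensors="pt")
    cls = encoder(**batch).last_hidden_state[:, 0]
    return F.normalize(cls, dim=-1)

anchors = encode([a for a, _ in pairs])
positives = encode([b for _, b in pairs])

# In-batch contrastive objective: each name should be closest to its own synonym
# and farther from names of other concepts in the batch.
logits = anchors @ positives.T / 0.07   # cosine similarities scaled by a temperature
labels = torch.arange(len(pairs))
loss = F.cross_entropy(logits, labels)
loss.backward()                          # one self-alignment training step
```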