Paper Title

Processing Long Legal Documents with Pre-trained Transformers: Modding LegalBERT and Longformer

Paper Authors

Dimitris Mamakas, Petros Tsotsi, Ion Androutsopoulos, Ilias Chalkidis

Paper Abstract

Pre-trained Transformers currently dominate most NLP tasks. They impose, however, limits on the maximum input length (512 sub-words in BERT), which are too restrictive in the legal domain. Even sparse-attention models, such as Longformer and BigBird, which increase the maximum input length to 4,096 sub-words, severely truncate texts in three of the six datasets of LexGLUE. Simpler linear classifiers with TF-IDF features can handle texts of any length, require far fewer resources to train and deploy, but are usually outperformed by pre-trained Transformers. We explore two directions to cope with long legal texts: (i) modifying a Longformer warm-started from LegalBERT to handle even longer texts (up to 8,192 sub-words), and (ii) modifying LegalBERT to use TF-IDF representations. The first approach is the best in terms of performance, surpassing a hierarchical version of LegalBERT, which was the previous state of the art in LexGLUE. The second approach leads to computationally more efficient models at the expense of lower performance, but the resulting models still outperform a linear SVM with TF-IDF features overall in long legal document classification.
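
As a point of reference for the TF-IDF baseline the abstract mentions, here is a minimal sketch of a TF-IDF + linear SVM document classifier in scikit-learn. The vectorizer settings and the toy documents are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch of a TF-IDF + linear SVM baseline: it accepts documents of
# any length and is cheap to train and deploy. All settings and the toy data
# below are assumptions for illustration, not the paper's exact setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy long-document classification data (hypothetical placeholders).
train_texts = [
    "full text of a very long legal document about contract law ...",
    "full text of another long legal document about criminal procedure ...",
]
train_labels = [0, 1]

# TF-IDF features over word unigrams and bigrams, fed to a linear SVM.
clf = make_pipeline(
    TfidfVectorizer(sublinear_tf=True, ngram_range=(1, 2)),
    LinearSVC(C=1.0),
)
clf.fit(train_texts, train_labels)

print(clf.predict(["full text of an unseen long legal document ..."]))
```

Unlike a Transformer encoder, this pipeline imposes no limit on input length, which is why the abstract uses it as the length-agnostic point of comparison for the modified LegalBERT and Longformer models.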
