Automatic Fact-Checking with Document-level Annotations using BERT and Multiple Instance Learning
DOI
10.18653/v1/2021.fever-1.11
Abstract
Automatic fact-checking is crucial for recognizing misinformation spreading on the internet. Most existing fact-checkers break down the process into several subtasks, one of which determines candidate evidence sentences that can potentially support or refute the claim to be verified; typically, evidence sentences with gold-standard labels are needed for this. In a more realistic setting, however, such sentence-level annotations are not available. In this paper, we tackle the natural language inference (NLI) subtask—given a document and a (sentence) claim, determine whether the document supports or refutes the claim—only using document-level annotations. Using fine-tuned BERT and multiple instance learning, we achieve 81.9% accuracy, significantly outperforming the existing results on the WikiFactCheck-English dataset.
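To make the described setup concrete, below is a minimal sketch of how multiple instance learning (MIL) can be combined with a BERT classifier for document-level NLI: each (claim, sentence) pair is scored individually, and the sentence-level logits are aggregated into a single document-level verdict. The model name, the max-pooling aggregator, and the two-way label set are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal MIL-over-BERT sketch for document-level claim verification.
# Assumptions (not from the paper): bert-base-uncased as the encoder,
# max-pooling as the MIL aggregator, and binary support/refute labels.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
model.eval()

def document_verdict(claim: str, sentences: list[str]) -> torch.Tensor:
    """Score each (claim, sentence) instance, then aggregate the
    instance logits into a document-level prediction."""
    enc = tokenizer(
        [claim] * len(sentences),  # pair the claim with every sentence
        sentences,
        padding=True,
        truncation=True,
        return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(**enc).logits  # shape: (num_sentences, num_labels)
    # MIL bag aggregation: the document's label is driven by its most
    # confident sentence, so only document-level supervision is needed
    # at training time (no gold evidence-sentence labels).
    doc_logits, _ = logits.max(dim=0)
    return doc_logits.softmax(dim=-1)

doc = ["The Eiffel Tower is in Paris.", "It was completed in 1889."]
print(document_verdict("The Eiffel Tower is located in France.", doc))
```

In this framing, a document is a "bag" of sentence instances carrying only a bag-level label; max-pooling is one standard MIL aggregation choice, and training would backpropagate the document-level loss through the selected instances.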
Document Type
Article
Publication Date
2021
Publisher Statement
Copyright © 2021, Association for Computational Linguistics.
Recommended Citation
Sathe, Aalok, and Joonsuk Park. “Automatic Fact-Checking with Document-Level Annotations Using BERT and Multiple Instance Learning.” In Proceedings of the Fourth Workshop on Fact Extraction and VERification (FEVER), 101–107. Dominican Republic: Association for Computational Linguistics, 2021. https://aclanthology.org/2021.fever-1.11.