Automatic Fact-Checking with Document-level Annotations using BERT and Multiple Instance Learning

DOI

10.18653/v1/2021.fever-1.11

Abstract

Automatic fact-checking is crucial for recognizing misinformation spreading on the internet. Most existing fact-checkers break the process into several subtasks, one of which identifies candidate evidence sentences that can potentially support or refute the claim to be verified; this step typically requires evidence sentences with gold-standard labels. In a more realistic setting, however, such sentence-level annotations are not available. In this paper, we tackle the natural language inference (NLI) subtask using only document-level annotations: given a document and a (sentence) claim, determine whether the document supports or refutes the claim. Using fine-tuned BERT and multiple instance learning, we achieve 81.9% accuracy, significantly outperforming existing results on the WikiFactCheck-English dataset.
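
The abstract does not give implementation details, but the Python sketch below illustrates one way multiple instance learning (MIL) can be paired with a fine-tuned BERT encoder when only document-level labels are available: each (claim, sentence) pair in a document is treated as an instance, and the instance scores are aggregated into a single document-level (bag) prediction that is trained against the document label. The class name MilBertFactChecker, the bert-base-uncased checkpoint, the max-pooling aggregation, the label mapping, and the example claim are illustrative assumptions, not the paper's actual configuration.

# Hedged sketch: MIL over (claim, sentence) pairs with document-level supervision.
# The aggregation (max-pooling) and all names below are assumptions for illustration.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class MilBertFactChecker(nn.Module):
    """Scores each (claim, sentence) pair with BERT, then aggregates the
    per-sentence (instance) scores into a document-level (bag) prediction."""

    def __init__(self, model_name: str = "bert-base-uncased", num_labels: int = 2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask, token_type_ids):
        # input_ids: (num_sentences, seq_len), one row per claim-sentence pair.
        outputs = self.encoder(
            input_ids=input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
        )
        cls = outputs.last_hidden_state[:, 0]       # (num_sentences, hidden)
        instance_logits = self.classifier(cls)      # (num_sentences, num_labels)
        # MIL aggregation: let the most confident sentence drive the document
        # label by max-pooling instance logits over the sentence axis.
        bag_logits, _ = instance_logits.max(dim=0)  # (num_labels,)
        return bag_logits, instance_logits


if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = MilBertFactChecker()

    claim = "The Eiffel Tower is located in Berlin."
    sentences = [
        "The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
        "It was named after the engineer Gustave Eiffel.",
    ]
    enc = tokenizer(
        [claim] * len(sentences), sentences,
        padding=True, truncation=True, return_tensors="pt",
    )
    bag_logits, _ = model(enc["input_ids"], enc["attention_mask"], enc["token_type_ids"])
    # Training uses only a document-level label (here, a hypothetical 1 = REFUTES).
    loss = nn.functional.cross_entropy(bag_logits.unsqueeze(0), torch.tensor([1]))
    print(bag_logits, loss.item())

Max-pooling is only one common MIL aggregator; attention-based or noisy-OR pooling would be equally plausible choices under the same document-level supervision.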

Document Type

Article

Publication Date

2021

Publisher Statement

Copyright © 2021, Association for Computational Linguistics.
