GVdoc: Graph-based visual document classification

DOI

10.18653/v1/2023.findings-acl.329

Abstract

The robustness of a model for real-world deployment is determined by how well it performs on unseen data and how well it distinguishes between in-domain and out-of-domain samples. Visual document classifiers have shown impressive performance on in-distribution test sets. However, they tend to have a hard time correctly classifying and differentiating out-of-distribution examples. Image-based classifiers lack the text component, whereas multi-modality transformer-based models face the token serialization problem in visual documents due to their diverse layouts. They also require a lot of computing power during inference, making them impractical for many real-world applications. We propose GVdoc, a graph-based document classification model that addresses both of these challenges. Our approach generates a document graph based on its layout and then trains a graph neural network to learn node and graph embeddings. Through experiments, we show that our model, even with fewer parameters, outperforms state-of-the-art models on out-of-distribution data while retaining comparable performance on the in-distribution test set.
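
The abstract only sketches the pipeline at a high level: build a graph over a document's layout elements, then train a graph neural network to produce node and graph (document) embeddings for classification. As a rough, hedged illustration of that general idea, the following PyTorch Geometric sketch connects OCR word boxes by nearest layout neighbours and classifies the pooled graph embedding. The k-nearest-neighbour edge rule, feature choices, and two-layer GCN are illustrative assumptions, not the actual GVdoc architecture described in the paper.

```python
# Illustrative sketch only: builds a k-NN layout graph over OCR word boxes and
# classifies the whole document with a small GCN. The edge rule, features, and
# architecture are assumptions for illustration, not the GVdoc model itself.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, global_mean_pool


def layout_graph(boxes: torch.Tensor, feats: torch.Tensor, k: int = 4) -> Data:
    """boxes: (N, 4) word bounding boxes (x0, y0, x1, y1); feats: (N, F) node features."""
    centers = torch.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                           (boxes[:, 1] + boxes[:, 3]) / 2], dim=1)
    dist = torch.cdist(centers, centers)                  # pairwise center distances
    knn = dist.topk(k + 1, largest=False).indices[:, 1:]  # k nearest neighbours, excluding self
    src = torch.arange(boxes.size(0)).repeat_interleave(k)
    edge_index = torch.stack([src, knn.reshape(-1)], dim=0)
    return Data(x=feats, edge_index=edge_index)


class DocGNN(torch.nn.Module):
    def __init__(self, in_dim: int, hidden: int, num_classes: int):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, num_classes)

    def forward(self, data: Data) -> torch.Tensor:
        x = F.relu(self.conv1(data.x, data.edge_index))
        x = F.relu(self.conv2(x, data.edge_index))
        # Pool node embeddings into a single graph-level (document) embedding.
        batch = getattr(data, "batch", None)
        if batch is None:
            batch = torch.zeros(x.size(0), dtype=torch.long)
        return self.head(global_mean_pool(x, batch))
```

In such a setup, training would apply a standard cross-entropy loss to the logits over document classes; the node embeddings come from message passing over the layout graph, and the pooled vector serves as the graph embedding the abstract refers to.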

Document Type

Article

Publication Date

2023

Publisher Statement

ACL materials are Copyright © 1963–2024 ACL; other materials are copyrighted by their respective copyright holders. Materials prior to 2016 here are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 International License. Permission is granted to make copies for the purposes of teaching and research. Materials published in or after 2016 are licensed on a Creative Commons Attribution 4.0 International License.
