Date of Award

Document Type

Degree Name
Bachelor of Science

First Advisor
Dr. Joonsuk Park

Argument mining is the automatic identification and extraction of structure from argumentative language. Previous work has constrained argument structures to strict trees in order to exploit efficient tree-specific techniques; however, arguments in the wild are unlikely to exhibit such constrained structure. Given the recent trend of fine-tuning large pre-trained models to reach state-of-the-art performance on a variety of Natural Language Processing (NLP) tasks, we look to leverage the power of these deep contextualized word embeddings for the task of non-tree argument mining. In this paper, we introduce a new pipeline that uses pre-trained BERT-based models together with Proposition-Level Bi-affine Attention and a Weighted Cross-Entropy Loss to predict arguments whose structure forms a directed acyclic graph. Our experiments demonstrate the efficacy of deep contextualized word embeddings from BERT-based models, while also suggesting future directions involving recurrence for modelling hierarchical relationships.
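To make the two named components concrete, the following is a minimal NumPy sketch of how proposition-level biaffine scoring and a weighted cross-entropy objective could fit together. All names, shapes, and parameter values here are illustrative assumptions, not the thesis's actual implementation: the proposition embeddings `H` stand in for pooled BERT vectors, the link targets are randomly generated, and the model reduces each directed pair (i, j) to a single "i links to j" logit.

```python
import numpy as np

rng = np.random.default_rng(0)

def biaffine_link_scores(H, U, W, b):
    """Score every ordered proposition pair (i, j).

    H : (n, d) proposition embeddings (stand-ins for pooled BERT vectors).
    Returns an (n, n) matrix of logits, s_ij = h_i^T U h_j + W [h_i; h_j] + b.
    """
    n, d = H.shape
    bilinear = H @ U @ H.T                                 # h_i^T U h_j term
    linear = (H @ W[:d])[:, None] + (H @ W[d:])[None, :]   # W [h_i; h_j] term
    return bilinear + linear + b

def weighted_bce(logits, targets, pos_weight):
    """Weighted binary cross-entropy over all candidate links.

    Links are sparse in a DAG, so the positive (link) class is upweighted
    by `pos_weight` -- the role a weighted cross-entropy loss plays here.
    """
    log_p = -np.logaddexp(0.0, -logits)    # log sigmoid(x), numerically stable
    log_1mp = -np.logaddexp(0.0, logits)   # log (1 - sigmoid(x))
    w = np.where(targets == 1.0, pos_weight, 1.0)
    return -(w * (targets * log_p + (1.0 - targets) * log_1mp)).mean()

# Toy example: 5 propositions with 16-dimensional embeddings.
n, d = 5, 16
H = rng.normal(size=(n, d))
U = rng.normal(scale=0.1, size=(d, d))
W = rng.normal(scale=0.1, size=(2 * d,))
scores = biaffine_link_scores(H, U, W, b=0.0)
targets = (rng.random((n, n)) < 0.2).astype(float)  # sparse, DAG-like links
loss = weighted_bce(scores, targets, pos_weight=4.0)
print(scores.shape, float(loss))
```

Because the biaffine scorer produces an independent logit per ordered pair rather than a single head per proposition, nothing forces the predicted links into a tree, which is what allows the output structure to be a general directed acyclic graph.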