The Mask One at a Time Framework for Detecting the Relationship Between Financial Entities

EasyChair Preprint 10463
4 pages • Date: June 28, 2023

Abstract

In the financial domain, understanding the relationship between two entities helps in understanding financial texts. In this paper, we introduce the Mask One At a Time (MOAT) framework for detecting the relationship between financial entities. Subsequently, we benchmark its performance against existing state-of-the-art discriminative and generative Large Language Models (LLMs). We use SEC-BERT embeddings along with one-hot encoded vectors of the entity types and their relation group as features. We benchmark MOAT against three such open-source LLMs, namely Falcon, Dolly, and MPT, under zero-shot and few-shot settings. The results show that MOAT outperforms these LLMs.

Keyphrases: financial texts, relation extraction, large language models
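For readers who want a concrete picture of the feature construction the abstract describes, here is a minimal sketch: a SEC-BERT sentence embedding concatenated with one-hot vectors for the two entity types and the relation group. The checkpoint name (`nlpaueb/sec-bert-base`), the type and group inventories, and the mean-pooling choice are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of the feature vector: SEC-BERT embedding + one-hot
# encodings of the two entity types and the relation group.
# ENTITY_TYPES and RELATION_GROUPS are hypothetical inventories;
# the paper's actual label sets and pooling strategy may differ.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nlpaueb/sec-bert-base")  # assumed checkpoint
model = AutoModel.from_pretrained("nlpaueb/sec-bert-base")

ENTITY_TYPES = ["ORG", "PER", "MONEY", "DATE"]           # hypothetical
RELATION_GROUPS = ["ownership", "employment", "other"]   # hypothetical

def one_hot(label: str, vocab: list) -> np.ndarray:
    vec = np.zeros(len(vocab), dtype=np.float32)
    vec[vocab.index(label)] = 1.0
    return vec

def build_features(sentence: str, head_type: str, tail_type: str, group: str) -> np.ndarray:
    # Mean-pool the last hidden states as the sentence embedding
    # (a common choice; the paper may pool differently).
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape (1, seq_len, 768)
    emb = hidden.mean(dim=1).squeeze(0).numpy()
    return np.concatenate([
        emb,
        one_hot(head_type, ENTITY_TYPES),
        one_hot(tail_type, ENTITY_TYPES),
        one_hot(group, RELATION_GROUPS),
    ])

features = build_features("Acme Corp acquired Beta LLC in 2021.", "ORG", "ORG", "ownership")
print(features.shape)  # 768 + 4 + 4 + 3 = (779,)
```

Such a concatenated vector would then feed a downstream relation classifier; the sketch only covers the feature side that the abstract makes explicit.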