
Investigating Causal Reasoning in Emerging LLM Architectures

EasyChair Preprint 15143

18 pages · Date: September 28, 2024

Abstract

The rapid advancement of Large Language Models (LLMs) has transformed the landscape of artificial intelligence, enabling unprecedented capabilities in natural language processing. However, the incorporation of causal reasoning into these models remains a critical challenge. This study investigates how emerging LLM architectures handle causal reasoning, assessing their performance in tasks that require causal inference and analysis. Through a comparative evaluation of selected LLMs, we explore their strengths and weaknesses in identifying causal relationships, utilizing experimental frameworks designed to simulate real-world causal reasoning scenarios. Our findings reveal significant variations in causal reasoning capabilities across different architectures, highlighting common errors and limitations. Additionally, we propose strategies to enhance causal reasoning in LLMs, including the integration of external knowledge bases and the implementation of innovative training techniques. The implications of this research extend beyond technical enhancements, raising important ethical considerations regarding bias, transparency, and societal impact. This investigation contributes to a deeper understanding of how LLMs can be improved to support more accurate decision-making and reasoning, ultimately paving the way for responsible AI deployment in critical sectors.

Keyphrases: accountability, causal reasoning, large language models (LLMs), transparency

BibTeX entry
BibTeX has no dedicated entry type for preprints; the following `@booklet` entry is a workaround that produces a correct reference:
@booklet{EasyChair:15143,
  author       = {Docas Akinyele and Godwin Olaoye},
  title        = {Investigating Causal Reasoning in Emerging LLM Architectures},
  howpublished = {EasyChair Preprint 15143},
  year         = {2024}}
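
If your bibliography tooling supports it, a `@misc` entry with a `note` field is another common way to cite a preprint. This is a sketch of an equivalent entry, not an official EasyChair recommendation:

```bibtex
@misc{EasyChair:15143,
  author = {Docas Akinyele and Godwin Olaoye},
  title  = {Investigating Causal Reasoning in Emerging LLM Architectures},
  year   = {2024},
  note   = {EasyChair Preprint 15143}}
```

With biblatex, `@misc` (or `@unpublished`) entries like this render the `note` field after the title, which keeps the preprint number visible in the reference list.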