Bridging the Gap: How Neuro-Evolutionary Methods Enhance Explainable AI

EasyChair Preprint 14329, version 1 • 9 pages • Date: August 7, 2024

Abstract

In the evolving landscape of artificial intelligence (AI), the need for explainable AI (XAI) has become increasingly critical, particularly in high-stakes domains where decisions must be transparent and interpretable. This article explores the intersection of neuro-evolutionary methods and XAI, highlighting how the former can bridge the gap between complex AI models and their comprehensibility. Neuro-evolutionary algorithms, which simulate the process of natural selection to optimize neural networks, offer a unique approach to enhancing the explainability of AI systems. By evolving neural architectures that are inherently more interpretable, these methods can produce models that are not only accurate but also understandable by human stakeholders. This paper delves into the mechanisms by which neuro-evolutionary techniques contribute to XAI, presenting case studies and examples from various applications. It also discusses the potential benefits, challenges, and future directions of integrating neuro-evolutionary approaches into the development of explainable AI, with the aim of fostering greater trust in, and adoption of, AI technologies across different sectors.

Keyphrases: AI Interpretability, AI Transparency, AI Explainability, Artificial Intelligence, Evolutionary Algorithms, Explainable AI (XAI), Glass Box Models, Model Interpretation, Neuro-Evolutionary Methods, Black-Box Models, Machine Learning, Neural Networks