
Explainable AI in Network Anomaly Detection: Enhancing Transparency and Trust

EasyChair Preprint 14124

20 pages
Date: July 25, 2024

Abstract

Network anomaly detection plays a crucial role in ensuring the security and reliability of computer networks. With the rapid advancement of Artificial Intelligence (AI), deep learning models in particular have shown great promise in detecting network anomalies. However, the lack of transparency and interpretability of these models has raised concerns about their trustworthiness and acceptance in practical applications.

This research article explores the concept of explainable AI in the context of network anomaly detection. It highlights the importance of transparency and interpretability in AI models, especially when they are applied to critical domains such as network security. The article discusses various techniques and approaches that can be employed to enhance the explainability of AI-based network anomaly detection systems; a minimal illustration of one such technique follows.
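To make the idea concrete, here is a minimal sketch of one common post-hoc explanation technique, occlusion-style feature attribution, applied to an anomaly detector. The detector (scikit-learn's IsolationForest), the flow feature names, and the synthetic data are all illustrative assumptions, not taken from the paper:

# A minimal sketch of a post-hoc, occlusion-style explanation for an
# anomaly detector. Feature names and data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical network-flow features: duration, bytes sent, packet count.
feature_names = ["duration_s", "bytes_sent", "packet_count"]
normal = rng.normal(loc=[2.0, 500.0, 40.0],
                    scale=[0.5, 100.0, 8.0], size=(1000, 3))

model = IsolationForest(random_state=0).fit(normal)

# A suspicious flow: a huge byte count relative to the training data.
flow = np.array([[2.1, 5000.0, 42.0]])
base_score = model.score_samples(flow)[0]  # lower = more anomalous

# Occlusion: replace one feature at a time with its training median and
# measure how much the anomaly score recovers; large recoveries point to
# the features that drove the detection.
medians = np.median(normal, axis=0)
for i, name in enumerate(feature_names):
    occluded = flow.copy()
    occluded[0, i] = medians[i]
    delta = model.score_samples(occluded)[0] - base_score
    print(f"{name:>14}: score change {delta:+.3f}")

In this toy setup, occluding bytes_sent should recover the anomaly score the most, giving an analyst a concrete, feature-level reason the flow was flagged.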

Furthermore, this study emphasizes the benefits of explainable AI in improving trust and acceptance among users, network administrators, and other stakeholders. By providing clear explanations of how AI models detect network anomalies, these systems can foster a deeper understanding of the underlying decision process and increase confidence in their outputs.

Keyphrases: AI algorithms, AI-based network, Anomaly Detection Systems

BibTeX entry
BibTeX does not have an entry type for preprints; the following workaround produces the correct reference:
@booklet{EasyChair:14124,
  author    = {Brown Klinton and Axel Egon and Sabir Kashar},
  title     = {Explainable AI in Network Anomaly Detection: Enhancing Transparency and Trust},
  howpublished = {EasyChair Preprint 14124},
  year      = {EasyChair, 2024}}