Explainable Deep Learning Models in Medical Imaging

EasyChair Preprint 13834 • 15 pages • Date: July 6, 2024

Abstract

Medical imaging has benefited significantly from advances in deep learning, leading to improved diagnostic accuracy and efficiency. However, the opacity of deep learning models has hindered their broader acceptance in clinical settings. Explainable deep learning models address this issue by providing insight into model decision-making processes, ensuring transparency, reliability, and trustworthiness in medical diagnostics.

Objectives: This research aims to explore the development and application of explainable deep learning models in medical imaging. The primary objectives are:
Methods: The research will adopt a multi-phase approach encompassing literature review, methodology development, and empirical validation. First, a systematic review of the existing literature will categorize and analyze current explainability techniques such as saliency maps, attention mechanisms, and concept attribution methods. Building on this foundation, novel approaches or enhancements to existing methods will be developed to address identified gaps. These methodologies will be integrated into deep learning architectures widely used in medical imaging, such as convolutional neural networks (CNNs) and transformers. Experiments will be conducted on diverse medical imaging datasets, including, but not limited to, MRI, CT, and X-ray images.

Keyphrases: Attention Mechanisms, Clinical Trustworthiness, Convolutional Neural Networks, Deep Learning, Diagnostic Accuracy, Explainable AI, Interpretability, Medical Imaging, Saliency Maps
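Of the explainability techniques named above, the gradient-based saliency map is the simplest to illustrate. The sketch below is a minimal, hypothetical example and not the preprint's implementation: it assumes PyTorch, an off-the-shelf ResNet-18 as a stand-in classifier, and a random tensor in place of a preprocessed medical image.

```python
# Minimal vanilla-gradient saliency map for a CNN classifier.
# All names here are illustrative assumptions, not the paper's setup.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # hypothetical stand-in classifier
model.eval()

# One 3-channel image, e.g. a chest X-ray preprocessed to 224x224.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(image)
pred = logits.argmax(dim=1).item()  # predicted class index
logits[0, pred].backward()          # gradient of the class score w.r.t. pixels

# Heatmap: max absolute gradient across channels -> shape (224, 224).
# Bright regions are pixels whose perturbation most changes the score.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
```

Visualizing `saliency` alongside the input image highlights the regions that drove the prediction; attention mechanisms and concept attribution methods pursue the same goal using model-internal signals rather than input gradients.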