ENHANCING THE COMPREHENSIBILITY OF MEDICAL PREDICTION MODELS WITH KNOWLEDGE GRAPHS: A NEURO-SYMBOLIC EXPLAINABLE AI APPROACH

Date

2024-10-31

Abstract

A challenge in using machine learning (ML) models for decision support in critical domains such as healthcare is their lack of transparency in making predictions. eXplainable Artificial Intelligence (XAI) aims to explain the underlying decision logic and feature importance of black-box ML models in order to build trust in their use. Despite advances in XAI, the comprehensibility of the explanations produced for black-box ML models remains a challenge. To enhance comprehensibility, Neuro-Symbolic (NeSy) explainability approaches incorporate external knowledge sources that provide contextual information beyond the input features used in generating explanations. This thesis proposes a NeSy XAI framework to enhance the comprehensibility of explanations for ML models’ predictions in clinical settings, aiming to bridge the gap between human-comprehensible explanations and those provided by traditional XAI methods. Our approach integrates data-driven and knowledge-driven methodologies to offer conceptually salient, domain-context-rich explanations. The data-driven component of our framework employs model-agnostic surrogate modeling to generate an optimized set of transparent decision paths, selected on the metrics of fidelity, coverage, confidence, and compactness, to represent the decision logic of black-box models. The knowledge-driven component of our framework integrates external domain knowledge into the decision paths through a Semantic Explanation Knowledge Graph (SeE-KG) to generate semantically rich, context-sensitive, and comprehensive explanations. We developed a graph-based visualization system that allows users to query the SeE-KG in near-natural language for localized, context-specific insights and to explore dataset-wide trends. The framework’s practical application is demonstrated on the complex task of organ allocation, specifically kidney transplantation. Using a comprehensive dataset of kidney transplants sourced from the Scientific Registry of Transplant Recipients (SRTR), the framework generates explanations for graft survival predictions, highlighting the underlying factors contributing to the outcomes (graft survival or failure) across donor-recipient combinations.
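To make the data-driven component concrete, the following is a minimal sketch of how a surrogate decision tree and the four path metrics could be computed with scikit-learn. The metric formulations assumed here (agreement rate for fidelity, leaf support for coverage, leaf purity against black-box labels for confidence, path depth for compactness) are common illustrative choices, not necessarily the exact definitions used in the thesis.

```python
# Illustrative sketch: fit a decision tree as a model-agnostic surrogate to a
# black-box model's predictions, then score each root-to-leaf decision path.
import numpy as np
from sklearn.ensemble import RandomForestClassifier  # stands in for the black box
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X, y)
y_bb = black_box.predict(X)  # surrogate is trained on black-box labels, not ground truth

surrogate = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y_bb)
y_sur = surrogate.predict(X)

# Fidelity: how often the surrogate agrees with the black box overall.
fidelity = np.mean(y_sur == y_bb)

# Per-path metrics: each leaf corresponds to one transparent decision path.
leaf_ids = surrogate.apply(X)
for leaf in np.unique(leaf_ids):
    mask = leaf_ids == leaf
    coverage = mask.mean()  # fraction of instances reaching this leaf
    confidence = np.mean(y_bb[mask] == y_sur[mask][0])  # leaf purity vs. black box
    # Compactness: number of split conditions on the path (nodes minus the leaf).
    compactness = surrogate.decision_path(X[mask][:1]).toarray().sum() - 1
    print(f"leaf {leaf}: coverage={coverage:.2f}, "
          f"confidence={confidence:.2f}, compactness={compactness}")

print(f"overall fidelity: {fidelity:.2f}")
```

An optimizer would then select the subset of paths that trades these metrics off, as the abstract describes; that selection step is omitted here.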
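The knowledge-driven component can likewise be pictured as annotating decision paths with triples in a knowledge graph and querying them for context-specific insights. The sketch below uses rdflib with a hypothetical SeE-KG namespace and schema (DecisionPath, predicts, hasCondition, mappedTo are all illustrative assumptions); since the thesis's near-natural-language interface is not detailed in the abstract, a plain SPARQL query stands in for it.

```python
# Illustrative sketch: annotate one decision path as RDF triples and link a raw
# feature to a domain concept, in the spirit of the SeE-KG.
from rdflib import Graph, Namespace, Literal, RDF

SEEKG = Namespace("http://example.org/seekg#")  # hypothetical namespace
g = Graph()
g.bind("seekg", SEEKG)

path = SEEKG["path_01"]
g.add((path, RDF.type, SEEKG.DecisionPath))
g.add((path, SEEKG.predicts, Literal("graft failure")))
g.add((path, SEEKG.hasCondition, Literal("donor_age > 60")))
g.add((path, SEEKG.hasCondition, Literal("cold_ischemia_time > 24h")))
# Link a feature to an external domain concept for context-rich explanations.
g.add((SEEKG["cold_ischemia_time"], SEEKG.mappedTo,
       Literal("domain concept: cold ischemia (hypothetical mapping)")))

# A localized, context-specific query over the graph.
q = """
SELECT ?cond WHERE {
  ?p a seekg:DecisionPath ;
     seekg:predicts "graft failure" ;
     seekg:hasCondition ?cond .
}
"""
for row in g.query(q):
    print(row.cond)
```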

Keywords

Explainable AI, Neuro-symbolic XAI, Medical outcome prediction, Model-agnostic explainability, Neuro-symbolic explainability, Knowledge graphs, Post-hoc explainability, Semantic annotation, Kidney transplant prediction
