ENHANCING THE COMPREHENSIBILITY OF MEDICAL PREDICTION MODELS WITH KNOWLEDGE GRAPHS: A NEURO-SYMBOLIC EXPLAINABLE AI APPROACH
dc.contributor.author | Rad, Jaber
dc.contributor.copyright-release | Not Applicable
dc.contributor.degree | Doctor of Philosophy
dc.contributor.department | Faculty of Computer Science
dc.contributor.ethics-approval | Not Applicable
dc.contributor.external-examiner | Dr. Enea Parimbelli
dc.contributor.manuscripts | Not Applicable
dc.contributor.thesis-reader | Dr. Samina Abidi
dc.contributor.thesis-reader | Dr. Hassan Sajjad
dc.contributor.thesis-supervisor | Dr. Syed Sibte Raza Abidi
dc.contributor.thesis-supervisor | Dr. Karthik Tennankore
dc.date.accessioned | 2024-11-04T15:45:55Z
dc.date.available | 2024-11-04T15:45:55Z
dc.date.defence | 2024-10-15
dc.date.issued | 2024-10-31
dc.description.abstract | A challenge in using machine learning (ML) models for decision support in critical domains such as healthcare is their lack of transparency in making predictions. eXplainable Artificial Intelligence (XAI) aims to explain the underlying decision logic and feature importance of black-box ML models in order to build trust in their use. Despite advances in XAI, the comprehensibility of the explanations generated for black-box ML models remains a challenge. To enhance comprehensibility, Neuro-Symbolic (NeSy) explainability approaches incorporate external knowledge sources that provide contextual information beyond the input features alone. This thesis proposes a NeSy XAI framework to enhance the comprehensibility of explanations for ML models’ predictions in clinical settings, aiming to bridge the gap between human-comprehensible explanations and those provided by traditional XAI methods. Our approach integrates data-driven and knowledge-driven methodologies to offer conceptually salient, domain-context-rich explanations. The data-driven component of our framework employs model-agnostic surrogate modeling to generate a set of transparent decision paths, optimized with respect to fidelity, coverage, confidence, and compactness, that represents the decision logic of black-box models (a minimal illustrative sketch of this step follows the record below). The knowledge-driven component integrates external domain knowledge with the decision paths in the form of a Semantic Explanation Knowledge Graph (SeE-KG) to generate semantically rich, context-sensitive, and comprehensible explanations. We developed a graph-based visualization system that allows users to query the SeE-KG in near-natural language for localized, context-specific insights and to explore dataset-wide trends. The framework’s practical application is demonstrated in the complex task of organ allocation, specifically kidney transplantation. Using a comprehensive dataset of kidney transplants sourced from the Scientific Registry of Transplant Recipients (SRTR), the framework generates explanations for graft survival predictions, highlighting the underlying factors contributing to the outcomes (graft survival or failure) across donor-recipient combinations.
dc.identifier.uri | https://hdl.handle.net/10222/84687
dc.language.iso | en_US
dc.subject | Explainable AI
dc.subject | Neuro-symbolic XAI
dc.subject | Medical outcome prediction
dc.subject | Model-agnostic explainability
dc.subject | Neuro-symbolic explainability
dc.subject | Knowledge graphs
dc.subject | Post-hoc explainability
dc.subject | Semantic annotation
dc.subject | Kidney transplant prediction
dc.title | ENHANCING THE COMPREHENSIBILITY OF MEDICAL PREDICTION MODELS WITH KNOWLEDGE GRAPHS: A NEURO-SYMBOLIC EXPLAINABLE AI APPROACH |