ENHANCING THE COMPREHENSIBILITY OF MEDICAL PREDICTION MODELS WITH KNOWLEDGE GRAPHS: A NEURO-SYMBOLIC EXPLAINABLE AI APPROACH

dc.contributor.author: Rad, Jaber
dc.contributor.copyright-release: Not Applicable
dc.contributor.degree: Doctor of Philosophy
dc.contributor.department: Faculty of Computer Science
dc.contributor.ethics-approval: Not Applicable
dc.contributor.external-examiner: Dr. Enea Parimbelli
dc.contributor.manuscripts: Not Applicable
dc.contributor.thesis-reader: Dr. Samina Abidi
dc.contributor.thesis-reader: Dr. Hassan Sajjad
dc.contributor.thesis-supervisor: Dr. Syed Sibte Raza Abidi
dc.contributor.thesis-supervisor: Dr. Karthik Tennankore
dc.date.accessioned: 2024-11-04T15:45:55Z
dc.date.available: 2024-11-04T15:45:55Z
dc.date.defence: 2024-10-15
dc.date.issued: 2024-10-31
dc.description.abstract: A challenge in using machine learning (ML) models for decision support in critical domains such as healthcare is their lack of transparency in making predictions. eXplainable Artificial Intelligence (XAI) aims to explain the underlying decision logic and feature importance of black-box ML models in order to build trust in their use. Despite advanced XAI approaches, the comprehensibility of black-box ML models' explanations remains a challenge. To enhance the comprehensibility of explanations, Neuro-Symbolic (NeSy) explainability approaches incorporate external knowledge sources to provide contextual information beyond just the use of input features in generating explanations. This thesis proposes a NeSy XAI framework to enhance the comprehensibility of explanations for ML models' predictions in clinical settings, aiming to bridge the gap between human-comprehensible explanations and those provided by traditional XAI methods. Our approach integrates data-driven and knowledge-driven methodologies to offer conceptually salient, domain context-rich explanations. The data-driven component of our framework employs model-agnostic surrogate modeling to generate an optimized set of transparent decision paths, based on the metrics of fidelity, coverage, confidence, and compactness, to represent the decision logic of black-box models. The knowledge-driven component of our framework integrates external domain knowledge into the decision paths in terms of a Semantic Explanation Knowledge Graph (SeE-KG) to generate semantically rich, context-sensitive, and comprehensive explanations. We developed a graph-based visualization system that allows users to query the SeE-KG in near-natural language for localized, context-specific insights and to explore dataset-wide trends. The framework's practical application is demonstrated in the complex task of organ allocation, specifically kidney transplantation. Using a comprehensive dataset of kidney transplants sourced from the Scientific Registry of Transplant Recipients (SRTR), the framework generates explanations for graft survival predictions, highlighting the underlying factors contributing to the outcomes (graft survival or failure) across donor-recipient combinations.
dc.identifier.uri: https://hdl.handle.net/10222/84687
dc.language.iso: en_US
dc.subject: Explainable AI
dc.subject: Neuro-symbolic XAI
dc.subject: Medical outcome prediction
dc.subject: Model-agnostic explainability
dc.subject: Neuro-symbolic explainability
dc.subject: Knowledge graphs
dc.subject: Post-hoc explainability
dc.subject: Semantic annotation
dc.subject: Kidney transplant prediction
dc.title: ENHANCING THE COMPREHENSIBILITY OF MEDICAL PREDICTION MODELS WITH KNOWLEDGE GRAPHS: A NEURO-SYMBOLIC EXPLAINABLE AI APPROACH

Files

Original bundle

Name: JaberRad2024.pdf
Size: 6.77 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.03 KB
Format: Item-specific license agreed upon to submission