| Source: |
Duke, S. A., Sandøe, P., Lund, T. B., Abenavoli, E. M., Beyer, T., Ferrara, D., Frille, A., Gruenert, S., Sabri, O., Sciagrà, R., Pepponi, M., Swen, H., Tönjes, A., Wirtz, H., Yu, J., Sundar, L. K. S. & Holm, S. 2026, 'Hyper-selective explainability: an empirical case study of the utility of explainability in a clinical decision support system', AI and Ethics, vol. .... |
| Description: |
Explainability is a leading solution offered to address the challenge posed by the black-box nature of AI. However, much can go wrong when applying explainability in practice, and its success is far from certain. Moreover, there is insufficient empirical data on the effectiveness of concrete explainability efforts. We examined an explainability scenario for an AI decision support tool under development for the early detection of cancer-related cachexia, a potentially fatal metabolic syndrome. We conducted 13 interviews with clinicians who deal with cachexia, asked about their prior experience with AI tools and their views on explainability, and presented them with an explainability scenario based on the Shapley Additive Explanations (SHAP) method. Most of the clinicians we interviewed had limited prior experience with AI tools, and a majority believed that explainability is essential for such an AI system for the early detection of cachexia. When presented with the SHAP explainability scheme, they had limited familiarity with the features that contributed to the tool’s ruling, and only a minority of the clinicians (nuclear medicine experts) stated that they could use these features in a meaningful way. Paradoxically, it is the clinicians who are in direct contact with patients who cannot make use of this specific SHAP explanation. This study highlights the challenges of offering a hyper-selective explainability tool in clinical settings, and it shows the difficulty of developing explainable-by-design AI systems. |