Hyper-selective explainability: an empirical case study of the utility of explainability in a clinical decision support system

Title: Hyper-selective explainability: an empirical case study of the utility of explainability in a clinical decision support system
Authors: Duke, Shaul A.; Sandøe, Peter; Lund, Thomas Bøker; Abenavoli, Elisabetta Maria; Beyer, Thomas; Ferrara, Daria; Frille, Armin; Gruenert, Stefan; Sabri, Osama; Sciagrà, Roberto; Pepponi, Miriam; Hesse, Swen; Tönjes, Anke; Wirtz, Hubert; Yu, Josef; Sundar, Lalith Kumar Shiyam; Holm, Sune
Source: Duke, S. A., Sandøe, P., Lund, T. B., Abenavoli, E. M., Beyer, T., Ferrara, D., Frille, A., Gruenert, S., Sabri, O., Sciagrà, R., Pepponi, M., Hesse, S., Tönjes, A., Wirtz, H., Yu, J., Sundar, L. K. S. & Holm, S. 2026, 'Hyper-selective explainability: an empirical case study of the utility of explainability in a clinical decision support system', AI and Ethics, vol. ....
Publication Year: 2026
Collection: University of Copenhagen: Research / Forskning ved Københavns Universitet
Description: Explainability is a leading solution offered to address the challenge of AI’s black boxing. However, a lot can go wrong when trying to apply explainability, and its success is far from certain. Moreover, there is insufficient empirical data regarding the effectiveness of concrete explainability efforts. We examined an explainability scenario for an AI decision support tool under development for the early detection of cancer-related cachexia, a potentially fatal metabolic syndrome. We conducted 13 interviews with clinicians who deal with cachexia, and asked about their prior experience with AI tools, their views on explainability, and presented an explainability scenario based on the Shapley Additive Explanations (SHAP) method. Most clinicians we interviewed had limited prior experience with AI tools, and a majority of them believed that the explainability of such an AI system for the early detection of cachexia is essential. When presented with the SHAP explainability scheme, they had limited familiarity with the features that contributed to the tool’s ruling, and only a minority of the clinicians (nuclear medicine experts) stated that they could utilize these features in a meaningful manner. Paradoxically, it is the clinicians who come in contact with patients who cannot make use of this specific SHAP explanation. This study highlights the challenges of offering a hyper-selective explainability tool in clinical settings. It also shows the challenge of developing explainable-by-design AI systems.
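The abstract's explainability scenario is based on the SHAP (Shapley Additive Explanations) method, which distributes a model's prediction across its input features as additive contributions. A minimal stdlib sketch of the underlying idea, exact Shapley attribution by averaging marginal contributions over all feature orderings, is shown below; the toy risk model, its weights, and the feature names (weight loss, albumin, appetite) are purely hypothetical and not taken from the study:

```python
from itertools import permutations

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for model f at point x.

    Features not yet added to the coalition are held at their
    baseline value; each feature's marginal effect is averaged
    over every insertion order.
    """
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for perm in perms:
        current = list(baseline)
        prev = f(current)
        for i in perm:
            current[i] = x[i]          # add feature i to the coalition
            val = f(current)
            phi[i] += val - prev       # marginal contribution of i
            prev = val
    return [p / len(perms) for p in phi]

# Hypothetical "cachexia risk score" with one interaction term.
def risk(z):
    weight_loss, albumin, appetite = z
    return 2.0 * weight_loss - 1.5 * albumin + 0.5 * weight_loss * appetite

x = [1.0, 0.5, 1.0]            # patient feature values (illustrative)
base = [0.0, 0.0, 0.0]         # reference/baseline patient
phi = shapley_values(risk, x, base)

# Efficiency property of SHAP: contributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (risk(x) - risk(base))) < 1e-9
```

In practice the study's tool would use an approximate SHAP implementation rather than this exponential-time enumeration; the sketch only illustrates what a per-feature contribution shown to a clinician represents.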
Document Type: article in journal/newspaper
File Description: application/pdf
Language: English
DOI: 10.1007/s43681-025-00837-y
Availability: https://researchprofiles.ku.dk/da/publications/e4101c58-c726-4e51-be6d-39eab3577f76; https://doi.org/10.1007/s43681-025-00837-y; https://curis.ku.dk/ws/files/530678706/Hyper-selective_explainability_An_empirical_case_study_of_the_utility.pdf
Rights: info:eu-repo/semantics/openAccess
Accession Number: edsbas.8FEE04CE
Database: BASE