Hyper-selective explainability: an empirical case study of the utility of explainability in a clinical decision support system

AI and Ethics (2025) · 0 citations

Abstract

Explainability is a leading solution offered to address the challenge posed by the opacity of "black-box" AI. However, much can go wrong when explainability is put into practice, and its success is far from certain. Moreover, empirical data on the effectiveness of concrete explainability efforts remains scarce. We examined an explainability scenario for an AI decision support tool under development for the early detection of cancer-related cachexia, a potentially fatal metabolic syndrome. We conducted 13 interviews with clinicians who treat cachexia, asking about their prior experience with AI tools and their views on explainability, and presented an explainability scenario based on the Shapley Additive Explanations (SHAP) method. Most of the clinicians we interviewed had limited prior experience with AI tools, and a majority believed that explainability is essential for an AI system for the early detection of cachexia. When presented with the SHAP explainability scheme, however, they had limited familiarity with the features that contributed to the tool's ruling, and only a minority (the nuclear medicine experts) stated that they could use these features in a meaningful way. Paradoxically, it is the clinicians in direct contact with patients who cannot make use of this specific SHAP explanation. This study highlights the challenges of offering a hyper-selective explainability tool in clinical settings, as well as the difficulty of developing explainable-by-design AI systems.

Keywords

Artificial intelligence · Cachexia · Clinical AI · Decision support · Ethics · Explainability · xAI

Publication Info

Year: 2025
Type: Article
Volume: 6
Issue: 1
Citations: 0
Access: Closed

Citation Metrics

Citations (OpenAlex): 0
Influential citations: 0

Cite This

Shaul A. Duke, Peter Sandøe, Thomas Bøker Lund et al. (2025). Hyper-selective explainability: an empirical case study of the utility of explainability in a clinical decision support system. AI and Ethics, 6(1). https://doi.org/10.1007/s43681-025-00837-y

Identifiers

DOI
10.1007/s43681-025-00837-y
PMID
41395256
PMCID
PMC12695993
