Abstract
Complex machine learning models often achieve higher predictive performance, but they act as black boxes: we cannot directly inspect how they arrive at their outputs. This is where Explainable AI (XAI) comes into play. For many applications, researchers, and decision-makers, understanding why a model makes a specific prediction can be as crucial as the prediction's accuracy, and in many real-world settings the explainability and transparency of AI systems are indispensable. Accordingly, explainability is receiving growing attention from both the research community and industry. Deep neural networks (DNNs) have been markedly more successful than traditional machine learning methods, yet they are comparably weak at explaining their inference processes and results: the input passes through many layers, and a single prediction can involve millions of mathematical operations. It is difficult for humans to follow the exact mapping from input to predicted result, since understanding a single neural network prediction would require reasoning about millions of weights that interact in complex ways. Interpreting the behavior and predictions of neural networks therefore requires dedicated interpretation methods, opening a new frontier for researchers.
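As a concrete illustration of what such an interpretation method can look like, the sketch below implements permutation feature importance, a standard model-agnostic technique: a feature's importance is measured by how much model accuracy drops when that feature's values are randomly shuffled, breaking its link to the target. The toy data, model, and scoring choices are illustrative assumptions for this sketch, not taken from the publication itself.

```python
# A minimal sketch of permutation feature importance (model-agnostic
# interpretation). Toy data and logistic regression are illustrative
# assumptions, not the publication's own method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Toy data: the label depends strongly on feature 0, weakly on
# feature 1, and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = (2.0 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
baseline = accuracy_score(y, model.predict(X))

for j in range(X.shape[1]):
    X_perm = X.copy()
    # Shuffle one column to destroy its relationship with the target.
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    drop = baseline - accuracy_score(y, model.predict(X_perm))
    print(f"feature {j}: importance (accuracy drop) = {drop:.3f}")
```

Under these assumptions, feature 0 shows the largest accuracy drop and feature 2 shows roughly none, mirroring how the data were generated; the same procedure applies unchanged to a deep neural network, since it only needs the model's predictions.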
Publication Info
- Year: 2021
- Type: report
- Citations: 543
- Access: Closed
Identifiers
- DOI: 10.5281/zenodo.6371435