Abstract

Complex machine learning models tend to perform better, but they are often treated as black boxes. This is where Explainable AI (XAI) comes into play. For many applications, researchers, and decision-makers, understanding why a model makes a specific prediction can be as crucial as the prediction's accuracy, and in many real-world applications the explainability and transparency of AI systems are indispensable. Explainability and explainable AI are therefore receiving growing attention from the research community and industry. Deep neural networks (DNNs) have been very successful compared to traditional machine learning methods, yet they are comparatively weak at explaining their inference processes and results: the input passes through many layers, and a single prediction can involve millions of mathematical operations. It is difficult for humans to follow the exact mapping from input to prediction, since one would have to consider millions of weights interacting in complex ways. Interpreting the behavior and predictions of neural networks therefore requires dedicated interpretation methods, opening a new frontier for researchers.
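The abstract's closing point, that interpreting a network's predictions requires dedicated methods, can be made concrete with a small example. The sketch below is illustrative only and not drawn from the report; the tiny model and random input are placeholders. It computes an input-gradient saliency map in PyTorch, one of the simplest interpretation methods: the gradient of the predicted class score with respect to the input marks the features the prediction is most sensitive to.

    import torch
    import torch.nn as nn

    # Placeholder model: any differentiable classifier would do here.
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
    model.eval()

    # One random input example; requires_grad lets us ask which input
    # features most influenced the prediction.
    x = torch.randn(1, 4, requires_grad=True)
    logits = model(x)
    top_class = logits.argmax(dim=1).item()

    # Backpropagate the top class score to the input. The gradient's
    # per-feature magnitude is the saliency.
    logits[0, top_class].backward()
    saliency = x.grad.abs().squeeze()
    print(saliency)  # per-feature sensitivity of the prediction

For real models, libraries such as Captum implement this and more elaborate attribution methods.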

Keywords

Artificial intelligence, Computer science, Geography

Publication Info

Year: 2021
Type: Report
Citations: 543
Access: Closed

Citation Metrics

543 citations (OpenAlex)

Cite This

Mazharul Hossain (2021). Explainable Artificial Intelligence (XAI). Zenodo (CERN European Organization for Nuclear Research). https://doi.org/10.5281/zenodo.6371435

Identifiers

DOI
10.5281/zenodo.6371435
