Abstract

Because of their inherent uncertainty and their vulnerability to adversarial attacks, machine learning (ML) models can lead to severe consequences, including the loss of human life, when embedded in safety-critical systems such as autonomous vehicles. It is therefore crucial to assess the empirical robustness of such models before integrating them into these systems. Robustness here refers to an ML model's ability to remain insensitive to input perturbations and to maintain its performance. Against this background, the Confiance.ai research program proposes a methodological framework for assessing the empirical robustness of ML models. The framework encompasses methodological processes (guidelines) captured in Capella models, along with a set of supporting tools. This paper provides an overview of the framework and of its application in an industrial setting.
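
For intuition, empirical robustness is often quantified by comparing a model's accuracy on clean inputs with its accuracy on slightly perturbed inputs. The sketch below is a generic illustration of such a check, not the Confiance.ai framework's own tooling; the FGSM perturbation, the PyTorch API, and the epsilon budget are assumptions chosen for the example.

import torch

def fgsm_perturb(model, x, y, epsilon):
    """Craft an L-infinity bounded perturbation with the fast gradient sign method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each input component by epsilon in the direction that increases the loss,
    # then clamp back to the valid range (assumes inputs are scaled to [0, 1]).
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def empirical_robust_accuracy(model, loader, epsilon):
    """Share of samples still classified correctly after perturbation."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x_adv = fgsm_perturb(model, x, y, epsilon)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return correct / total

# Hypothetical usage: `classifier` and `test_loader` are placeholders.
# robust_acc = empirical_robust_accuracy(classifier, test_loader, epsilon=8 / 255)

A large gap between clean accuracy and this perturbed accuracy for a small epsilon indicates that the model is sensitive to input perturbations in the sense defined above.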

Keywords

Adversarial system, Adversary, MNIST database, Computer science, Robustness, Artificial intelligence, Deep neural networks, Deep learning, Artificial neural network, Machine learning, Threat model, Computer security, Engineering

Publication Info

Year: 2025
Type: Preprint
Citations: 4319
Access: Closed

Citation Metrics

Citations: 4319 (OpenAlex)

Cite This

Aleksander Mądry, Aleksandar Makelov, Ludwig Schmidt et al. (2025). On Assessing ML Model Robustness: A Methodological Framework (Academic Track). Dagstuhl Research Online Publication Server. https://doi.org/10.4230/oasics.saia.2024.1

Identifiers

DOI: 10.4230/oasics.saia.2024.1