Trust is an important factor in people's interactions with AI systems. H...
Concept-based explanations for convolutional neural networks (CNNs) aim ...
Many visualization techniques have been created to help explain the beha...
Most interpretability research focuses on datasets containing thousands ...
Despite the proliferation of explainable AI (XAI) methods, little is und...
Concept-based interpretability methods aim to explain deep neural networ...
Gender biases are known to exist within large-scale visual datasets and ...
Deep learning models have achieved remarkable success in different areas...
As machine learning is increasingly applied to high-impact, high-risk do...
While deep learning models often achieve strong task performance, their ...
Convolutional neural networks (CNNs) are known to learn an image represen...
With the recent wave of progress in artificial intelligence (AI) has com...
Saliency methods seek to explain the predictions of a model by producing...
Self-supervised learning has advanced rapidly, with several results beat...
Deep networks for visual recognition are known to leverage "easy to reco...
The different families of saliency methods, either based on contrastive ...
The problem of attribution is concerned with identifying the parts of an...
In an effort to understand the meaning of the intermediate representatio...
As machine learning algorithms are increasingly applied to high-impact y...
Machine learning is a field of computer science that builds algorithms t...