Hierarchical Variational Autoencoders For Visual Counterfactuals

Abstract

Conditional Variational Autoencoders (CVAEs) are gaining significant attention as an Explainable Artificial Intelligence (XAI) tool. The codes in the latent space provide a theoretically sound way to produce counterfactuals, i.e. alterations that result from an intervention on a targeted semantic feature. Applying this idea to real images requires more expressive models, such as hierarchical CVAEs, which introduces a challenge: the naive conditioning is no longer effective. In this paper, we show how relaxing the effect of the posterior leads to successful counterfactuals, and we introduce VAEX, a hierarchical VAE designed for this approach that can visually audit a classifier in practical applications.
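To make the counterfactual mechanism concrete, the sketch below shows the basic idea for a plain (non-hierarchical) conditional VAE: encode an image together with its class label, then decode the same latent code under a different label. The `TinyCVAE` class, its dimensions, and the `counterfactual` helper are illustrative assumptions, not the paper's VAEX architecture.

```python
# Minimal conditional-VAE counterfactual sketch (illustrative, untrained).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCVAE(nn.Module):
    """Toy CVAE on flattened images; stands in for a real hierarchical model."""
    def __init__(self, x_dim=784, y_dim=10, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + y_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, z_dim)
        self.logvar = nn.Linear(128, z_dim)
        self.dec = nn.Sequential(
            nn.Linear(z_dim + y_dim, 128), nn.ReLU(),
            nn.Linear(128, x_dim), nn.Sigmoid(),
        )

    def encode(self, x, y):
        h = self.enc(torch.cat([x, y], dim=-1))
        return self.mu(h), self.logvar(h)

    def decode(self, z, y):
        return self.dec(torch.cat([z, y], dim=-1))

def counterfactual(model, x, y_src, y_tgt):
    """Keep the latent code inferred under the source label,
    then decode it under the target label (the intervention)."""
    mu, _ = model.encode(x, y_src)   # posterior mean as the latent code
    return model.decode(mu, y_tgt)   # same code, altered semantic condition

if __name__ == "__main__":
    model = TinyCVAE()
    x = torch.rand(1, 784)                                   # dummy "image"
    y_src = F.one_hot(torch.tensor([3]), 10).float()         # observed class
    y_tgt = F.one_hot(torch.tensor([8]), 10).float()         # counterfactual class
    x_cf = counterfactual(model, x, y_src, y_tgt)
    print(x_cf.shape)                                        # torch.Size([1, 784])
```

In a hierarchical model the conditioning enters at several latent levels, and, as the abstract notes, this naive label swap alone is no longer effective; the paper's contribution is to relax the effect of the posterior so that the intervention propagates correctly.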

Publication
2021 IEEE International Conference on Image Processing (ICIP)
Nicolas Vercheval