
Direct Adversarial Latent Estimation to Evaluate Decision Boundary Complexity in Black Box Models
  • Ashley S. Dale (Corresponding Author: [email protected])
  • Lauren Christopher

Abstract

A trustworthy AI model should be robust to perturbed data, where robustness correlates with the dimensionality and linearity of feature representations in the model latent space. Existing methods for evaluating feature representations in the latent space are restricted to white-box models. In this work, we introduce Direct Adversarial Latent Estimation (DALE) for evaluating the robustness of feature representations and decision boundaries of target black-box models. A surrogate latent space is created using a Variational Autoencoder (VAE) trained on a dataset disjoint from that of an object classification backbone; the VAE latent space is then traversed to create sets of adversarial images. An object classification model is trained using transfer learning on the VAE image reconstructions and then classifies instances in the adversarial image set. We propose that the number of times the classification changes within an image set indicates the complexity of the decision boundaries in the classifier latent space; more complex decision boundaries are found to be more robust. This is confirmed by comparing the DALE distributions to the degradation of the classifier F1 scores in the presence of adversarial attacks. This work enables the first comparisons of latent-space complexity between black-box models by relating model robustness to complex decision boundaries.
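The core measurement described in the abstract, counting how often a classifier's prediction flips along a traversal of the surrogate VAE latent space, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it assumes a trained PyTorch VAE exposing a hypothetical `decode` method and a classifier `clf` returning logits, and uses a simple linear interpolation between two latent codes as the traversal.

```python
import torch


@torch.no_grad()
def count_label_changes(vae, clf, z_start, z_end, steps=64):
    """Walk a straight line between two latent codes, decode each point,
    classify the reconstruction, and count predicted-label flips.

    `vae.decode` and `clf` are assumed interfaces, not part of the paper.
    """
    alphas = torch.linspace(0.0, 1.0, steps)
    labels = []
    for a in alphas:
        z = (1.0 - a) * z_start + a * z_end            # linear latent interpolation
        x = vae.decode(z.unsqueeze(0))                 # reconstruct an image at this latent point
        labels.append(clf(x).argmax(dim=1).item())     # predicted class for the reconstruction
    # More flips along the path suggest more complex decision boundaries,
    # which the paper associates with greater robustness.
    return sum(a != b for a, b in zip(labels[:-1], labels[1:]))
```

Repeating this count over many latent-code pairs would yield a distribution of boundary crossings per traversal, analogous in spirit to the DALE distributions compared against F1 degradation under adversarial attack.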

Submitted to TechRxiv: 17 Apr 2024
Published in TechRxiv: 24 Apr 2024