Counterfactual explanation is a popular eXplainable AI technique that gives contrastive explanations to answer potential "what-if" questions about the workings of machine learning models. However, research into how explanations are understood by human beings has shown that an optimal explanation should be both selected and social, providing multiple varying explanations for the same event so that a user can select specific explanations based on their prior beliefs and cognitive biases. To provide such explanations, a Rashomon set of explanations can be created: a set of explanations utilising different features in the data. Current work on generating counterfactual explanations does not take this need into account, focusing only on producing a single optimal counterfactual. This work presents a novel method for generating a diverse Rashomon set of counterfactual explanations using the final population from a Particle Swarm Optimisation (PSO) algorithm. It explores a selection of niching PSO algorithms and evaluates which best produces these sets. Finally, the ability of this method to be implemented and trusted by users is discussed.
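To illustrate the general idea (this is not the authors' implementation), the sketch below runs a minimal global-best PSO against a toy linear classifier and then filters the final population into a set of valid counterfactuals grouped by which features they change. The toy model, fitness weights, and change threshold are all assumptions made for illustration; the paper instead uses niching PSO variants, which are specifically designed to keep the final population diverse.

```python
# Hypothetical sketch only: using the final population of a basic PSO run as a
# pool of candidate counterfactuals, then keeping candidates that flip the
# model's prediction and differ in which features they change.
import numpy as np

rng = np.random.default_rng(0)

# Toy black-box model: predicts 1 when the weighted feature sum exceeds a threshold.
def predict(x):
    return int(x @ np.array([0.8, -0.5, 0.3]) > 0.5)

x_orig = np.array([0.2, 0.9, 0.1])   # instance to explain (predicted class 0)
target_class = 1

def fitness(x):
    # Counterfactual objective: stay close to the original instance,
    # but incur a large penalty if the target class is not reached.
    dist = np.linalg.norm(x - x_orig, ord=1)
    penalty = 0.0 if predict(x) == target_class else 10.0
    return dist + penalty

# Minimal global-best PSO (illustrative; the paper evaluates niching variants).
n_particles, n_iters, dim = 30, 100, 3
pos = x_orig + rng.normal(0, 0.5, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmin()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    fit = np.array([fitness(p) for p in pos])
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmin()].copy()

# Build a Rashomon-style set from the final population: valid counterfactuals
# that change different subsets of features (here, features changed by > 0.1).
rashomon = {}
for p in pbest:
    if predict(p) == target_class:
        changed = tuple(np.flatnonzero(np.abs(p - x_orig) > 0.1))
        if changed and changed not in rashomon:
            rashomon[changed] = p

for feats, cf in rashomon.items():
    print(f"features changed {feats}: counterfactual {np.round(cf, 2)}")
```

A plain global-best PSO tends to collapse its swarm onto a single optimum, which is why grouping by changed features may yield only a small set here; niching algorithms maintain multiple sub-swarms so that the final population naturally covers several distinct counterfactual regions.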
Funding
Lensen, A - FSRG 2021 | Funder: VP RESEARCH
History
Preferred citation
Andersen, H., Lensen, A., Browne, W. & Mei, Y. (2023, July). Producing Diverse Rashomon Sets of Counterfactual Explanations with Niching Particle Swarm Optimization Algorithms. In GECCO 2023 - Proceedings of the 2023 Genetic and Evolutionary Computation Conference (pp. 393-401). https://doi.org/10.1145/3583131.3590444