Posted on 2022-11-27, 23:35. Authored by Yi Mei, Qi Chen, Andrew Lensen, Bing Xue, Mengjie Zhang.
Explainable artificial intelligence has received great interest over the past decade, owing to its importance in critical application domains such as self-driving cars, law, and healthcare. Genetic programming (GP) is a powerful evolutionary algorithm for machine learning. Compared with standard machine learning models such as neural networks, the models evolved by GP tend to be more interpretable because of their symbolic model structure. However, interpretability was not explicitly considered in GP until recently, following the surge in popularity of explainable artificial intelligence. This paper provides a comprehensive review of GP studies that can improve model interpretability, whether explicitly or implicitly as a byproduct. We group the existing studies on explainable artificial intelligence by GP into two categories. The first category addresses intrinsic interpretability, aiming to directly evolve more interpretable (and effective) models with GP. The second category focuses on post-hoc interpretability, using GP either to explain other black-box machine learning models or to explain models evolved by GP through simpler models such as linear models. This survey demonstrates the strong potential of GP for improving the interpretability of machine learning models and for balancing the complex trade-off between model accuracy and interpretability.
Funding
Faculty Research Establishment Grant 2020: Lensen, Andrew (Extended M MacGillivray Jul-22) | Funder: VP RESEARCH
Lensen, A - FSRG 2021 | Funder: VP RESEARCH
Preferred citation
Mei, Y., Chen, Q., Lensen, A., Xue, B. & Zhang, M. (n.d.). Explainable Artificial Intelligence by Genetic Programming: A Survey. IEEE Transactions on Evolutionary Computation.