Posted on 2023-11-09, 03:12. Authored by Matt Lythe, Gabriella Mazorra de Cos, Maria Mingallon, Andrew Lensen, Christopher Galloway, David Knox, Sarah Auvaa, Kaushalya Kumarasinghe.
Artificial intelligence (AI) has shown great potential in many real-world
applications, for example, clinical diagnosis, self-driving vehicles, robotics
and movie recommendations. However, it can be difficult to establish trust
in these systems if little is known about how the models make predictions.
Although methods exist to provide explanations for some black-box
models, these are not always reliable and may even be misleading.
Explainable AI (XAI) provides a meaningful solution to this dilemma in
instances where it may be important to explain why an AI model has
taken certain actions or made recommendations. These models are
inherently interpretable, offering explanations that align with their
computations, which improves accountability and fairness and reduces bias.
However, explainable models can also be less capable or versatile,
and may be less accurate than more complex, less transparent models.
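To make this trade-off concrete, the following is a minimal sketch, not taken from the report, contrasting an inherently interpretable model with a more complex black-box model. It assumes the scikit-learn library and uses its bundled breast cancer dataset purely for illustration: a shallow decision tree exposes its decision rules directly, while a random forest is typically more accurate but offers no comparable built-in explanation.

```python
# Illustrative sketch (not from the report): interpretable vs. black-box model.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable model: a shallow decision tree whose rules can be read directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("Decision tree accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=list(X.columns)))  # human-readable rules

# Black-box model: a random forest, often more accurate but harder to explain.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("Random forest accuracy:", forest.score(X_test, y_test))
```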
The demand for explainability varies with the context. The more critical
the use case, the greater the need for interpretability. For example, the
need for interpretability in an AI-based medical diagnosis system would be
significantly higher than for one used for targeted advertisements. In
Aotearoa New Zealand there are already excellent examples of XAI,
including in health, justice and the environment. The potential for many
more systems is substantial, especially when AI decisions affect people or
communities in a significant way.