Abstract: |
I present a Human-in-the-Loop (HITL) approach to Explainable Artificial Intelligence (XAI), which is the subject of my planned PhD dissertation. I argue that it allows expert knowledge to be incorporated into Machine Learning-based Artificial Intelligence systems. In addition, HITL enables greater engagement and accessibility for end-users such as domain experts and eCommerce owners, which could help XAI become more widely adopted in real usage scenarios. I briefly present two of my own works that represent this approach. The first deals with the use of XAI metrics such as Stability, Consistency and Perturbational Accuracy Loss, implemented in the Intelligible eXplainable AI (InXAI) framework. In this work, I show how XAI metrics can improve an ensemble of classifiers. The second work is an example of a HITL prototype for clustering analysis carried out within the Knowledge Augmented Clustering (KnAC) project. It introduces an intuitive distinction, under-explored in the literature, between objective data and metadata, and shows how this distinction can help in creating discourse-based explanations.