IAI 2025 Abstracts


Area 1 - IAI

Full Papers
Paper Nr: 5
Title: Upside-Down Reinforcement Learning for More Interpretable Optimal Control
Authors: Juan Cardenas-Cartagena, Massimiliano Falzari, Marco Zullich and Matthia Sabatelli

Abstract: Model-Free Reinforcement Learning (RL) algorithms either learn how to map states to expected rewards or search for policies that can maximize a certain performance function. Model-Based algorithms, instead, aim to learn an approximation of the underlying model of the RL environment and then use it in combination with planning algorithms. Upside-Down Reinforcement Learning (UDRL) is a novel learning paradigm that aims to learn how to predict actions from states and desired commands. This task is formulated as a Supervised Learning problem and has successfully been tackled by Neural Networks (NNs). In this paper, we investigate whether function approximation algorithms other than NNs can also be used within a UDRL framework. Our experiments, performed over several popular optimal control benchmarks, show that tree-based methods like Random Forests and Extremely Randomized Trees can perform just as well as NNs, with the significant benefit of resulting in policies that are inherently more interpretable than NNs, thereby paving the way for more transparent, safe, and robust RL.
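To make the supervised-learning formulation concrete, the following is a minimal sketch of a UDRL-style behavior function learned with a Random Forest instead of a neural network. The data, feature layout, and hyperparameters are placeholders chosen for illustration, not the authors' experimental setup.

```python
# Minimal sketch of a UDRL-style "behavior function" using a tree ensemble
# instead of a neural network. Hypothetical setup: we assume a buffer of
# (state, desired_return, desired_horizon, action) tuples is available;
# here random placeholder data stands in for collected episodes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Placeholder "episodic" data: 4-dimensional states, discrete actions {0, 1}.
states = rng.normal(size=(1000, 4))
desired_returns = rng.uniform(0, 200, size=(1000, 1))
desired_horizons = rng.integers(1, 200, size=(1000, 1)).astype(float)
actions = rng.integers(0, 2, size=1000)

# UDRL turns control into supervised learning: predict the action from the
# state and the command (desired return, desired horizon).
X = np.hstack([states, desired_returns, desired_horizons])
behavior_fn = RandomForestClassifier(n_estimators=100, random_state=0)
behavior_fn.fit(X, actions)

# Acting: query the forest with the current state and a chosen command.
command = np.array([[150.0, 120.0]])   # "achieve return 150 within 120 steps"
current_state = rng.normal(size=(1, 4))
action = behavior_fn.predict(np.hstack([current_state, command]))[0]
print("selected action:", action)
```

Swapping RandomForestClassifier for scikit-learn's ExtraTreesClassifier gives the Extremely Randomized Trees variant mentioned in the abstract.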

Paper Nr: 7
Title: Interactive Problem-Solving with Humanoid Robots and Non-Expert Users
Authors: Duygun Erol Barkana, Mattias Wahde and Minerva Suvanto

Abstract: We study conversational interaction between a humanoid (NAO) robot and a human user, using a glass-box dialogue manager (DAISY). The aim is to investigate how such an interaction can be organized so that a non-expert user can interact with the robot in a meaningful way, in this case solving a scheduling task. We compare two kinds of experiments: those that involve both the NAO robot and the dialogue manager, and those that involve only the dialogue manager. Our findings show a rather clear preference for the setup involving the robot. Moreover, we study the level of linguistic variability in task-oriented human-machine interaction, and find that, at least in the case considered here, most dialogues can be handled well using a small number of patterns (for matching user input) in the agent.
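As an illustration of handling user input with a small number of patterns, the toy matcher below maps utterances to scheduling intents with a handful of regular expressions; the patterns and intent names are invented for illustration and are not taken from DAISY.

```python
# Illustrative only: a toy pattern-matching step of the kind a glass-box
# dialogue manager might use to map user utterances to scheduling actions.
import re

PATTERNS = [
    (re.compile(r"\bmove\b.*\b(meeting|appointment)\b.*\bto (\w+day)\b", re.I),
     "reschedule"),
    (re.compile(r"\bwhat\b.*\bon (\w+day)\b", re.I), "query_day"),
    (re.compile(r"\bcancel\b.*\b(meeting|appointment)\b", re.I), "cancel"),
]

def match_intent(utterance: str) -> str:
    """Return the first matching intent, or 'fallback' if no pattern fires."""
    for pattern, intent in PATTERNS:
        if pattern.search(utterance):
            return intent
    return "fallback"

print(match_intent("Could you move my meeting to Friday?"))  # reschedule
print(match_intent("What do I have on Monday?"))             # query_day
```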

Paper Nr: 9
Title: Glass-box Automated Driving: Insights and Future Trends
Authors: Mauro Bellone, Raivo Sell and Ralf-Martin Soe

Abstract: Automated driving has advanced significantly through the use of black-box AI models, particularly in perception tasks. However, as these models have grown, concerns over the loss of explainability and interpretability have emerged, prompting a demand for 'glass-box' models. Glass-box models in automated driving aim to design AI systems that are transparent, interpretable, and explainable. While such models are essential for understanding how machines operate, achieving perfect transparency in complex systems like autonomous driving may not be entirely practicable or feasible. This paper explores arguments on both sides, suggesting a shift in focus towards balancing interpretability and performance rather than treating them as conflicting concepts.

Paper Nr: 11
Title: A Modular Framework for Knowledge-Based Servoing: Plugging Symbolic Theories into Robotic Controllers
Authors: Malte Huerkamp, Kaviya Dhanabalachandran, Mihai Pomarlan, Simon Stelter and Michael Beetz

Abstract: This paper introduces a novel control framework that bridges symbolic reasoning and task-space motion control, enabling the transparent execution of household manipulation tasks through a tightly integrated reasoning and control loop. At its core, this framework allows any symbolic theory to be "plugged in" with a reasoning module to create interpretable robotic controllers. This modularity makes the framework flexible and applicable to a wide range of tasks, providing traceable feedback and human-level interpretability. We demonstrate the framework using a qualitative theory for pouring with a defeasible reasoner, showcasing how the system can be adapted to variations in task requirements, such as transferring liquids, draining mixtures, or scraping sticky materials.
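A minimal sketch of what "plugging in" a symbolic theory could look like is given below; the interface, the toy pouring theory, and the goal representation are hypothetical stand-ins, not the authors' actual framework.

```python
# Sketch of a plug-in point between a symbolic theory and a task-space
# controller. All names here (ControlGoal, SymbolicReasoner, PouringTheory)
# are invented for illustration.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class ControlGoal:
    """A task-space goal the low-level controller can execute."""
    frame: str          # e.g. "container"
    target_pose: tuple  # (x, y, z, roll, pitch, yaw)
    explanation: str    # human-readable justification from the reasoner

class SymbolicReasoner(Protocol):
    def infer_goal(self, symbolic_state: dict) -> ControlGoal:
        """Map the current symbolic description of the scene to a goal."""
        ...

class PouringTheory:
    """Toy qualitative theory: keep tilting while the target is not full."""
    def infer_goal(self, symbolic_state: dict) -> ControlGoal:
        if symbolic_state.get("target_full"):
            return ControlGoal("container", (0, 0, 0, 0, 0, 0),
                               "target is full, so return to upright")
        return ControlGoal("container", (0, 0, 0, 0.6, 0, 0),
                           "target not full, so keep tilting to pour")

def control_step(reasoner: SymbolicReasoner, symbolic_state: dict) -> ControlGoal:
    goal = reasoner.infer_goal(symbolic_state)
    print("executing:", goal.frame, goal.target_pose, "|", goal.explanation)
    return goal

control_step(PouringTheory(), {"target_full": False})
```

The same control_step would accept any other theory object exposing infer_goal, which is the modularity the abstract describes.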

Paper Nr: 13
Title: Extraction of Semantically Coherent Rules from Interpretable Models
Authors: Parisa Mahya and Johannes Fürnkranz

Abstract: With the emergence of various interpretability methods, the quality of interpretable models in terms of human understandability is becoming a dominant concern. In many cases, interpretability is measured by convenient surrogates, such as the complexity of the learned models. However, it has been argued that interpretability is a multi-faceted concept, with many factors contributing to the degree to which a model can be considered to be interpretable. In this paper, we focus on one particular aspect, namely semantic coherence, i.e., the idea that the semantic closeness or distance of the concepts used in an explanation will also impact its perceived interpretability. In particular, we propose a novel method, Cognitively biased Rule-based Interpretations from Explanation Ensembles (CORIFEE-Coh), which focuses on the semantic coherence of the rule-based explanations with the goal of improving the human understandability of the explanation. CORIFEE-Coh operates on a set of rule-based models and converts them into a single, highly coherent explanation. Our approach is evaluated on multiple datasets, demonstrating improved semantic coherence and reduced complexity while maintaining predictive accuracy in comparison to the given interpretable models.
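One plausible way to quantify the semantic coherence of a rule is the average pairwise similarity of the concepts it uses, as sketched below; the hand-made similarity table and attribute names are illustrative stand-ins, not the CORIFEE-Coh metric itself.

```python
# Sketch: score a rule's semantic coherence as the mean pairwise similarity
# of the attributes it mentions. The similarity table is a hand-made stand-in
# for a real semantic measure (e.g. embedding- or taxonomy-based).
from itertools import combinations

SIMILARITY = {
    frozenset({"blood_pressure", "heart_rate"}): 0.8,
    frozenset({"blood_pressure", "zip_code"}): 0.1,
    frozenset({"heart_rate", "zip_code"}): 0.1,
}

def coherence(rule_attributes: list[str]) -> float:
    """Mean pairwise semantic similarity of the attributes used in a rule."""
    pairs = list(combinations(rule_attributes, 2))
    if not pairs:
        return 1.0
    return sum(SIMILARITY.get(frozenset(p), 0.0) for p in pairs) / len(pairs)

# A rule mixing only medical features is more coherent than one that also
# conditions on an unrelated attribute.
print(coherence(["blood_pressure", "heart_rate"]))              # 0.8
print(coherence(["blood_pressure", "heart_rate", "zip_code"]))  # ~0.33
```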

Paper Nr: 15
Title: From Interpretability to Clinically Relevant Linguistic Explanations: The Case of Spinal Surgery Decision-Support
Authors: Alexander Berman, Eleni Gregoromichelaki and Catharina Parai

Abstract: Interpretable models are advantageous when compared to black-box models in the sense that their predictions can be explained in ways that are faithful to the actual reasoning steps performed by the model. However, interpretability does not automatically make AI systems aligned with how explanations are typically communicated in human language. This paper explores the relationship between interpretability and the linguistic explanation needs of human users for a particular class of interpretable AI, namely generalized linear models (GLMs). First, a linguistic corpus study of patient-doctor dialogues is performed, resulting in insights that can inform the design of clinically relevant explanations of model predictions. A method for generating natural-language explanations for GLM predictions in the context of spinal surgery decision-support is then proposed, informed by the results of the corpus analysis. Findings from evaluating the proposed approach through a design workshop with orthopaedic surgeons are also presented.
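Because a GLM's prediction decomposes into per-feature contributions, a templated explanation can be generated directly from its coefficients, as in the sketch below; the features, data, and wording are illustrative placeholders rather than the paper's corpus-informed method.

```python
# Sketch: turn a GLM's faithful internals (coefficient * feature value) into
# a templated natural-language explanation. Feature names and data are
# invented placeholders, not clinical data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["age", "symptom_duration_months", "smoker"]

# Placeholder training data standing in for a clinical dataset.
X = rng.normal(size=(200, 3))
y = (X @ np.array([0.8, 1.2, -0.5]) + rng.normal(scale=0.5, size=200) > 0).astype(int)

glm = LogisticRegression().fit(X, y)

def explain(x: np.ndarray) -> str:
    """Rank each feature's contribution and verbalise it with a template."""
    contributions = glm.coef_[0] * x
    order = np.argsort(-np.abs(contributions))
    p = glm.predict_proba(x.reshape(1, -1))[0, 1]
    parts = [f"{feature_names[i]} {'increases' if contributions[i] > 0 else 'decreases'} "
             f"the predicted probability" for i in order]
    return f"Predicted probability of a good outcome: {p:.0%}. " + "; ".join(parts) + "."

print(explain(X[0]))
```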