
Keynote Lectures

Knowledge Representation & Reasoning in CRAM: A Cognitive Architecture for Robot Agents Accomplishing Everyday Manipulation Tasks
Michael Beetz, University of Bremen, Germany

Bio-Inspired AI for Autonomous Systems
Jan Seyler, Festo SE & Co. KG, Germany

Interacting with Socially Interactive Agents
Catherine Pelachaud, CNRS/University of Pierre and Marie Curie, France

 

Knowledge Representation & Reasoning in CRAM: A Cognitive Architecture for Robot Agents Accomplishing Everyday Manipulation Tasks

Michael Beetz
University of Bremen
Germany
 

Brief Bio
Michael Beetz is a professor of Computer Science at the Faculty for Mathematics & Informatics of the University of Bremen and head of the Institute for Artificial Intelligence (IAI). He received his diploma degree in Computer Science with distinction from the University of Kaiserslautern. His MSc, MPhil, and PhD degrees were awarded by Yale University in 1993, 1994, and 1996, and his Venia Legendi by the University of Bonn in 2000. In February 2019 he received an Honorary Doctorate from Örebro University. He was vice-coordinator of the German cluster of excellence CoTeSys (Cognition for Technical Systems, 2006-2011) and coordinator of the European FP7 integrating project RoboHow (web-enabled and experience-based cognitive robots that learn complex everyday manipulation tasks, 2012-2016). Since 2017 he has been the coordinator of the German collaborative research centre EASE (Everyday Activity Science and Engineering), and he is the spokesperson of the University of Bremen's high-profile area "Minds, Media, Machines". In February 2020 he was ranked 4th among the most influential researchers in robotics in the AI 2000 ranking. His research interests include plan-based control of robotic agents, knowledge processing and representation for robots, integrated robot learning, and cognition-enabled perception.


Abstract
Robotic agents that can accomplish manipulation tasks with the competence of humans have been one of the grand research challenges for artificial intelligence (AI) and robotics research for more than 50 years. However, while these fields have made huge progress over the years, this ultimate goal is still out of reach. I believe that this is the case because the knowledge representation and reasoning methods that have been proposed in AI so far are necessary but too abstract. In this talk I propose to address this problem by endowing robots with the capability to internally emulate and simulate their perception-action loops based on realistic images and faithful physics simulations, which are made machine-understandable by casting them as virtual symbolic knowledge bases. These capabilities allow robots to generate huge collections of machine-understandable manipulation experiences, which robotic agents can generalize into commonsense and intuitive physics knowledge applicable to open varieties of manipulation tasks. The combination of learning, representation, and reasoning will equip robots with an understanding of the relation between their motions and the physical effects they cause at an unprecedented level of realism, depth, and breadth, and enable them to master human-scale manipulation tasks. This breakthrough will be achievable by combining leading-edge simulation and visual rendering technologies with mechanisms to semantically interpret and introspect internal simulation data structures and processes. Robots with such manipulation capabilities can help us to better deal with important societal, humanitarian, and economic challenges of our aging societies.



 

 

Bio-Inspired AI for Autonomous Systems

Jan Seyler
Festo SE & Co. KG
Germany
 

Brief Bio
Jan Seyler likes to spark the fire of inspiration in people. He is 35 years old and studied mathematics with a focus on scientific computing at Heidelberg University, and subsequently worked on his PhD in cooperation with Daimler and the University of Erlangen in the area of real-time communication systems. In 2015 he started at Festo as an embedded software developer. Within two years he became a semantic data engineer, and in 2019 Lead AI Algorithm Developer. Since 2020 he has been leading the Festo AI competence team and the department for AI, controls, and embedded software within Festo research.
Additionally, Jan teaches Real-Time Systems and IoT at the Baden-Wuerttemberg Cooperative State University (DHBW) Stuttgart as well as Applied AI and Advanced Data Models at Esslingen University.
In his free time, Jan likes to spend time with his family, read, and go exploring.


Abstract
Automation is a fundamental part of industry and of our private lives. At Festo, our vision is to free humans from harmful tasks, whether mentally harmful (boring and repetitive) or even physically harmful. Today, an automation solution is typically engineered for a specific problem and environment, and often does not interact actively with humans. For humans to interact naturally with machines, the machines must be able to react to changing tasks and environments. This is where AI-based solutions come into play, and mechanisms inspired by biology offer efficient answers to many questions arising in this field. This keynote will present concrete challenges from industrial automation and the solutions Festo research has developed for them.



 

 

Interacting with Socially Interactive Agents

Catherine Pelachaud
CNRS/University of Pierre and Marie Curie
France
 

Brief Bio
Catherine Pelachaud received the Ph.D. degree in computer science from the University of Pennsylvania, Philadelphia, PA, USA, in 1991. She is currently a CNRS director of research in the laboratory ISIR, Sorbonne University, where her research encompasses socially interactive agents and the modeling of nonverbal communication and expressive behaviors. She has authored more than 200 articles. She is or has been an associate editor of several journals, among them IEEE Transactions on Affective Computing, ACM Transactions on Interactive Intelligent Systems, and the International Journal of Human-Computer Studies. She has co-edited several books on virtual agents and emotion-oriented systems. She has participated in the organization of international conferences such as IVA, ACII, and the virtual agents track of AAMAS. She has received four best paper awards at IVA. She is the recipient of the ACM SIGAI Autonomous Agents Research Award 2015 and was awarded the title of Doctor Honoris Causa by the University of Geneva in 2016. Her Siggraph'94 paper received the Influential Paper Award of IFAAMAS (the International Foundation for Autonomous Agents and Multiagent Systems).


Abstract
In this talk, I will present our work toward building Socially Interactive Agents (SIAs), that is, agents able to be socially aware, to interact with human partners, and also to adapt their behaviors to favor the user's engagement during the interaction.
During an interaction, partners adapt their behaviors to each other. Adaptation can happen at different levels, such as the linguistic level (choice of vocabulary, grammatical style), the behavioral level (change of posture), and so on. It can involve choosing a conversational strategy to manage the user's impression, or synchronization mechanisms (e.g., imitating a smile). Lately we have developed three adaptation mechanisms that operate at different levels: conversational strategy, nonverbal behaviors, and multimodal signals. I will present the general framework on which these models have been built. As it interacts with a human partner, the agent learns to adapt in order to maximize the quality of the interaction. These models have been evaluated through a study involving visitors of a science museum, and I will discuss its results.
Touch is a modality that has received little consideration in human-agent interaction, yet social touch serves many functions during an interaction: a caress can convey comfort, a tap can attract the human's attention, and so on. We have been working on endowing an agent with the ability to respond to social touch from a human and to touch the human in order to convey different intentions and emotions. I will describe the decision model we have developed, which is based on the emotional model FAtiMA.


