
Keynote Lectures

Responsible Agency
Carles Sierra, IIIA-CSIC, Spain

Reasoning on Data: Challenges and Application
Marie-Christine Rousset, Université Grenoble-Alpes and Institut Universitaire de France, France

A 20-Year Roadmap for AI Research
Bart Selman, Cornell University, United States

Testing and Training Theory of Mind for Hybrid Human-agent Environments
Rineke Verbrugge, Bernoulli Institute, University of Groningen, Netherlands


Responsible Agency

Carles Sierra
IIIA-CSIC
Spain

Brief Bio
Carles Sierra is a Research Professor at the Artificial Intelligence Research Institute (IIIA-CSIC) in the Barcelona area and is currently the Director of the Institute. He received his PhD in Computer Science from the Universitat Politècnica de Catalunya (UPC) in 1989 and has been doing research on Artificial Intelligence topics ever since. He was a visiting researcher at Queen Mary and Westfield College in London (1996-1997) and at the University of Technology Sydney for extended periods between 2004 and 2012, and he is also an Adjunct Professor at Western Sydney University. He has taught postgraduate courses on different AI topics at several universities, including Université Paris Descartes, University of Technology Sydney, Universitat Politècnica de València, and Universitat Autònoma de Barcelona. He has contributed to agent research in the areas of negotiation, argumentation-based negotiation, computational trust and reputation, team formation, and electronic institutions; these contributions have materialised in more than 300 scientific publications. His current work gravitates around the use of AI techniques for education and social applications of AI. He has served the multiagent systems research community as General Chair of the AAMAS conference in 2009, Program Chair in 2004, and Editor-in-Chief of the Journal of Autonomous Agents and Multiagent Systems (2014-2019). He has also served the broader AI community as Local Chair of IJCAI 2011 in Barcelona and Program Chair of IJCAI 2017 in Melbourne. He has been on the editorial boards of nine journals and has served as an evaluator in numerous calls and a reviewer of many projects in EU research programmes. He is a EurAI Fellow and was President of the Catalan Association of AI from 1998 to 2002.


Abstract
The main challenge facing artificial intelligence research nowadays is how to guarantee the development of responsible technology and, in particular, how to guarantee that autonomy is responsible. Social fears about the actions taken by AI can only be appeased by providing ethical certification and transparency of systems. However, this is certainly not an easy task. As we know very well in the multiagent systems field, the accuracy with which system outcomes can be predicted has limits, because multiagent systems are examples of complex systems. And AI will be social: there will be thousands of AI systems interacting among themselves and with a multitude of humans, so AI will necessarily be multiagent.

Although we cannot provide complete guarantees on outcomes, we must be able to define precisely what autonomous behaviour is acceptable (ethical), to provide repair methods for anomalous behaviour, and to explain the rationale of AI decisions. Ideally, we should be able to guarantee the responsible behaviour of individual AI systems by construction.

By an ethical AI system, I understand one that is capable of deciding which norms are most appropriate, abiding by them, and making them evolve and adapt. The area of multiagent systems has developed a number of theoretical and practical tools that, properly combined, can provide a path to developing such systems, that is, the means to build ethical-by-construction systems: agreement technologies to decide on acceptable ethical behaviour, normative frameworks to represent and reason about ethics, and electronic institutions to operationalise ethical interactions. Throughout my career, I have contributed tools in these three areas. In this keynote, I will describe a methodology that supports their combination and incorporates some new ideas from law and organisational theory.
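
To make one ingredient of such a pipeline concrete, the Python sketch below shows a normative filter sitting between an agent's deliberation and its actuators, letting through only actions that violate no active norm. This is a minimal illustration under assumed names (Norm, permitted_actions, the example consent norm), not Prof. Sierra's actual framework; obligations and norm adaptation would be handled analogously.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Norm:
    name: str
    modality: str                       # "prohibition" (obligations omitted in this sketch)
    applies: Callable[[Dict], bool]     # is the norm active in this context?
    concerns: Callable[[str], bool]     # does the norm govern this action?

def permitted_actions(candidates: List[str], norms: List[Norm], context: Dict) -> List[str]:
    # Keep only the candidate actions that comply with every active prohibition.
    allowed = []
    for action in candidates:
        violates = any(n.modality == "prohibition"
                       and n.applies(context)
                       and n.concerns(action)
                       for n in norms)
        if not violates:
            allowed.append(action)
    return allowed

# Illustrative norm: no sharing of personal data without user consent.
no_sharing = Norm(
    name="no-data-sharing-without-consent",
    modality="prohibition",
    applies=lambda ctx: not ctx.get("consent", False),
    concerns=lambda action: action == "share_user_data",
)

print(permitted_actions(["share_user_data", "anonymise_data"],
                        [no_sharing], {"consent": False}))
# -> ['anonymise_data']

In an electronic institution, such checks would typically be enforced at the level of the interaction protocol rather than inside each agent, which is one way compliance becomes a property of the system by construction.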




Reasoning on Data: Challenges and Application

Marie-Christine Rousset
Université Grenoble-Alpes and Institut Universitaire de France
France

Brief Bio
Marie-Christine Rousset is a Professor of Computer Science at Université Grenoble Alpes and a senior member of the Institut Universitaire de France. Her areas of research are knowledge representation, information integration, pattern mining, and the Semantic Web. She has published around 100 refereed international journal articles and conference papers and has participated in several cooperative industry-university projects. She received a best paper award from AAAI in 1996 and was named an ECCAI Fellow in 2005. She has served on many program committees of international conferences and workshops and on the editorial boards of several journals. She is a co-holder of the chair "Explainable and Responsible AI" within the MIAI Grenoble Institute on AI, funded by the French national research program on AI.


Abstract
How to exploit knowledge to make better use of data is a timely issue at the crossroads of knowledge representation and reasoning, data management, and the Semantic Web.
Knowledge representation is emblematic of the symbolic approach to Artificial Intelligence, based on the development of explicit logic-based models processed by generic reasoning algorithms founded in logic. Recently, ontologies have emerged in computer science as computational artefacts that provide computer systems with a conceptual yet computational model of a particular domain of interest. Like humans, computer systems can then base decisions on reasoning about domain knowledge, and humans can express their data analysis needs using the terms of a shared vocabulary in their domain of interest or expertise. In this talk, I will show how reasoning on data can help to solve, in a principled way, several problems raised by modern data-centred applications in which data may be ubiquitous, multi-form, multi-source and multi-scale. I will also show how models and algorithms have evolved to face scalability issues and data quality challenges.
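
As a concrete illustration of reasoning on data, the minimal Python sketch below performs forward-chaining, RDFS-style saturation: a subclass hierarchy (the explicit logic-based model) plus one generic inference rule derives facts that the raw data never states explicitly. The tiny vocabulary and the data are assumptions made for the example, not material from the talk.

# Ontology: every ScientificPaper is a Publication; every Publication is a Document.
subclass_of = {
    "ScientificPaper": "Publication",
    "Publication": "Document",
}

# Raw data: a single typing triple.
triples = {("doc42", "type", "ScientificPaper")}

# Generic rule: (x type C) and (C subClassOf D)  =>  (x type D).
# Apply it until no new triple is produced (saturation / fixpoint).
changed = True
while changed:
    changed = False
    for (s, p, o) in list(triples):
        if p == "type" and o in subclass_of:
            derived = (s, "type", subclass_of[o])
            if derived not in triples:
                triples.add(derived)
                changed = True

print(sorted(triples))
# [('doc42', 'type', 'Document'), ('doc42', 'type', 'Publication'),
#  ('doc42', 'type', 'ScientificPaper')]

A query phrased in the user's vocabulary ("all Documents") now also returns doc42. When the data is too large to materialise, scalable approaches typically trade this eager saturation for query rewriting.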




A 20-Year Roadmap for AI Research

Bart Selman
Cornell University
United States

Brief Bio
Bart Selman is the Joseph C. Ford Professor of Engineering and Computer Science at Cornell University. Prof. Selman is the President-Elect of the Association for the Advancement of Artificial Intelligence (AAAI), the main international professional society for AI researchers and practitioners. He is also the co-Chair of a national study to determine the Roadmap for AI research to guide US government research investments in AI. Prof. Selman was previously at AT&T Bell Laboratories. His research interests include artificial intelligence, computational sustainability, efficient reasoning procedures, machine learning, deep learning, reinforcement learning, planning, knowledge representation, and connections between computer science and statistical physics. He has (co-)authored over 150 publications, which have earned six best paper awards and two classic paper awards. His papers have appeared in venues spanning Nature, Science, Proc. Natl. Acad. of Sci., and a variety of conferences and journals in AI and Computer Science. He has received the Cornell Stephen Miles Excellence in Teaching Award, the Cornell Outstanding Educator Award, an NSF Career Award, and an Alfred P. Sloan Research Fellowship. He is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a Fellow of the American Association for the Advancement of Science (AAAS), and a Fellow of the Association for Computing Machinery (ACM). He is the recipient of the Inaugural IJCAI John McCarthy Research Award, awarded in recognition of outstanding AI researchers at their mid-career stage.


Abstract
I will summarize the findings of a recently completed, community-driven study that sets research directions for AI research over the next 20 years. Over 100 prominent AI researchers contributed to the study, which was sponsored by the US-based Computing Community Consortium (CCC), working with the National Science Foundation (NSF) and the Association for the Advancement of AI (AAAI). I co-chaired the study together with Yolanda Gil (Univ. of Southern California and AAAI President).




Testing and Training Theory of Mind for Hybrid Human-agent Environments

Rineke Verbrugge
Bernoulli Institute, University of Groningen
Netherlands

Brief Bio
Rineke Verbrugge is a pioneer in building bridges between logic and cognitive science. Specifically, she was the first to combine dynamic epistemic logic and computational cognitive modelling to study theory of mind. Verbrugge holds the chair of Logic and Cognition at the University of Groningen’s Bernoulli Institute of Mathematics, Computer Science and Artificial Intelligence, where she leads the Multi-agent Systems research group. Her research spans provability logic, teamwork in multi-agent environments, and social cognition. Since her PhD on logic and the foundations of mathematics at the University of Amsterdam, Verbrugge has published more than 170 peer-reviewed international publications and a monograph, Teamwork in Multi-Agent Systems, with Barbara Dunin-Keplicz. She has led the NWO Vici project “Cognitive systems in interaction: Logical and computational models of higher-order social cognition” and is one of the six principal investigators of the Gravitation project “Hybrid Intelligence: Augmenting Human Intellect”, which runs in the Netherlands from 2020 to 2030. Verbrugge is an associate editor of the Journal of Logic, Language and Information and has been program chair of several workshops, conferences and a summer school, as well as secretary of the Benelux Association for Artificial Intelligence (BNVKI) and chair of the Dutch Association for Logic. She has won the Educator of the Year award of the Faculty of Mathematics and Natural Sciences of the University of Groningen and is an elected member of the Royal Holland Society of Sciences and Humanities.


Abstract
When engaging in social interaction, people rely on their ability to reason about other people’s mental states, including goals, intentions, and beliefs. This theory of mind ability allows them to more easily understand, predict, and even manipulate the behavior of others. People can also use their theory of mind to reason about the theory of mind of others, which allows them to understand sentences like “Alice believes that Bob does not know that she wrote a novel under a pseudonym”. But while the usefulness of higher orders of theory of mind is apparent in many social interactions, the empirical evidence so far has suggested that people often do not use this ability spontaneously when playing games, even when doing so would be highly beneficial.

In this lecture, we discuss some experiments in which we have attempted to encourage participants to engage in higher-order theory of mind reasoning by letting them play games against computational agents: the one-shot competitive Mod game; the turn-taking game Marble Drop; and the negotiation game Colored Trails. It turns out that we can entice people to use second-order theory of mind in Marble Drop and Colored Trails, and even third-order theory of mind in the Mod game. We discuss different methods of estimating participants’ reasoning strategies in these games, some based only on their moves in a series of games, others also based on reaction times or eye movements. In the coming era of hybrid intelligence, in which teams consist of humans, robots and software agents, it will be beneficial if the computational members of a team can diagnose the theory of mind levels of their human colleagues and even help them improve.
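
To make the notion of "orders" of theory of mind concrete, here is a minimal Python sketch of recursive reasoning in the Mod game, where a player scores by choosing a number exactly one above the opponent's choice, modulo m. The ToM-0 baseline (assume the opponent simply repeats their last move) and the function and parameter names are simplifying assumptions for illustration, not the computational models used in the experiments.

def tom_move(k: int, my_last: int, opp_last: int, m: int = 24) -> int:
    """Next move of a ToM-k player in the Mod game (win by playing opponent + 1 mod m)."""
    if k == 0:
        # Zero-order: no mental-state reasoning; assume the opponent repeats.
        predicted_opp = opp_last
    else:
        # Order k: step into the opponent's shoes and model them as a
        # ToM-(k-1) player, i.e. recurse with the two roles swapped.
        predicted_opp = tom_move(k - 1, opp_last, my_last, m)
    return (predicted_opp + 1) % m

# With my last move 5 and the opponent's last move 10:
for k in range(4):
    print(k, tom_move(k, my_last=5, opp_last=10))
# prints, one pair per line: 0 11, 1 7, 2 13, 3 9

Estimating which order k best explains a participant's observed sequence of moves, possibly refined with reaction times or eye movements, is exactly the kind of model-comparison problem the lecture discusses.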


