
Keynote Lectures

Agent-based Models for Language Adaptation and Change
Luc Steels, ICREA, Institute of Evolutionary Biology (UPF-CSIC) Barcelona, Spain

Accountability, Responsibility, Transparency - The ART of AI
Virginia Dignum, Delft University of Technology, Netherlands

Reading Agents that Hunger for Knowledge
Eduard Hovy, Carnegie Mellon University, United States

What We Do: Challenges in Individual Rationality and Its Collective Consequences
Luís Antunes, Universidade de Lisboa, Portugal

 

Agent-based Models for Language Adaptation and Change

Luc Steels
ICREA, Institute of Evolutionary Biology (UPF-CSIC) Barcelona
Spain
 

Brief Bio
Luc Steels is currently an ICREA research fellow at the Institute of Evolutionary Biology (UPF-CSIC) in Barcelona, where he pursues his interest in modeling language evolution, more concretely how autonomous robotic agents could develop their own language and ontologies in situated, embodied interactions. He studied computer science at MIT (US) and became a professor of Artificial Intelligence (AI) at the University of Brussels (VUB) in 1983, where he founded the VUB AI Lab. In 1996 he founded the Sony Computer Science Laboratory in Paris.


Abstract
Human natural languages, and the conceptual frameworks underlying them, are profoundly changing, not just in lexicon and phonology but also in grammar and semantics. Languages thus adapt their expressive power to the changing needs of their language communities. They change because of fashions and new communication media (such as texting or Twitter). They also change because there is no ideal solution to verbal communication: a community keeps navigating a multi-criterion landscape, trying to increase communicative success and decrease cognitive effort in one area while accepting increased complexity in another.

I will argue that it is worthwhile to try and model language as a complex adaptive system, both from the viewpoint of linguistic theory, which has so far failed to come up with adequate explanations for this phenomenon, and from the viewpoint of AI, because this challenge pushes us to develop new formalisms for grammar, language processing mechanisms more flexible than today's standard parsers and producers, and learning mechanisms that focus not only on parsing but also on producing in interaction.

The talk will be illustrated with video clips of our experiments with humanoid robots playing language games. The agents are initialized with no or only very limited forms of language and then develop grammars and lexicons in order to be successful in their language games.
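The core dynamic of such experiments can be illustrated with the simplest case, the naming game, in which agents converge on a shared name for an object purely through repeated pairwise interactions. The sketch below is an illustration of that dynamic only, not the speaker's actual robot implementation; all names and parameters are invented for the example.

```python
import random

def naming_game(n_agents=20, n_rounds=2000, seed=0):
    """Minimal naming game: agents converge on shared names for one object.

    Each agent keeps a set of candidate names. Per round, a random speaker
    names the object (inventing a name if it knows none); if the hearer
    already knows that name, both collapse their inventories to it
    (success), otherwise the hearer adds it (failure).
    """
    rng = random.Random(seed)
    inventories = [set() for _ in range(n_agents)]
    for _ in range(n_rounds):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not inventories[speaker]:
            inventories[speaker].add(f"word{rng.randrange(10**6)}")
        name = rng.choice(sorted(inventories[speaker]))
        if name in inventories[hearer]:
            inventories[speaker] = {name}   # success: both align on the name
            inventories[hearer] = {name}
        else:
            inventories[hearer].add(name)   # failure: hearer learns the name
    return inventories

inv = naming_game()
```

Despite there being no central coordination, the alignment rule on success typically drives the population toward a single shared name; the robot experiments in the talk extend this idea from single names to grammars and ontologies.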



 

 

Accountability, Responsibility, Transparency - The ART of AI

Virginia Dignum
Delft University of Technology
Netherlands
 

Brief Bio
Virginia Dignum is Associate Professor of Social Artificial Intelligence at the Faculty of Technology, Policy and Management at TU Delft. Her research focuses on value-sensitive design of intelligent systems and multi-agent organisations, in particular on the ethical and societal impact of AI. She is Executive Director of the Delft Design for Values Institute, secretary of the International Foundation for Autonomous Agents and Multi-agent Systems (IFAAMAS), and a member of the Executive Committee of the IEEE Initiative on Ethics of Autonomous Systems. She was co-chair of ECAI 2016, the European Conference on AI, and vice president of the BNVKI (Benelux AI Association).


Abstract
As robots and other AI systems move from being tools to being teammates, and increasingly make decisions that directly affect society, many questions arise across social, economic, political, technological, legal, ethical and philosophical issues. Can machines make moral decisions? Should artificial systems ever be treated as ethical entities? What are the legal and ethical consequences of human enhancement technologies, or cyber-genetic technologies? What are the consequences of extended government, corporate, and other organisational access to knowledge and predictions concerning citizen behaviour? How can moral, societal and legal values be part of the design process? How and when should governments and the general public intervene?

Answering these and related questions requires a whole new understanding of ethics with respect to control and autonomy in the changing socio-technical reality. Means are needed to integrate moral, societal and legal values with technological developments in Artificial Intelligence, both within the design process and as part of the deliberation algorithms employed by these systems. In this talk I discuss leading ethics theories, propose alternative ways to model ethical reasoning, and discuss their consequences for the design of robots and softbots. Depending on the level of autonomy and social awareness of AI systems, different methods for ethical reasoning are needed. Given that ethics is dependent on the sociocultural context and often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders and to make them explicit, which can lead to better understanding of, and trust in, artificial autonomous systems.
The urgency of these issues is acknowledged by researchers and policy makers alike. Methodologies are needed to ensure ethical design of AI systems, including means to ensure accountability, responsibility and transparency (ART) in system design.



 

 

Reading Agents that Hunger for Knowledge

Eduard Hovy
Carnegie Mellon University
United States
www.cs.cmu.edu/~hovy
 

Brief Bio
Eduard Hovy is a professor at the Language Technology Institute in the School of Computer Science at Carnegie Mellon University. He holds adjunct professorships at universities in the US, China, and Canada, and is co-Director of Research for the DHS Center for Command, Control, and Interoperability Data Analytics, a distributed cooperation of 17 universities. Dr. Hovy completed a Ph.D. in Computer Science (Artificial Intelligence) at Yale University in 1987, and was awarded honorary doctorates from the National Distance Education University (UNED) in Madrid in 2013 and the University of Antwerp in 2015. He is one of the initial 17 Fellows of the Association for Computational Linguistics (ACL) and also a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI). From 1989 to 2012 he directed the Human Language Technology Group at the Information Sciences Institute of the University of Southern California. Dr. Hovy’s research addresses several areas in Natural Language Processing, including machine reading of text, question answering, information extraction, automated text summarization, the semi-automated construction of large lexicons and ontologies, and machine translation. His contributions include the co-development of the ROUGE text summarization evaluation method, the BLANC coreference evaluation method, the Omega ontology, the Webclopedia QA Typology, the FEMTI machine translation evaluation classification, the DAP text harvesting method, the OntoNotes corpus, and a model of Structured Distributional Semantics. In November 2016 his Google h-index was 67. Dr. Hovy is the author or co-editor of six books and over 400 technical articles and is a popular invited speaker. In 2001 Dr. Hovy served as President of the ACL, in 2001–03 as President of the International Association of Machine Translation (IAMT), and in 2010–11 as President of the Digital Government Society. Dr. Hovy regularly co-teaches courses and serves on Advisory Boards for institutes and funding organizations in Germany, Italy, Netherlands, and the USA.


Abstract
True intelligent agenthood (as opposed to mere agency) is characterized by self-driven internal goal creation and prioritization. Few AI systems enjoy the freedom today to autonomously decide what to do next; even robots and planning systems start with a fairly concrete goal and stop acting when they have achieved it. In a small experimental project at CMU we have been exploring what it might mean for a Natural Language text reading engine to experience a ‘hunger for knowledge’ that drives what it chooses to read and learn about next, in an ongoing manner. There is no overall goal other than trying to increase its understanding (coverage and interpretations) of the world as described in Wikipedia. The starting point is a sketchy representation of the Infoboxes of all the people listed in Wikipedia, and the principal criterion for choosing what to read about next is the desire to minimize knowledge gaps and remove inconsistencies. In contrast to Freebase, Knowledge Graphs, and other text mining projects, internal generalization is central to our work. To implement the system we combine traditional AI frame proposition representation for the basic information (to make it readable by humans) with neural networks such as autoencoders to perform generalization and anomaly detection.
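The gap-minimization criterion described above can be sketched in miniature: represent each person as a frame (a slot-value mapping) against an expected schema, and pick as the next reading target the entity whose frame leaves the most slots unfilled. This is a hypothetical illustration of the selection criterion only; the schema, names, and helper functions below are invented for the example and are not the CMU system.

```python
# Expected slots for a "person" frame (invented for this sketch).
SCHEMA = {"born", "occupation", "nationality", "known_for"}

# Toy frames, as might be extracted from Wikipedia Infoboxes.
frames = {
    "Ada Lovelace": {"born": "1815", "occupation": "mathematician",
                     "nationality": "British", "known_for": "first program"},
    "Alan Turing":  {"born": "1912", "occupation": "computer scientist"},
    "Grace Hopper": {"born": "1906"},
}

def knowledge_gap(frame):
    """Number of schema slots the frame leaves unfilled."""
    return len(SCHEMA - frame.keys())

def next_to_read(frames):
    """Entity with the largest knowledge gap (ties broken alphabetically)."""
    return max(sorted(frames), key=lambda name: knowledge_gap(frames[name]))

print(next_to_read(frames))  # the frame missing the most slots wins
```

Here "Grace Hopper" is selected, since her frame fills only one of the four expected slots; a fuller system would also score inconsistencies between slot values, and would use learned generalization (e.g. autoencoders) rather than a fixed schema to decide what counts as a gap or an anomaly.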



 

 

What We Do: Challenges in Individual Rationality and Its Collective Consequences

Luís Antunes
Universidade de Lisboa
Portugal
 

Brief Bio
Luís Antunes holds a PhD in Computer Science from the University of Lisbon (2001). He has been a researcher in Artificial Intelligence since 1988 and has published more than 80 refereed scientific papers. He was the founder and first director of the Group of Studies in Social Simulation (GUESS). Luís Antunes serves on the Program Committees of some of the most important international conferences on Artificial Intelligence, Multi-Agent Systems and Social Simulation, such as AAMAS, ECAI, ESSA, WCSS and MABS. He was co-chair of the international workshops MABS'05, MABS'06 and MABS'07 on Multi-Agent-Based Simulation, and co-editor of the Springer-Verlag proceedings volumes. He is or was a member of the MABS Steering Committee, the EUMAS Advisory Board, the ESSA Management Committee, and the board of directors of APPIA (the Portuguese Association for AI). Antunes hosted EUMAS 2006, AAMAS 2008, and ECAI 2010 as chair of the organising committee. He was the proponent and a co-chair of the first IJCAI workshop on Social Simulation (SS@IJCAI 2009), the founder of the first Portuguese Workshop on Social Simulation as a Special Track of EPIA, and the Program Chair of EPIA 2011.


Abstract
I start from a notion of individual, situated and multi-varied rationality, and then integrate each individual in its own social context (which includes multiple concomitant relations and permeability from one to another). The role of the modeller is not that of an anonymous avatar for a misplaced notion of scientific objectivity. These activities have a subject, and deal with subjectivity explicitly. In particular, the source of the agents’ own motivations is not exogenous to the system. Goals are acquired through mimetic desire, a mechanism introduced by René Girard that decomposes motivation into a particular geometric structure. These are the key ingredients for proposing a framework for tackling complex social problems through simulation of autonomous computational agents. Within this framework I will present several examples of application to concrete problems. The lecture ends with a challenge for tackling a particularly shocking application field.


