
Keynote Lectures

Advances in Learning with Bayesian Networks
Philippe Leray, Université de Nantes, France

The New Era of High-Functionality Computing
Henry Lieberman, Independent Researcher, United States

Agents and Semantics for Human Decision-Making
Matthias Klusch, German Research Center for Artificial Intelligence (DFKI) GmbH, Germany

The Brain Agent
Claude Frasson, Department of Computer Science and Operations Research, University of Montreal, Canada

The Intelligence of Agents in Games
Pieter Spronck, Tilburg University, Netherlands

Demystifying Big Data - From Causality to Correlation
Jaap van den Herik, Leiden University, Netherlands


 

Advances in Learning with Bayesian Networks

Philippe Leray
Université de Nantes
France

Brief Bio
Philippe Leray graduated from a French engineering school in Computer Science in 1993 and received a Ph.D. in Computer Science from the University of Paris 6 in 1998, on the use of Bayesian and neural networks for complex system diagnosis. Since 2007, he has been a full professor at Polytech'Nantes, a multidisciplinary French engineering school. He has worked intensively in the field of Bayesian networks for the past twelve years, with interests in both theory (Bayesian network structure learning, causality) and applications (reliability, intrusion detection, bioinformatics). Since January 2012, he has also been the head of the "Knowledge and Decision" research group in the Nantes Computer Science lab. This research group is structured around three main research themes: data mining (association rule mining and clustering) and machine learning (probabilistic graphical models), knowledge engineering, and knowledge visualization, with the transverse ambition of improving the performance of mining and learning algorithms, in terms of both complexity and "actionability", by integrating domain and/or user knowledge.


Abstract

Bayesian networks (BNs) are a powerful tool for graphically representing the knowledge underlying data and for reasoning with incomplete or imprecise observations.
BNs have been extended (or generalized) in several ways, for instance as causal BNs, dynamic BNs, and relational BNs.

In this lecture, we will focus on Bayesian network learning.
The learning problem can differ with respect to the task (a generative model versus a discriminative one), and also with respect to the nature of the data: complete data, incomplete data, non-i.i.d. data, far more variables than samples, data streams, or the presence of prior knowledge.

Given the diversity of these problems, many approaches have emerged in the literature. I will present a brief panorama of these algorithms and describe our current work in this field.
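As a concrete illustration of the simplest setting mentioned above (generative learning from complete data), the following minimal Python sketch estimates the parameters of a fixed two-variable structure A -> B by maximum likelihood with Laplace smoothing. The toy dataset, variable names, and smoothing choice are assumptions made for illustration only, not material from the lecture.

  # Minimal sketch (illustrative, not from the lecture): maximum-likelihood
  # parameter learning for a fixed Bayesian network structure A -> B from
  # complete data, with Laplace (add-one) smoothing.
  from collections import Counter

  # Toy complete dataset: each row is one observation of (A, B), both binary.
  data = [(0, 0), (0, 1), (1, 1), (1, 1), (0, 0), (1, 0)]

  # P(A): smoothed marginal counts.
  a_counts = Counter(a for a, _ in data)
  p_a = {a: (a_counts[a] + 1) / (len(data) + 2) for a in (0, 1)}

  # P(B | A): smoothed conditional counts.
  ab_counts = Counter(data)
  p_b_given_a = {a: {b: (ab_counts[(a, b)] + 1) / (a_counts[a] + 2) for b in (0, 1)}
                 for a in (0, 1)}

  print("P(A):  ", p_a)
  print("P(B|A):", p_b_given_a)

Incomplete data, non-i.i.d. data, or structure learning would each require considerably more machinery than this counting scheme.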



 

 

The New Era of High-Functionality Computing

Henry Lieberman
Independent Researcher
United States
 

Brief Bio
Henry Lieberman has been a research scientist at the MIT Media Lab since 1987. His research interests focus on the intersection of artificial intelligence and the human interface. He directs the Lab's Software Agents group, which is concerned with making intelligent software that provides assistance to users through interactive interfaces. In 2001 he edited Your Wish is My Command, which takes a broad look at how to get away from "one size fits all" software, introducing the reader to a technology that one day will give ordinary users the power to create and modify their own programs. In addition, he is working on agents for browsing the Web and for digital photography, and has built an interactive graphic editor that learns from examples and annotation on images and video. Lieberman worked with the late Muriel Cooper, who headed the Lab's Visual Language Workshop, in developing systems that support intelligent visual design; their projects involved reversible debugging and visualization for programming environments, and developing graphic metaphors for information visualization and navigation. From 1972 to 1987, he was a researcher at the MIT Artificial Intelligence Laboratory, where he joined Seymour Papert in the group that originally developed the educational language Logo, and wrote the first bitmap and color graphics systems for Logo. He also worked with Carl Hewitt on Actors, an early object-oriented, parallel language, and developed the notion of prototype object systems and the first real-time garbage collection algorithm. He holds a doctoral-equivalent degree (Habilitation) from the University of Paris VI and was a visiting professor there in 1989-90.


Abstract
Traditional user interface design works best for applications that only have a relatively small number of operations for the user to choose from. These applications achieve usability by maintaining a simple correspondence between user goals and interface elements such as menu items or icons. But we are entering an era of high-functionality applications, where there may be hundreds or thousands of possible operations. In contexts like mobile phones, even if each individual application is simple, the combined functionality represented by the entire phone constitutes such a high-functionality command set. How are we going to manage the continued growth of high-functionality computing?
Artificial Intelligence promises some strategies for dealing with high-functionality situations. Interfaces can be oriented around the goals of the user rather than the features of the hardware or software. New interaction modalities like natural language, speech, gesture, vision, and multi-modal interfaces can extend the interface vocabulary. End-user programming can rethink the computer as a collection of capabilities to be composed on the fly rather than a set of predefined "applications". We also need new ways of teaching users about what kinds of capabilities are available to them, and how to interact in high-functionality situations.



 

 

Agents and Semantics for Human Decision-Making

Matthias Klusch
German Research Center for Artificial Intelligence (DFKI) GmbH
Germany
http://www.dfki.de/~klusch
 

Brief Bio
PD Dr Matthias Klusch is a Senior Researcher and Research Fellow of the German Research Center for Artificial Intelligence (DFKI) as well as a habilitated private lecturer (PD) at the computer science department of Saarland University in Saarbruecken, Germany. He is head of the Intelligent Information Systems (I2S) research team of the DFKI department of Agents and Simulated Reality. His research is concerned with the theory and applications of semantic web technologies, service-oriented computing, and intelligent software agents in the Internet and Web. He received his MSc (1992) and PhD (1997) from the University of Kiel and his habilitation degree (2009) in computer science from Saarland University, Germany. In addition, he has been an adjunct professor of computer science at the Swinburne University of Technology in Melbourne (Australia), an assistant professor at the Free University of Amsterdam (Netherlands), the Chemnitz University of Technology and the University of Kiel (Germany), and a postdoctoral researcher at Carnegie Mellon University in Pittsburgh (USA). Among other things, he co-founded the German conference series on multi-agent system technologies (MATES) in 2003, was a nominee for the ACM SIGART Award for Excellence in Autonomous Agent Research in 2008, is an editorial board member of several major international journals in relevant areas, has served on the program committees of about 180 conferences, and has co-authored about 190 publications. Web: www.dfki.de/~klusch


Abstract

Since the 1990s, intelligent agents and multi-agent systems, with the characteristics ascribed to them by the AI community, have been used to intelligently support individual human users or groups of users in their business or personal decision-making, in a large number and variety of applications in the Internet and Web. The frameworks and systems developed for agent-based decision support differ in the extent to which they cover the typical phases of human decision-making, the types of decisions to be made, and the effectiveness and efficiency of their support. In this respect, recommender systems, automated negotiation systems, matchmaker agents, and virtual user agents or avatars are classical examples, to name but a few. On the other hand, semantic web technologies have been used during the past decade to provide better decision support by means of formal ontology-based representation, discovery, integration, and sharing of relevant heterogeneous data, knowledge, and services. Both agents and the semantic web have deep and common roots in AI. However, despite the tremendous advances in research and development made in each of these two technology domains, the full synergetic potential of agent-based and semantics-empowered decision support in innovative applications in the future Internet remains to be unlocked.

In this talk, I will present and discuss the basic principles, potential, and challenges of the individual and combined use of agents and semantic technologies for intelligent decision support, together with selected showcases from different application domains.



 

 

The Brain Agent

Claude Frasson
Department of Computer Science and Operations Research, University of Montreal
Canada
http://www.iro.umontreal.ca/~frasson/index.php
 

Brief Bio
Claude Frasson has been a Professor of Computer Science at the University of Montreal since 1983. He is head of the HERON laboratory and of the GRITI inter-university research group, which involves seven universities in Quebec. His research interests lie at the convergence of Artificial Intelligence, Education, and Human-Computer Interaction. He coordinated the SAFARI project, a multidisciplinary research effort involving an industrial approach for building, distributing and controlling intelligent courses on the Web, and is at the origin of a patent on a distance-learning architecture based on networked cognitive agents. Since 2004, he has aimed to understand, using EEG systems, how the human brain functions with emotions, considering that they play an important role in knowledge acquisition and subconscious learning. Holding a doctorat d'Etat in Computer Science (1981), he has also been a scientific adviser to the United Nations in New York and to the World Bank in Washington.


Abstract

Recent research has demonstrated the importance of cognition, motivation, and especially emotion in the learning process. The link between emotion and learning appears fundamental, and several physiological devices have been used to measure emotional behaviour, such as seat position, skin conductivity, face reading, pressure-sensitive mice, blood pressure, or even cardiac rhythms. Usually, these sensors measure the physical reactions and external appearance of the learner.

New environments have emerged with new sensors such as the electroencephalogram (EEG). They make it possible to record the brain activity of learners during their learning tasks and to detect their emotions. This electrical activity can play an important role not only in detecting brain states but also in helping the learner to be in the best conditions for learning. The brain itself can then be considered an important agent. By measuring mental activity, we could derive pedagogical advice or adequate emotional environments to strengthen learning. We will illustrate how these components open new perspectives in various domains.



 

 

The Intelligence of Agents in Games

Pieter Spronck
Tilburg University
Netherlands
 

Brief Bio
Pieter Spronck graduated from the Delft University of Technology in Computer Science in 1996. He received his Ph.D. degree in Artificial Intelligence from Maastricht University in 2005, with a thesis focused on adaptive artificial intelligence for virtual characters in video games. This work made him one of the founders of the "adaptive video game AI" research domain. Currently, he is an associate professor in artificial intelligence and knowledge engineering at the Tilburg center for Cognition and Communication of Tilburg University, The Netherlands, where he heads the "games and knowledge-based systems" group. His research focuses on the application of machine learning techniques to the behavior of virtual characters in games and simulations. One of his specializations is "player modeling", i.e., the construction of an accurate profile of a game player by directly observing the player's behavior in the game. He has published over 100 papers on his research interests in refereed journals and conference proceedings.


Abstract
In recent decades, computer games have for many people replaced books and movies as their main source of entertainment. In games, players interact with virtual characters, which are often controlled by artificial intelligence. As these virtual characters act autonomously in their virtual environment, they can be thought of as situated agents. A big challenge in controlling the behavior of these agents is that many games place heavy requirements on the use of resources and the expression of the agents' behaviors. The requirement that might be the hardest to meet is that the agents' behavior should appear "natural" within the virtual environment.

For commercial video games, developers often employ techniques that circumvent the autonomy of the agents ("cheating"). This may be considered acceptable from the perspective of entertainment. However, in so-called "serious games" (games for training and education), the shortcuts that commercial developers tend to use may defeat the game's purpose. Therefore, for serious games, tools and techniques that are more advanced than the commonly used ones are a necessity.

In this talk, I will give an overview of the state of the art in implementing virtual agents in games, and compare it with the requirements that these agents must meet. I will then discuss some of the techniques developed by academic researchers to implement the behavior of game agents, in particular concerning their ability to adapt to the dynamics of the environment and their ability to model the behavior of other agents, including those controlled by humans, as sketched below.
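To make the last point more concrete, here is a minimal Python sketch of one simple flavour of player modeling: a frequency-based profile built by observing a player's actions. The class, the action names, and the observation sequence are illustrative assumptions, not techniques taken from the talk.

  # Minimal sketch (illustrative, not from the talk): a frequency-based player
  # model that profiles a player by counting observed in-game actions.
  from collections import Counter

  class PlayerModel:
      def __init__(self):
          self.action_counts = Counter()

      def observe(self, action: str) -> None:
          """Record one observed player action."""
          self.action_counts[action] += 1

      def profile(self) -> dict:
          """Return the relative frequency of each observed action."""
          total = sum(self.action_counts.values())
          return {a: c / total for a, c in self.action_counts.items()}

  # Hypothetical observation sequence; action names are invented for illustration.
  model = PlayerModel()
  for action in ["attack", "attack", "defend", "explore", "attack"]:
      model.observe(action)
  print(model.profile())  # {'attack': 0.6, 'defend': 0.2, 'explore': 0.2}

A profile like this could, for instance, let an adaptive agent counter an aggressive player differently from an explorative one; richer models would of course condition on game state.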



 

 

Demystifying Big Data - From Causality to Correlation

Jaap van den Herik
Leiden University
Netherlands
 

Brief Bio
Jaap van den Herik is one of the founders of the Netherlands Foundation of Artificial Intelligence (1981), nowadays called BNVKI – Benelux Association for Artificial Intelligence, of which he is an honorary member (see https://ii.tudelft.nl/bnvki/?p=1790). He is a visionary professor who predicted in 1991 that machines would judge court cases and replace judges in the future. He started as a frontrunner in computer-chess research (1983). He has attracted many talented researchers and has been the successful supervisor of 93 Ph.D. researchers. As Professor of Law and Computer Science at Leiden University, Leiden, the Netherlands, he is now finalising the supervision of four more Ph.D. students. For ICAART, Jaap has been active in all possible roles, from participant to keynote speaker and session chair, and later program chair and conference chair. His current research interests are: Legal Technologies, Intelligent Systems for Law Applications, e-Discovery, Computer Games, Serious Games, Big Data, Machine Learning, Adaptive Agents, Neural Networks, Information Retrieval, and e-Humanities.


Abstract

Big Data is a term often used to denote a collection of data that is too big to process by conventional data analysis. In many fields there is an abundance of data, for example in chess, particle physics, and astrophysics. This keynote aims to shed light on this emerging field, to show present and future technical challenges for artificial intelligence and, above all, to demystify Big Data by freeing it from the meanings and opinions attached to it by laymen.

Almost all available knowledge is currently stored in large databases. Traditional data extraction methods assume that the data under investigation is structured in such a way that it is already known what the relevant information is. For Big Data, it is neither known in advance what is relevant, nor how to extract the relevant data from the data banks. Thus one of the central challenges is how to identify and extract relevant data from this enormous data source.

A second crucial difference between traditional data processing and the processing of Big Data is their focus, which has shifted from establishing causal relations to detecting correlations. In practice, correlations are easy to derive from data. Moreover, they form the relevant information for users and customers. As a case in point, we mention a well-known result in crime prevention. Correlations show that when it is warm outside, more crimes happen around supermarkets. The causal relation between the two is not immediately obvious, but this does not affect law-enforcement policy: there should be more police around supermarkets when it is hot, regardless of the cause of the crime. The observation that correlations are replacing causal relations in many domains is called the computational turn. Effectively, this characterizes the importance of Big Data. Moreover, finding correlations requires little intelligence, artificial or otherwise.
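To illustrate how little machinery the correlational view needs, the short Python sketch below computes a Pearson correlation between temperature and the number of incidents near supermarkets. The figures are invented solely for illustration and are not data from the lecture.

  # Minimal sketch (invented figures): a correlation is easy to compute,
  # but it says nothing about the cause of the relation.
  import numpy as np

  temperature = np.array([18, 21, 24, 27, 30, 33])  # daily high, degrees C
  incidents   = np.array([ 3,  4,  4,  6,  7,  9])  # incidents near supermarkets

  r = np.corrcoef(temperature, incidents)[0, 1]     # Pearson's r
  print(f"Pearson correlation: {r:.2f}")            # strongly positive, cause unknown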

It follows that one of the challenges of Big Data lies in the interpretation of (correlational) results, by (1) visualizing the data and (2) forming a narrative. Visualizing the data is important to understand the correlations that are found. It is also the first step of a narrative: what story does the data tell? For example, the Google Flu database shows the regional prevalence of the flu. A narrative would be to find out the underlying cause of why this is the case.

Big Data is a promising area for future research in artificial intelligence. First of all, the identification of relevant data is a challenging task and will become ever more challenging as the data collected grows exponentially. Next, there is a theoretical challenge: currently, the computational turn has shifted the focus to correlation. However, as the intelligence of computer programs progresses, it may become possible to reason about data, build models, and derive causation from correlations again.

Below we mention ten applications, of which two will be discussed during the keynote lecture.

  1. Safety (politics, military)
  2. Public Safety (Live View)                
  3. Commerce (ads)
  4. Banking (money streams)
  5. Judiciary (CODR)
  6. Waterway transport                         
  7. Communication (twitter, phablet)
  8. Education (MOOC)
  9. Public governance
  10. Warfare (Multi Agent Systems, Socio Cognitive Models)

To conclude, if advances in AI are able to tackle the causation problem, the Big Data of today will become the traditional data of tomorrow.



 



 

