
Keynote Lectures

Con Espressione! AI, Machine Learning, and Musical Expressivity
Gerhard Widmer, Johannes Kepler University and LIT | AI Lab, Linz Institute of Technology, Austria

Bioinspired Soft Robotics: Nature-inspired Solutions for Artificial Intelligent Systems
Barbara Mazzolai, Istituto Italiano di Tecnologia, Italy

From Probabilistic Circuits to Probabilistic Programs and Back
Guy Van den Broeck, UCLA, United States

Explainable Machine Learning for Trustworthy AI
Fosca Giannotti, ISTI-CNR, Italy


Con Espressione! AI, Machine Learning, and Musical Expressivity

Gerhard Widmer
Johannes Kepler University and LIT | AI Lab, Linz Institute of Technology
Austria

Brief Bio
Gerhard Widmer is Professor and Head of the Institute of Computational Perception at Johannes Kepler University, Linz, Austria, and deputy director of the LIT AI Lab at the Linz Institute of Technology (LIT). His research interests include AI, machine learning, and intelligent audio and music processing, and his work is published in a wide range of scientific fields, from AI and machine learning to audio, multimedia, musicology, and music psychology. He is a Fellow of the European Association for Artificial Intelligence (EurAI), has been awarded Austria's highest research awards, the START Prize (1998) and the Wittgenstein Award (2009), and currently holds an ERC Advanced Grant for research on computational models of expressivity in music.


Abstract
Much of current research in Artificial Intelligence and Music, and particularly in the field of Music Information Retrieval (MIR), focuses on algorithms that interpret musical signals and recognise musically relevant objects and patterns at various levels (from notes to beats and rhythm, to melodic and harmonic patterns and higher-level structure), with the goal of supporting novel applications in the digital music world.
This presentation will give the audience a glimpse of what computational music perception systems can currently do with music, and what this is good for. However, we will also find that while some of these capabilities are quite impressive, they are still far from showing (or requiring) a deeper "understanding" of music. An ongoing project will be presented that aims to take AI & music research a step further, going beyond the surface and focusing on the *expressive* aspects, and how these are communicated in music.
We will look at recent work on computational models of expressive music performance and some examples of the state of the art, and will discuss possible applications of this research. In the process, the audience will be subjected to a little experiment which may, or may not, yield a surprising result.
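
As a concrete illustration of the signal-level analysis mentioned in the abstract (finding beats and note onsets in audio), here is a minimal sketch using the open-source librosa library. This is illustrative only, not the speaker's own system, and the audio file name is a placeholder.

```python
# Minimal beat and onset detection sketch with librosa (illustrative;
# "performance.wav" is a placeholder path, not a file from the talk).
import librosa

# Load an audio file as a mono waveform plus its sampling rate.
y, sr = librosa.load("performance.wav")

# Estimate a global tempo (BPM) and the frame positions of beats.
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

# Detect note onsets, a lower-level "musically relevant object".
onset_frames = librosa.onset.onset_detect(y=y, sr=sr)
onset_times = librosa.frames_to_time(onset_frames, sr=sr)

print("Estimated tempo (BPM):", tempo)
print("First beats (s):", beat_times[:5])
print("First onsets (s):", onset_times[:5])
```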




Bioinspired Soft Robotics: Nature-inspired Solutions for Artificial Intelligent Systems

Barbara Mazzolai
Istituto Italiano di Tecnologia
Italy

Brief Bio
Barbara Mazzolai is Director of the Center for Micro-BioRobotics (CMBR) of the Istituto Italiano di Tecnologia (IIT), and Principal Investigator of the Bioinspired Soft Robotics Research Line. Her research activity lies primarily in the fields of biologically inspired robotics and soft robotics, combining biology and engineering to advance technological innovation and scientific knowledge. She graduated (MSc) in Biology (with Honours) at the University of Pisa, Italy, and received her Ph.D. in Microsystem Engineering from the University of Rome Tor Vergata. She was Assistant Professor in Biomedical Engineering at SSSA from 1999 to 2009. From November 2009 to February 2011 she was Team Leader at the CMBR. She was Deputy Director for the Supervision and Organization of the IIT Centers Network from July 2012 to 2017. From January to July 2017 she was Visiting Faculty at the Aerial Robotics Lab, Department of Aeronautics, Imperial College London. She has been a member of the Scientific Advisory Board of the Max Planck Institute for Intelligent Systems (Tübingen and Stuttgart, Germany) since 2016, and a member of the Advisory Committee of the Cluster on Living Adaptive and Energy-autonomous Materials Systems - livMatS (Freiburg, Germany) since 2019. From 2012 to 2015 she was the Coordinator of the EU-funded FET-Open PLANTOID project (FP7-293431). Currently she is the Coordinator of the EU-funded FET-Proactive GrowBot project (H2020-824074) and of the EU-funded FET-Proactive I-Seed project (H2020-101017940). Since January 2020 she has also been a National Geographic Explorer with the grant "RoKER" (Roots of Knowledge with Environmental Robots). She is author and co-author of more than 270 papers that have appeared in international journals, books, and conference proceedings, and in 2019 she published the book “La Natura Geniale” (Longanesi).


Abstract
During the last decades, the field of robotics has been increasingly enriched by the use of biomimetics. The large variety of organisms, with smart structures and functions developed through their evolution and adaptation to different environments, represents a huge source of inspiration for the design of innovative sustainable materials and technologies.
Therefore, in the era of digitalization, virtual reality, and automation in which we live, the relationship between robotics, bioinspired design, and bio-robotic hybrid systems is getting much stronger.
Novel fabrication methods and additive manufacturing are being explored as creative tools by roboticists, while others are pursuing robotically augmented design, in which robots can adapt and evolve with the world around them.
Soft, multi-functional, adaptable, and growing structures, taking inspiration from the plant kingdom and from the strategies of some animal species, have started a novel trend in biomimetic robotics, with relevant implications for more sustainable applications.
Here, I will present our recent results in the field of octopus-like systems and plant-like self-growing robots. The potential impact of these bioinspired robots on society could be huge and wide-ranging, e.g., in rescue, medical applications, space, or environmental monitoring. Since the design of these robotic solutions is deeply rooted in a few selected biological features, a new view of robots for biology can be envisaged, with the goal of giving insights into the organisms themselves and opening exciting new opportunities in both science and engineering.




From Probabilistic Circuits to Probabilistic Programs and Back

Guy Van den Broeck
UCLA
United States

Brief Bio
Guy Van den Broeck is an Associate Professor and Samueli Fellow at UCLA, in the Computer Science Department, where he directs the Statistical and Relational Artificial Intelligence (StarAI) lab. His research interests are in Machine Learning, Knowledge Representation and Reasoning, and Artificial Intelligence in general. His work has been recognized with best paper awards from key artificial intelligence venues such as UAI, ILP, KR, and AAAI (honorable mention). He also serves as Associate Editor for the Journal of Artificial Intelligence Research (JAIR). Guy is the recipient of an NSF CAREER award, a Sloan Fellowship, and the IJCAI-19 Computers and Thought Award.


Abstract
Probabilistic graphical models (PGMs) are a rich staple of probabilistic AI centered around variable-level (in)dependencies. In this talk I present some recent work on probabilistic models that go beyond classical PGMs and make a radically different choice of abstraction: one that is computational. Concretely, I will discuss two up-and-coming classes of models: probabilistic circuits and probabilistic programs. Probabilistic circuits represent distributions through the computation graph of probabilistic inference. They move beyond PGMs and deep generative models by guaranteeing tractable inference for certain classes of queries, thereby enabling new solutions to some key problems in machine learning. Probabilistic programs represent distributions through higher-level primitives of computation: iteration, branching, and procedural abstraction. They move beyond PGMs by looking "inside" the dependencies. Finally, I will illustrate how these two computational abstractions are themselves closely related, by showing how the Dice probabilistic programming language compiles probabilistic programs into probabilistic circuits for inference.
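
To make the idea of a circuit as a "computation graph of probabilistic inference" concrete, here is a self-contained toy probabilistic circuit (a two-component sum-product network over binary variables A and B). It is a didactic sketch, not output of the Dice compiler: the key point is that leaving a variable unassigned marginalizes it out in a single bottom-up evaluation, the kind of tractable query the abstract refers to.

```python
# Toy probabilistic circuit: sum nodes mix distributions, product nodes
# factorize over disjoint sets of variables, leaves hold one variable each.

def leaf(var, prob_true):
    """P(var = assigned value); an unassigned variable is summed out
    (its leaf contributes prob_true + (1 - prob_true) = 1)."""
    def f(assignment):
        v = assignment.get(var)
        if v is None:
            return 1.0
        return prob_true if v else 1.0 - prob_true
    return f

def product(*children):
    def f(assignment):
        result = 1.0
        for child in children:
            result *= child(assignment)
        return result
    return f

def weighted_sum(weights, children):
    def f(assignment):
        return sum(w * c(assignment) for w, c in zip(weights, children))
    return f

# Mixture of two fully factorized components over A and B.
circuit = weighted_sum(
    [0.3, 0.7],
    [product(leaf("A", 0.9), leaf("B", 0.2)),
     product(leaf("A", 0.1), leaf("B", 0.8))],
)

print(circuit({"A": True, "B": True}))  # joint P(A=1, B=1) = 0.11
print(circuit({"A": True}))             # marginal P(A=1) = 0.34, one pass
```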




Explainable Machine Learning for Trustworthy AI

Fosca Giannotti
ISTI-CNR
Italy

Brief Bio
Fosca Giannotti is a director of research in computer science at the Information Science and Technology Institute “A. Faedo” of CNR, Pisa, Italy. Fosca leads the Pisa KDD Lab - Knowledge Discovery and Data Mining Laboratory http://kdd.isti.cnr.it, founded in 1994, one of the earliest research labs on data mining. Her research interests include social data mining, social network analysis, and privacy-preserving data mining, and more recently human-centered AI and trustworthy AI. She coordinates SoBigData, a research infrastructure on Big Data Analytics and Social Mining http://www.sobigdata.eu, an ecosystem of 32 European research centres on Data Science. She is the PI of the ERC Advanced Grant XAI – Science and technology for the explanation of AI decision making. In March 2019 she was featured as one of the 19 Inspiring Women in AI, Big Data, Data Science, and Machine Learning by KDnuggets.com, the leading US site on AI, Data Mining and Machine Learning.


Abstract
Black box AI systems for automated decision making, often based on machine learning over (big) data, map a user’s features into a class or a score without exposing the reasons why. This is problematic not only for the lack of transparency, but also for possible biases inherited by the algorithms from human prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions. The future of AI lies in enabling people to collaborate with machines to solve complex problems. Like any efficient collaboration, this requires good communication, trust, clarity, and understanding. Explainable AI addresses these challenges, and for years different AI communities have studied the topic, leading to different definitions, evaluation protocols, motivations, and results. This lecture provides a reasoned introduction to the work on Explainable AI (XAI) to date, and surveys the literature with a focus on machine learning and symbolic AI approaches. We motivate the need for XAI in real-world and large-scale applications, while presenting state-of-the-art techniques and best practices, as well as discussing the many open challenges.
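
As one concrete example of the explanation techniques this line of work surveys, here is a minimal LIME-style local surrogate sketch: a black-box classifier is probed with random perturbations around a single decision, and a weighted linear model fit to its outputs acts as a local explanation whose coefficients are feature attributions. The dataset, model, and kernel width below are illustrative assumptions; dedicated toolkits (e.g., lime, shap) are far more careful.

```python
# LIME-style local surrogate (illustrative sketch, synthetic data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# A stand-in "black box": any model exposing predict_proba would do.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]                        # the single decision we want to explain
rng = np.random.default_rng(0)
Z = x0 + rng.normal(scale=0.5, size=(1000, x0.size))  # local perturbations
p = black_box.predict_proba(Z)[:, 1]  # black-box scores at the perturbations

# Weight perturbations by proximity to x0, then fit an interpretable
# linear surrogate; its coefficients approximate local feature influence.
weights = np.exp(-np.linalg.norm(Z - x0, axis=1) ** 2)
surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=weights)

for i, coef in enumerate(surrogate.coef_):
    print(f"feature {i}: local contribution {coef:+.3f}")
```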


