Accepted Tutorials

  • Multi-Agent Reinforcement Learning and Dynamics of Learning – Daan Bloembergen (University of Liverpool)
  • Automated Security Analysis for Multi-agent Systems – Alessandro Bruni and Carsten Schürmann (IT University of Copenhagen)
  • Cooperative Games in Structured Environments – Georgios Chalkiadakis (Technical University of Crete), Gianluigi Greco (University of Calabria), Evangelos Markakis (Athens University of Economics and Business)
  • Constraints and Changes in Argumentation: State of the Art and Challenges of Argumentation Dynamics – Sylvie Doutre and Jean-Guy Mailly (Université Toulouse 1 Capitole – IRIT / Université Paris Descartes – LIPADE)
  • Game Theory – Ulle Endriss (University of Amsterdam)
  • Assignment Problems: From Classical Mechanism Design to Multi-Agent Systems – Aris Filos-Ratsikas (University of Oxford)
  • Introduction to Judgment Aggregation – Umberto Grandi (Université Toulouse 1 Capitole – IRIT)
  • Probabilistic Model Checking – Marta Kwiatkowska (University of Oxford)
  • Specification and Verification of Multi-Agent Systems – Wojciech Penczek and Michal Knapik (Polish Academy of Sciences)
  • Dynamic epistemic logic and its applications to plan/protocol synthesis – Sophie Pinchinat (Université de Rennes) and François Schwarzentruber (École normale supérieure de Rennes)
  • Modelling and Analysis of Cyber-Social Systems – Christian W. Probst (Technical University of Denmark)
  • Predicting Human Decision-Making: Tools of the Trade (EurAI lecture) – Ariel Rosenfeld (Bar-Ilan University)
  • Engineering Machine Ethics – Marija Slavkovik (University of Bergen)
  • Decision Making for Artificial Intelligence – Paolo Turrini (Imperial College London)
 
Assignment Problems: From Classical Mechanism Design to Multi-Agent Systems
Aris Filos-Ratsikas, University of Oxford
 
Abstract
In this tutorial, I will cover the basic concepts of assignment and matching, in which there is a set of agents with preferences over a set of items and the goal is to allocate the items to the agents in a way that satisfies some desired properties. I will focus mainly on the case of one-sided matching, where each agent receives exactly one item, and I will start from the classical literature in economics, in which the goal is to satisfy certain normative properties such as Pareto efficiency, strategyproofness, envy-freeness and so on. I will move on to present quantitative approaches introduced in the computer science literature, such as the maximization of the utilitarian or the egalitarian welfare in several preference domains of interest (unrestricted domains, metric spaces etc.). Finally, I will present more practical instances of assignment problems that are motivated by real-life applications, and I will discuss some more recent related results from the literature on multi-agent systems.
 

Topics Covered:

  • The classical literature on one-sided matching, originating from the influential paper by Hylland and Zeckhauser
    • I will talk about normative properties such as ex-ante and ex-post Pareto efficiency, ordinal efficiency, weak and strong strategyproofness, anonymity, neutrality, non-bossiness and envy-freeness.
    • I will cover the best-known mechanisms, i.e. Random Priority, Probabilistic Serial and the Pseudo-market mechanism, and explain which properties they satisfy and which ones they don’t.
    • I will talk about structural results, such as Svensson’s characterization, Zhou’s impossibility, the characterizations of Probabilistic Serial and ordinal efficiency and the more recent characterization of ordinal randomized strategyproof mechanisms.
  • Computational Results in Computer Science
    • I will talk about several computational results regarding those classical mechanisms, such as the complexity of manipulating Probabilistic Serial and the computational complexity of Random Serial Dictatorship.
  • Quantitative approaches in Computer Science
    • I will talk about the maximization of the utilitarian and the egalitarian welfare by several classes of matching mechanisms for unrestricted domains and metric spaces, presenting results from the relevant literature.
    • I will talk about the Price of Anarchy of one-sided matching mechanisms for the utilitarian objective
  • Assignment in practice
    • I will cover several real-life applications of assignment mechanisms such as school choice and kidney exchange programs
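To give a flavour of how ordinal one-sided matching mechanisms of this kind operate, here is a minimal Python sketch of Random Priority (Random Serial Dictatorship): draw a uniformly random order over the agents, and let each agent in turn take their most preferred item among those still available. The agent names and preference lists below are invented for the example.

```python
import random

def random_priority(preferences, rng=random):
    """Random Priority (Random Serial Dictatorship): shuffle the agents
    uniformly at random; each agent in turn takes their most preferred
    item among those still available."""
    agents = list(preferences)
    rng.shuffle(agents)
    available = set().union(*map(set, preferences.values()))
    assignment = {}
    for agent in agents:
        # each preference list is ordered from best to worst
        pick = next(item for item in preferences[agent] if item in available)
        assignment[agent] = pick
        available.remove(pick)
    return assignment

# Three agents with ordinal preferences over items a, b, c.
prefs = {1: ["a", "b", "c"], 2: ["a", "c", "b"], 3: ["b", "a", "c"]}
print(random_priority(prefs))
```

The mechanism is strategyproof but, as the tutorial discusses, trades this off against other properties such as ordinal efficiency.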
Level and Prerequisites
The tutorial should be most suitable for graduate students with ideally some basic understanding of the main concepts in game theory, but the latter is not a hard constraint. I think the tutorial could also be accessible to interested undergraduate students with a computer science background and some familiarity with combinatorics, probabilities and computational complexity.
 
Tutor’s Bio
Aris Filos-Ratsikas is a post-doctoral Research Assistant at the University of Oxford. He received his PhD in Theoretical Computer Science in August 2015 from the University of Aarhus, and his work is primarily on social choice, assignment problems, markets and fair division.
 
 
Automated Security Analysis for Multi-agent Systems
Alessandro Bruni and Carsten Schürmann, IT University of Copenhagen
 
Abstract
Multi-agent systems, like many distributed communicating systems, are potential targets for attackers. Unlike traditional computer networks, though, they must endure conditions that our personal computers and servers are not accustomed to: they must operate over relay networks that are not always available, sometimes under bandwidth and computational restrictions; they can be tampered with; and they must still be able to operate reliably at the global scale. Ensuring that they operate securely is therefore both very important and very challenging, as their security design often differs from that of traditional network systems. This tutorial session is intended as a crash course in automated security analysis techniques, with a special focus on the study of common patterns used in embedded and IoT protocols. We will cover the basic mechanisms of cryptographic systems and discuss how they can be composed to construct secure communicating systems; we will discuss how to reason about their security formally by introducing the symbolic Dolev-Yao security model, which is amenable to automated reasoning; finally, we will present some examples of security protocols and reason about them with tool support.
 

Topics Covered:

  • Overview of public and private key cryptography, hash functions and signature schemes.
  • Models for security: the symbolic (Dolev-Yao) and the computational (semantic) approaches.
  • Security properties: secrecy, authentication, indistinguishability.
  • Symbolic modelling of security protocols: representing security protocols as inference rules.
  • Automated reasoning about security in the symbolic model with tool support.
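To give a taste of the symbolic Dolev-Yao model, here is a minimal Python sketch (not any particular verification tool) of attacker deduction: the attacker's knowledge is closed under splitting pairs and decrypting with known keys, and a goal term is derivable if it can then be built up by pairing and encryption. The term encoding and the message names are invented for the example.

```python
def analyze(knowledge):
    """Close the attacker's knowledge under the Dolev-Yao decomposition
    rules: split pairs, and decrypt a ciphertext once its (symmetric)
    key is already known."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for term in list(known):
            new = set()
            if isinstance(term, tuple) and term[0] == "pair":
                new = {term[1], term[2]}
            elif isinstance(term, tuple) and term[0] == "enc" and term[2] in known:
                new = {term[1]}
            if not new <= known:
                known |= new
                changed = True
    return known

def derives(knowledge, goal):
    """Can the attacker construct `goal`?  Decompose everything first,
    then try to build the goal by pairing and encryption."""
    known = analyze(knowledge)
    def build(t):
        if t in known:
            return True
        if isinstance(t, tuple) and t[0] in ("pair", "enc"):
            return build(t[1]) and build(t[2])
        return False
    return build(goal)

# The attacker observes the ciphertext {m}_k and the pair (k, n):
# splitting the pair reveals the key k, which then decrypts m.
intercepted = {("enc", "m", "k"), ("pair", "k", "n")}
print(derives(intercepted, "m"))       # True
print(derives(intercepted, "secret"))  # False
```

Real tools work with far richer term algebras and unbounded sessions, which is where the hard decidability questions mentioned in the tutorial arise.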
Level and Prerequisites
This tutorial is intended to be at the practical, introductory level and assumes no prior knowledge of cryptographic schemes, nor experience in protocol design. However, prior basic knowledge of logic and proving techniques is required to understand the material. Some of the subtleties that make automated reasoning about security a hard problem will also be presented, to give the audience a taste of the challenges involved.
 
Tutor’s Bio
Alessandro Bruni is Assistant Professor at the IT University of Copenhagen focusing on formal methods applied to computer security, including logic and language based approaches to reasoning about security, and the construction of secure software systems. He received his Ph.D. at the Technical University of Denmark in 2015 with a thesis on the Analysis of Security Protocols in Embedded Systems. Teaching experience: Alessandro has taught security protocol analysis as part of the Critical Systems Seminar class at the IT-University of Copenhagen.
 
Carsten Schürmann is Associate Professor at the IT University of Copenhagen focusing on information security, cryptography, language based security, formal methods, and computational social choice. He leads the DemTech research project on trustworthy democratic technologies, an interdisciplinary research effort between computer science and social science that studies trust in digital elections. In the past he was a member of the computer science faculty at Yale University. In 2002, he received the NSF CAREER award. Teaching experience: Carsten has taught four summer schools: Programming Languages Summer school in Oregon, BRICS summer school Aarhus, TYPES summer school in Giens, and VTSA Saarbruecken.
 
 
Constraints and Changes in Argumentation: State of the Art and Challenges of Argumentation Dynamics
Sylvie Doutre, IRIT, Univ. Toulouse 1 Capitole
Jean-Guy Mailly, Univ. Paris Descartes – LIPADE
 
Abstract
Argumentation dynamics has become a hot topic in recent years in the argumentation community, with several applications in the context of multi-agent systems: negotiation, persuasion between agents and online debate platforms all benefit from advances on this topic. Argumentation dynamics also has strong connections with well-established reasoning research areas, such as belief change. At a high level, argumentation dynamics consists in taking into account some constraint on an argumentation system (e.g. some attack must be added, or some argument must be accepted) that an agent wants to enforce, and in modifying this argumentation system so that the constraint is satisfied. In this tutorial, we review the existing approaches to argumentation dynamics. We classify them depending on the kind of constraint which is considered, and on how the framework is modified to make the constraint satisfied. We show how this classification highlights open questions for future work on argumentation dynamics.
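To make the enforcement idea concrete, here is a minimal Python sketch: computing the grounded extension of an abstract argumentation framework, and then enforcing the acceptance of an argument by adding a new attacker. The framework and the change below are invented toy examples, not taken from any particular approach in the literature.

```python
def grounded_extension(args, attacks):
    """Grounded extension of an abstract argumentation framework
    (args, attacks): the least fixed point of the characteristic
    function F(S) = {a : every attacker of a is attacked by S}."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    S = set()
    while True:
        defended = {a for a in args
                    if all(attackers[b] & S for b in attackers[a])}
        if defended == S:
            return S
        S = defended

# a attacks b, b attacks c: the grounded extension accepts a and c.
args, atts = {"a", "b", "c"}, {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, atts)))  # ['a', 'c']

# Enforce the constraint "b must be accepted" by adding a new
# argument d that attacks a; b is now defended and becomes accepted.
print(sorted(grounded_extension(args | {"d"}, atts | {("d", "a")})))  # ['b', 'd']
```

The tutorial's typology concerns exactly such changes: which modifications (adding/removing arguments or attacks) can satisfy which kinds of constraints, and under which semantics.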
 

Topics Covered:

  • Short introduction to abstract argumentation.
  • Review of existing approaches for argumentation dynamics.
  • Typology of constraints and changes in argumentation.
  • Open questions and concluding remarks.
Level and Prerequisites
Knowledge of abstract argumentation and belief change is desirable but not mandatory. The tutorial is addressed to Master students, Ph.D. students and young researchers.
 
Tutor’s Bio
Sylvie Doutre received a Ph.D. in computer science in 2002 from the University of Toulouse (France), with a thesis entitled “On the preferred semantics of argumentation frameworks”. From June 2004 to August 2005, she worked as a postdoctoral researcher at the University of Liverpool (United Kingdom), in the “Agent Applications, Research and Technology” group. Since September 2005, she has been an Associate Professor at the University of Toulouse 1 and a member of the “Logic, Interaction, Language, and Computation” group at IRIT. Her research interests mainly concern argumentation-based reasoning, in particular its modelling and algorithmic aspects and its applications to agents’ interactions.
 
Jean-Guy Mailly studied computer science at the University of Artois, France, where he obtained his degrees. He prepared a Ph.D. on the topic “Dynamics of Argumentation Frameworks” at CRIL, University of Artois, and defended his thesis in September 2015. From October 2015 to August 2016, he worked as a Project Assistant in the Database and Artificial Intelligence Group at the Faculty of Informatics, TU Wien. Since September 2016, he has been an Assistant Professor at LIPADE, University of Paris Descartes, where he joined the Distributed Artificial Intelligence research team. His research focuses on abstract argumentation, belief change and belief merging.
 
 
Cooperative Games in Structured Environments
Georgios Chalkiadakis, Technical University of Crete
Gianluigi Greco, University of Calabria
Evangelos (Vangelis) Markakis, Athens University of Economics and Business
 
Abstract
The use of cooperative game theory to study how agents should cooperate and collaborate, along with the related topic of coalition formation, has received growing attention from the multiagent systems, game theory, and electronic commerce communities. The focus of much of the current work in this area has been on exploring methods by which agents can form coalitions so as to solve problems of joint interest, make group decisions, and distribute gains arising from such cooperation.
At the same time, in many real-world settings, the structure of the environment constrains the formation of coalitions among agents. These settings can be represented by coalitional games equipped with interaction graphs, that restrict agent interactions. An interaction graph determines the set of all feasible coalitions, in that a coalition C can form only if the subgraph induced over the nodes/agents in C is connected.
Such structure in the environment opens up possibilities for the efficient computation of key game-theoretic solution concepts, in conjunction with expressive game representation mechanisms. Given this, there has recently been much work on studying cooperation and cooperative games in such settings. The aim of this tutorial is to provide (a) an extensive review of these approaches, and (b) intuitions and technical insight on key results in this domain. Cooperation and cooperative games have always been at the very core of multiagent systems research. Papers on cooperative games have been prominent in past AAMAS, EUMAS, and sister conferences.
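The connectivity condition on feasible coalitions described above can be sketched in a few lines of Python; the adjacency-list graph encoding below is just one possible choice, invented for the example.

```python
def feasible(coalition, graph):
    """A coalition is feasible iff the subgraph induced by its members
    is connected: search (DFS) from any member, moving only through
    edges whose endpoints both lie in the coalition."""
    members = set(coalition)
    if not members:
        return True
    start = next(iter(members))
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for nb in graph.get(node, ()):
            if nb in members and nb not in seen:
                seen.add(nb)
                frontier.append(nb)
    return seen == members

# A path graph 1 - 2 - 3: {1, 3} is not feasible, {1, 2, 3} is.
g = {1: [2], 2: [1, 3], 3: [2]}
print(feasible({1, 3}, g))     # False
print(feasible({1, 2, 3}, g))  # True
```

Restricting attention to connected coalitions is precisely what makes some solution concepts tractable on suitably structured interaction graphs, as covered in the tutorial.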
 

Topics Covered:

  • Cooperative games and their computational aspects
    • basic models and concepts
    • basic solution concepts
    • representation schemes (induced subgraph games, marginal contribution nets)
    • key problems (e.g., coalition structure generation)
    • extensions and application domains
  • Cooperative games with restricted agent interactions (coalitional games over graphs)
    • interaction graph structures of interest
    • compact coalitional games
    • computing solution concepts over graphs: algorithms and complexity
    • coalition structure generation on coalitional games over graphs
    • related applications (e.g., cooperative task execution, calculating the cost of journeys in the “ridesharing” domain)
    • extensions (games with uncertainty, partition function games)
Level and Prerequisites
PhD or MSc level students; basic AI and complexity theory knowledge.
 
Tutor’s Bio
Georgios Chalkiadakis is an Associate Professor at the School of Electrical and Computer Engineering, Technical University of Crete (TUC). His research interests are in the areas of multiagent systems, algorithmic game theory, and learning; and more specifically, in coalition formation, decision making under uncertainty, and reinforcement learning in multiagent domains. His PhD thesis (University of Toronto, 2007) was nominated for the IFAAMAS-2007 Victor Lesser Distinguished Dissertation Award. Georgios is the co-author of the graduate level textbook “Computational Aspects of Cooperative Game Theory” (with Michael Wooldridge and Edith Elkind). Before joining TUC, he was a Research Fellow at the School of Electronics and Computer Science, University of Southampton, UK; and has also worked as a software engineer at the Institute of Computer Science of the Foundation for Research and Technology – Hellas, and as a teacher of informatics in Greek high schools. Georgios has served in the Organizing Committee of several workshops in top AI and MAS international conferences; and has organised and chaired two AI/MAS summer schools.
 
Gianluigi Greco is an Associate Professor of Computer Science at University of Calabria. He was a visiting PhD student at the Vienna University of Technology (2003) and a visiting researcher at the Oxford University Computing Laboratory (2006). He was a research associate at the ICAR institute of the Italian National Research Council (2011 and 2012); and Invited Professor at the LAMSADE Department of the University of Paris-Dauphine (2014). His research is focused on the theoretical foundations of Artificial Intelligence and Game Theory. In these areas, he has co-authored several publications in leading international conferences and journals. He has served as a program committee member in more than 50 international conferences and workshops. In 2008, his work “Pure Nash Equilibria: Hard and Easy Cases” received the IJCAI-JAIR Best Paper Award. In 2009, he received the Marco Somalvico Award by the Italian Association for Artificial Intelligence. In 2014, he received the Kurt Gödel Research Prize in the area of Logical Foundations of Artificial Intelligence, awarded by the Kurt Gödel Society.
 
Evangelos Markakis received his PhD in Computer Science from the Georgia Institute of Technology, Atlanta, USA, in 2005. Currently, he is an Assistant Professor at the Dept. of Informatics of the Athens University of Economics and Business (AUEB). He previously held appointments as a postdoctoral researcher at the University of Toronto and at the center for Mathematics and Computer Science (CWI, Amsterdam). His research interests lie in the areas of theoretical computer science, analysis of algorithms, algorithmic game theory, mechanism design, and their applications to the design of auctions and other e-commerce environments. He has co-authored several publications in leading international conferences and journals on theoretical computer science and algorithmic game theory. He has also served several times on program committees of international conferences, and was the PC co-chair of WINE ’15 (the 11th Conference on Web and Internet Economics). He is currently a management committee member of the EU COST action IC1205 on Computational Social Choice and served in the past as a member of the COST Action IC0602 on Algorithmic Decision Theory. He is also a member of the European Association of Theoretical Computer Science (EATCS) and has participated in basic research projects funded either by the EU or by the Greek State.
 
 
Decision Making for Artificial Intelligence
Paolo Turrini, Department of Computing, Imperial College London
 
Abstract
DeepMind’s AlphaGo, the Go-playing engine that defeated the World Go Champion Lee Sedol in March 2016, is one of the biggest achievements of Artificial Intelligence. In a nutshell, AlphaGo is a decision-making agent that takes decisions in an uncertain environment, exploring the potential consequences of its own choices using complex estimates of the world around it.
This course, based on Russell and Norvig’s classic AI textbook, is an introductory exploration of the most fundamental aspects of agent-based decision-making that are relevant to AI practice. Agents are mathematical entities which live in an unknown environment and are guided towards the realisation of personal objectives, e.g., reaching a given terminal state. An agent is endowed with a representation of the environment and an action repertoire at each state. The environment is unknown and stochastic, and evolves following rules that may also be unknown to the agent. The task is to make the best possible decision given the (incomplete) information available. This simple model is the basis of a number of important achievements in AI, and combines the use of logical reasoning, probabilistic reasoning and sequential decision-making.
 

Topics Covered:

The 3.5 hours will be structured into four parts, of roughly 50 minutes each.
In Part 1 I will introduce the concept of a decision-making agent, equipped with the capacity to make probabilistic inferences about the world around it. I will recall the basics of probabilistic reasoning, add a treatment of utilities and show how to incorporate them in decision-making, focussing on the principle of maximisation of expected utility, revisiting the examples and drawing conclusions on the best actions to take.
Part 2 will introduce sequential decision-making, focussing on stochastic actions, i.e., actions over which the agent has no full control. I will show how to calculate the expected utility of a plan, introducing the agent’s value of time (or patience), i.e., how much they value a reward today with respect to the same reward tomorrow.
Part 3 will introduce Markov Decision Processes (MDPs), a widely used framework for reasoning about decisions over time. In an MDP, agents take decisions over time, calculating the effect of policies (choices of stochastic actions at each decision point) given their patience. Properties of policies will be discussed, together with a method for finding an optimal one: the Value Iteration Algorithm.
Finally, Part 4 will introduce the very basics of reinforcement learning, discussing how the agent can use probabilistic inference to take decisions in an environment that can only be partially explored.
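As a taste of Part 3, the Value Iteration Algorithm can be sketched as follows. The MDP encoding (state-based rewards, explicit transition lists, a callable giving the actions available in each state) is a simplifying assumption made for this example, and the toy MDP itself is invented.

```python
def value_iteration(states, actions, transitions, reward, gamma=0.9, eps=1e-8):
    """Value Iteration for a small finite MDP.  transitions[s][a] is a
    list of (probability, next_state) pairs, reward[s] is the immediate
    reward in s, and gamma (the discount factor) plays the role of the
    agent's patience."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if not actions(s):               # terminal state
                v_new = reward[s]
            else:
                v_new = reward[s] + gamma * max(
                    sum(p * V[s2] for p, s2 in transitions[s][a])
                    for a in actions(s))
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < eps:
            return V

# Toy MDP: from s0 the single action "go" leads surely to the terminal
# state s1, which carries reward 1; with gamma = 0.9 the value of s0
# converges to 0.9 (tomorrow's reward, discounted once).
states = ["s0", "s1"]
acts = lambda s: ["go"] if s == "s0" else []
T = {"s0": {"go": [(1.0, "s1")]}}
R = {"s0": 0.0, "s1": 1.0}
V = value_iteration(states, acts, T, R, gamma=0.9)
print(V)  # {'s0': 0.9, 's1': 1.0}
```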
 
The main reference is the classic AI textbook by Russell and Norvig.
 

Level and Prerequisites
Introductory.

Tutor’s Bio
I received my PhD from Utrecht University in 2011, with a thesis on Logic and Games. In 2011 I moved to the Computer Science and Communication Unit of the University of Luxembourg, where I was awarded a COFUND Marie Curie fellowship, on a project on Cooperation and Multi Agent Systems. In 2013 I moved to the Department of Computing, Imperial College London, where I was awarded a Marie Curie IEF fellowship, on a project on norms and the regulation of Multi Agent Systems. In 2015 I was awarded an Imperial College Research Fellowship, on a project on negotiation and distributed Artificial Intelligence.
Overall, I have published 10 journal papers, 14 conference papers, five workshop papers and one book chapter in the field of Artificial Intelligence, together with a forthcoming entry on Logic and Game Theory in the Stanford Encyclopedia of Philosophy.
 
 
Dynamic epistemic logic and its applications to plan/protocol synthesis
Sophie Pinchinat, Université de Rennes 1
François Schwarzentruber, École normale supérieure de Rennes
 
Abstract

We introduce dynamic epistemic logic (DEL) and its natural extension with fix-points, and describe how it naturally captures the information change triggered by the occurrence of events in a multi-agent system. The usefulness of this logic is illustrated by many examples supported by a demo tool called “Hintikka’s World” (people.irisa.fr/Francois.Schwarzentruber/hintikkasworld/), which the students will also use. In particular, we will show how the so-called planning and protocol synthesis problems, widely investigated in the literature, can be expressed in DEL. We will also present a symbolic representation of DEL models. Then, the two classic decision problems of satisfiability and model-checking for DEL are investigated. (a) In a first stage, the fix-point-free fragment of the logic is considered. (b) In a second stage, the decidability of fix-point formulas for safety and reachability properties is investigated with two main tools. The first tool relies on the expressive power of DEL to describe the executions of powerful machines (such as Turing machines, pushdown automata and queue automata). The second tool stems from an accurate analysis of the infinite structure induced by the model-checked formula and its connections with well-known classes of infinite structures, typically the automatic structures of Blumensath and Grädel.
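The basic DEL update can be illustrated with a toy public announcement: a model is a set of worlds with a valuation and per-agent indistinguishability relations, and announcing a proposition deletes the worlds where it fails. The minimal Python encoding below is invented for this example and is far simpler than the models handled by Hintikka’s World.

```python
def knows(model, agent, world, prop):
    """The agent knows `prop` at `world` iff `prop` holds in every
    world the agent cannot distinguish from `world`."""
    worlds, val, indist = model
    return all(prop in val[v]
               for v in worlds
               if v == world or (world, v) in indist[agent])

def announce(model, prop):
    """Public announcement of `prop`: delete every world where it fails,
    restricting the indistinguishability relations accordingly."""
    worlds, val, indist = model
    keep = {w for w in worlds if prop in val[w]}
    new_indist = {a: {(u, v) for (u, v) in r if u in keep and v in keep}
                  for a, r in indist.items()}
    return keep, val, new_indist

# Two worlds: p holds in w1 but not in w2, and agent a cannot tell
# them apart, so a does not know p at w1 ...
M = ({"w1", "w2"},
     {"w1": {"p"}, "w2": set()},
     {"a": {("w1", "w2"), ("w2", "w1")}})
print(knows(M, "a", "w1", "p"))                 # False
# ... but after p is publicly announced, w2 is deleted and a knows p.
print(knows(announce(M, "p"), "a", "w1", "p"))  # True
```

Full DEL generalises this with event models describing arbitrary (possibly private) events, which is where the synthesis and decidability questions of the tutorial arise.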

Topics Covered:

  • Dynamic epistemic logic.
  • Decision problems for DEL without fix-points: model-checking and satisfiability.
  • Decision problems for DEL with fix-points: safety and reachability properties.
  • Dynamic epistemic structures, automatic structures.
  • Application to plan/protocol synthesis.
Level and Prerequisites
Prerequisites for attendees: basics in propositional and first-order logic, in finite-word automata theory, in computational complexity.
 
Tutor’s Bio
Sophie Pinchinat is a full professor at the University of Rennes 1 and a collaborator of ENS Rennes. She is the head of the research team LogicA (Logic and Applications) at the IRISA/INRIA laboratory. Her research interests range over logic, automata and games; imperfect information, knowledge and uncertainty; synthesis problems; and supervisory control theory. She has long-term experience in teaching logic, automata and games for the verification and synthesis of interacting systems. She serves on the editorial board of the international journal Discrete Event Dynamic Systems and on the Program Committees of several international conferences such as CSL, FSTTCS, TARK and GandALF. She also regularly reviews articles submitted to Studia Logica, Fundamenta Informaticae, Information and Computation and IEEE Transactions on Automatic Control. See http://people.irisa.fr/Sophie.Pinchinat/ for further details.
 
François Schwarzentruber is an associate professor at ENS Rennes (France). His current research interests are mainly focused on the theory and applications of logic to artificial intelligence, agency, multi-agent systems and computer science. He has been a PC member of several editions of conferences on these topics, such as AAMAS and IJCAI, and has reviewed for journals such as Synthese, Studia Logica and Theoretical Computer Science. Since 2011, his research has mainly focused on dynamic epistemic logic. See http://people.irisa.fr/Francois.Schwarzentruber/ for further details.
 
 
Engineering Machine Ethics
Marija Slavkovik, University of Bergen
 
Abstract
Machine ethics is a new, rapidly developing discipline in Artificial Intelligence (AI). It is concerned with various aspects of the problem of endowing AI systems with ethical reasoning capabilities. A lot of the early work in machine ethics focused on purely philosophical discussions, such as whether an AI system can ever truly be a moral agent. More recently we see the rise of attempts to actually develop limited but concrete methods that enable an intelligent agent to reason ethically. The discussions are also becoming more focused on practical issues, such as how to formalise, validate, verify and modify the ethicity of an artificial intelligent agent. Furthermore, broader questions of how to build regulatory structures that address (un)ethical machine behaviour are also tackled. The aim of this tutorial is to acquaint the participants with the state of the art in the field of machine ethics. The focus of the tutorial is engineering machine ethics: assuming an agreement on what behaviour is ethical for an artificial agent, how do we “translate” it into an algorithm for ethical reasoning? After this tutorial the participants are expected to be able to read and understand the literature in machine ethics and to have a grasp of the pertinent open questions in machine ethics research.
 

Topics Covered:

  • A very brief introduction to moral philosophy.
  • Introduction to machine ethics, motivation and scope.
  • Overview of the Moor classification of the types of artificial moral agents.
  • Top-down and bottom-up approaches to engineering ethical reasoning, with examples of implemented systems.
  • The broader context and impact of machine ethics.
  • The dark side of machine ethics, building deliberately unethical systems.
Level and Prerequisites
The target audience are graduate students and master students interested in doing machine ethics research in general and implementing ethical reasoning for artificial agents in particular. Some basic understanding of agent reasoning and machine learning is desirable but not necessary.
 
Tutor’s Bio
Marija Slavkovik has been a postdoctoral fellow at the University of Bergen since 2013, where she conducts research on collective reasoning and decision-making methods for artificial intelligence. Marija obtained her doctoral degree from the University of Luxembourg with a dissertation on “Judgment Aggregation for Multi-Agent Systems” in 2012, after which she spent a year as a postdoc in the Department of Computer Science at the University of Liverpool. It is there that she got involved with a machine ethics project, and she has since been very active in building a network of researchers interested in machine ethics. She is on the program committees of, among others, ECAI’17, IJCAI’17, AAAI’17 and LOFT’17. She was the student session chair of ESSLLI’10 and a coordinator of the Dagstuhl seminar 16222 on Judgment Aggregation for Artificial Intelligence.
 
 
Game Theory
Ulle Endriss, ILLC, University of Amsterdam
 
Abstract
This tutorial will introduce some of the key concepts of game theory and discuss how they may be used to model autonomous agents interacting with each other in a multiagent system. Topics to be covered include strategic games in normal form and mechanism design. Links to some of the more advanced tutorials on closely related topics will be provided.
 

Topics Covered:

  • Strategic Games
    • normal-form games, pure and mixed strategies
    • solution concepts, particularly Nash equilibrium
    • extensions and variants (e.g., congestion games, Bayesian games)
  • Mechanism Design
    • basic auction theory
    • Vickrey-Clarke-Groves mechanisms
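As a small illustration of the mechanism design part, here is a sketch of the second-price (Vickrey) sealed-bid auction, the single-item special case of the Vickrey-Clarke-Groves family: the highest bidder wins but pays the second-highest bid, which makes truthful bidding a dominant strategy. The bidder names are invented for the example.

```python
def vickrey(bids):
    """Second-price sealed-bid auction: rank bidders by bid, award the
    item to the highest bidder, and charge the second-highest bid
    (or 0 if there is no other bidder)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    payment = bids[ranked[1]] if len(ranked) > 1 else 0
    return winner, payment

print(vickrey({"ann": 10, "bob": 8, "eve": 5}))  # ('ann', 8)
```

Since the winner's payment does not depend on her own bid, overbidding or underbidding can never improve her utility, which is the key intuition behind the more general VCG mechanisms covered in the tutorial.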
Level and Prerequisites
This is an introductory tutorial that will be accessible to all EASSS participants.
 
Tutor’s Bio
Ulle Endriss is Associate Professor of Artificial Intelligence at the Institute for Logic, Language and Computation (ILLC) at the University of Amsterdam. His research concerns the design and analysis of methods for collective decision making. He regularly publishes in the leading conferences and journals in his field, received the Best Paper Award at AAMAS on two occasions, and is an editor of the Handbook of Computational Social Choice published in 2016. He currently serves as an elected member of the Board of Directors of IFAAMAS as well as an Associate Editor of the journals JAAMAS, AIJ and JAIR.
 
 
Introduction to Judgment Aggregation
Umberto Grandi, IRIT, University of Toulouse
 
Abstract
What happens when a group of agents needs to take a collective decision on multiple correlated issues? Or when they need to elect a committee respecting various constraints about the distribution of power? In these and similar cases the standard models of voting need to be enriched with more complex models of individual opinions. Judgment aggregation is a general setting in which individual agents express their views on sets of propositional formulas. In this tutorial I will present two frameworks for judgment aggregation and showcase the main computational challenges that arise in this setting.
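The kind of paradox the tutorial starts from can be illustrated with the classic discursive dilemma: issue-by-issue majority voting can yield a logically inconsistent collective judgment. A minimal Python sketch, with an invented three-judge profile:

```python
def majority(profile):
    """Issue-by-issue majority over a profile of individual judgment
    sets; each judgment set maps every issue to True or False."""
    n = len(profile)
    return {issue: 2 * sum(j[issue] for j in profile) > n
            for issue in profile[0]}

# Discursive dilemma: three judges vote on p, q and their conjunction.
profile = [
    {"p": True,  "q": True,  "p∧q": True},   # judge 1
    {"p": True,  "q": False, "p∧q": False},  # judge 2
    {"p": False, "q": True,  "p∧q": False},  # judge 3
]
out = majority(profile)
print(out)  # {'p': True, 'q': True, 'p∧q': False}
# The collective judgment is inconsistent: p and q are each accepted
# by a majority, yet their conjunction is rejected.
print(out["p∧q"] == (out["p"] and out["q"]))  # False
```

Each individual judgment set above is logically consistent; the inconsistency arises only at the collective level, which is exactly the phenomenon that binary aggregation with integrity constraints makes precise.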
 

Topics Covered:

  • Short introduction to computational social choice. Historical introduction to judgment aggregation: from legal theory, to social choice theory, to multi-agent systems
  • A unified setting for paradoxes of aggregation: binary aggregation with integrity constraints
  • Mini-tutorial on computational complexity (depending on the audience)
  • Computational complexity of winner determination for judgment aggregation rules
  • Impossibility results and winning coalitions in formula-based judgment aggregation
  • Strategic manipulation: where do preferences come from, individual strategic voting, external control and lobbying in multiple referenda
  • Overview of applications and related problems: collective annotation for corpora, crowdsourcing, multiagent argumentation, opinion diffusion on networks
Level and Prerequisites
No strict requirement, but this is an advanced course. A good mastery of mathematical proofs and formal modeling will be expected. Familiarity with basic notions in computational complexity (polynomial reductions, NP-completeness) and propositional logic (basic definitions) will be useful. A mini-tutorial on computational complexity will be provided during the first lecture.
 
Tutor’s Bio
Umberto Grandi is an assistant professor at the University of Toulouse 1 Capitole and, since 2014, a member of the research group in Logic, Interaction, Language and Computation at the Institut de Recherche en Informatique de Toulouse (IRIT). He obtained his PhD at the University of Amsterdam in 2012 under the supervision of Ulle Endriss, and joined IRIT after a two-year postdoc at the University of Padova. His research interests are in computational social choice, more precisely the aggregation of judgments on logically correlated issues, game-theoretic studies of collective choice, and opinion diffusion on social networks.
 
 
Modelling and Analysis of Cyber-Social Systems
Christian W. Probst, Technical University of Denmark
 
Abstract
Computer systems have evolved from standalone systems, over networked systems, to cyber-physical systems. In all stages, human operators have been essential for the functioning of the system and for understanding system messages. Recent trends make human actors an even more central part of computer systems, resulting in what we call “cyber-social systems”. In cyber-social systems, human actors and their interaction with a system are essential for the state of the system and its functioning. Both the system’s operation and the humans operating it rely on assumptions about each other’s behaviour. Consequently, an assessment of the state of a system must take the human actors and these interactions into account. However, human behaviour is difficult to model at best. In this tutorial, we explore the modelling and analysis of cyber-social systems, and how to account for the interaction between human actors and computer systems. We also investigate how behavioural models can be used to profile actor behaviour, either online or in simulations, to understand potential motivations and to test hypotheses.
 

Topics Covered:

  • Socio-technical systems.
  • Modelling human behaviour.
  • Modelling cyber-social systems.
  • Analysing cyber-social systems.
Level and Prerequisites
MSc and PhD students.
 
Tutor’s Bio
Christian W. Probst is an associate professor and head of the section for Cyber Security at the Department of Applied Mathematics and Computer Science at the Technical University of Denmark. Christian’s research interests are in the field of IT and organisational security, and he has been the technical lead of the TREsPASS project. He has developed the ExASyM model for socio-technical systems, which enables risk assessment of these systems and, e.g., the identification of insider threats. Christian’s current research investigates risk assessment for cyber-social systems, where human actors are an integral part of the system.
 
 
Multi-Agent Reinforcement Learning and Dynamics of Learning
Daan Bloembergen, University of Liverpool
 
Abstract
In this tutorial we will discuss how reinforcement learning can be applied in a multi-agent setting (MARL). The presence of other learning agents gives rise to partial observability and a non-stationary environment, violating the assumptions on which traditional RL algorithms are based. These key challenges motivate the need for a new theoretical framework to study MARL. In particular, we discuss Markov games as a multi-agent extension of MDPs, as well as three approaches to learning in this extended setting: independent learning, joint action learning, and gradient-based methods. In addition, we discuss evolutionary game theory (EGT) as a framework within which to study multi-agent learning dynamics formally. We discuss (normal form) games, best response sets, Nash equilibria, Pareto optimality, replicator dynamics, and evolutionarily stable strategies. The replicator dynamics are formally linked to MARL and form the basis of the evolutionary framework for studying learning dynamics of MARL. Several examples of the application of this framework to study MARL are presented.
 

Topics Covered:

  • Reinforcement learning: Markov decision processes, value iteration, temporal difference methods, Q-learning, learning automata.
  • Multi-agent learning: Markov games, independent learning, joint-action learning, gradient ascent approaches.
  • Multi-agent learning dynamics: normal-form games, Nash equilibrium, evolutionary game theory, replicator dynamics, evolutionarily stable strategies.
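The replicator dynamics listed above can be sketched numerically. The following is an illustrative example, not tutorial material: single-population replicator dynamics for the Prisoner’s Dilemma, integrated with a simple Euler scheme (payoff values and step size are chosen for illustration).

```python
import numpy as np

# Single-population replicator dynamics for a symmetric 2x2 game,
# here the Prisoner's Dilemma. x[i] is the population share playing
# strategy i; a share grows in proportion to how much its fitness
# exceeds the population average:
#   dx_i/dt = x_i * ((A x)_i - x^T A x)

A = np.array([[3.0, 0.0],   # row payoffs for Cooperate vs (C, D)
              [5.0, 1.0]])  # row payoffs for Defect vs (C, D)

def replicator_step(x, dt=0.01):
    fitness = A @ x          # fitness of each pure strategy
    avg = x @ fitness        # average population fitness
    return x + dt * x * (fitness - avg)

x = np.array([0.9, 0.1])     # start with 90% cooperators
for _ in range(5000):
    x = replicator_step(x)
    x = x / x.sum()          # guard against numerical drift

# Defection strictly dominates, so the dynamics converge to all-defect.
print(x)  # approximately [0, 1]
```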
Level and Prerequisites
The tutorial is aimed at the postgraduate level, either Master or PhD. Basic knowledge of linear algebra and calculus is assumed. An introduction to reinforcement learning and (evolutionary) game theory is included, although background knowledge on these topics will help. The tutorial is of interest to students and early career researchers who are interested in applying reinforcement learning in a multi-agent setting, and wish to study and understand the resulting learning dynamics. In addition, the tutorial will be suited to anyone wishing to get an up-to-date overview of the field of multi-agent reinforcement learning.
 
Tutor’s Bio
Daan Bloembergen is a postdoctoral research associate at the University of Liverpool, UK. He graduated from Maastricht University (NL) with an M.Sc. in Artificial Intelligence in 2010, earning the honour cum laude. He received his Ph.D. in May 2015, also from Maastricht University, where he worked on the project “Analyzing Multiagent Learning”, funded by the NWO (Netherlands Organisation for Scientific Research). His current research interests are the learning dynamics of multi-agent systems in general, and multi-agent reinforcement learning and its relation to evolutionary game theory in particular. He has published on these topics in several journals and conferences, including JAIR, AAMAS, ECAI, and AAAI.
 
 
Predicting Human Decision-Making: Tools of the Trade (EurAI lecture)
Ariel Rosenfeld, Department of Computer Science, Bar-Ilan University, Israel
 
Abstract
Human decision-making often transcends our formal models of “rationality”. Designing intelligent agents that interact proficiently with people necessitates the modeling of human behavior and the prediction of their decisions. In this 3.5 hour tutorial, we will focus on the prediction of human decision-making and its use in designing intelligent human-aware automated agents of varying natures; from purely conflicting interaction settings (e.g., security and games) to fully cooperative interaction settings (e.g., advice provision, human rehabilitation). We will present computational representations, algorithms and empirical methodologies for meeting the challenges that arise from the above tasks in both a single interaction (one-shot) and repeated interaction settings. The tutorial will also review recent advances, current challenges and future directions for the field.
 

Topics Covered:

  • Motivation and examples: Agents for human rehabilitation, human-robot interaction, automated advice provision, argumentation, security, negotiation etc.
  • The basics of human decision-making: decision theory and game theory, bounded rationality, historical and contemporary computational models of human behavior from behavioral economics, cognitive psychology and AI. Experimental evidence on human decision-making and behavior, and how they differ from what we usually consider to be rational.
  • Tools of the trade: Normative vs. descriptive approaches for predicting human decision-making, computational models and the integration of normative theories from different disciplines (e.g., social science and economics) to enhance classic prediction methods.
  • From prediction to action: combining human decision-making prediction methods in the design of intelligent agents that interact proficiently with people. Frameworks, methodologies and applications to security, games, argumentation, advice provision, rehabilitation, human-robot interaction, personal assistants and negotiation.
  • Additional topics and challenges: implicit vs. explicit interaction settings, enhancing prediction capabilities using additional modalities (e.g., facial expressions), transfer learning of decision policies across domains and people, the complexity of acquiring (reliable) human data, minority cases in human-generated data.
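To give a flavour of the descriptive models discussed, one widely used account of bounded rationality is the quantal (softmax) response model, which predicts choice probabilities from utilities. The sketch below is illustrative only; the tutorial itself may use different models and parameterisations.

```python
import numpy as np

# Quantal (softmax) response: instead of always best-responding, a
# decision maker picks action a with probability proportional to
# exp(lam * u(a)). As lam -> infinity this recovers the rational best
# response; lam = 0 gives uniform random choice.

def quantal_response(utilities, lam=1.0):
    u = np.asarray(utilities, dtype=float)
    z = np.exp(lam * (u - u.max()))   # subtract max for numerical stability
    return z / z.sum()

print(quantal_response([1.0, 2.0, 0.0], lam=1.0))
# Higher-utility actions receive higher, but not all, probability mass.
```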
Tutor’s Bio
Ariel Rosenfeld is currently completing his PhD in Computer Science at Bar-Ilan University, Israel. He obtained a BSc in Computer Science and Economics, graduating ‘magna cum laude’, from Tel-Aviv University, Israel. Rosenfeld’s research focus is Human-Agent Interaction, and he has published on the topic at top venues such as AAAI, IJCAI, AAMAS and ECAI. Rosenfeld has a rich lecturing background, spanning over a decade, and is currently a lecturer at Bar-Ilan University and at Ben-Gurion University, Israel.
 
 
Probabilistic Model Checking
Marta Kwiatkowska, Department of Computer Science, University of Oxford
 
Abstract
Probabilistic model checking is a formal verification technique for the automated analysis of systems that exhibit stochastic behaviour. Such behaviour may be due to, for example, component failure, randomisation, or uncertainty about the frequency of event occurrences. The models typically constitute probabilistic extensions of transition systems (e.g. Markov chains, Markov decision processes, probabilistic timed automata and stochastic games) which can be obtained from a variety of modelling notations, including imperative programs with random assignment (such as C programs extended with assignment of the value obtained by tossing a fair coin). Properties are specified in probabilistic extensions of temporal logic (CTL, LTL and ATL), and verification reduces to computing the probability or expectation of a property being satisfied. For systems with nondeterministic choice, one can also construct optimal controllers that ensure the satisfaction of an objective specified by multiple properties. The techniques have been implemented in PRISM (www.prismmodelchecker.org) and enable a range of quantitative analyses of probabilistic models against specifications such as the worst-case probability of failure within 10 seconds or the minimum expected power consumption over all possible schedulings, as well as synthesis of strategies that simultaneously minimise energy consumption and maximise the probability of termination.
 

Topics Covered:

  • Discrete-time Markov chains. Probabilistic logic PCTL with rewards. PCTL model checking for Markov chains. Case study: Bluetooth protocol.
  • Markov decision processes (MDPs), PCTL model checking for MDPs. Strategy synthesis from PCTL. Case study: Firewire protocol.
  • Automata-theoretic techniques for model checking. Büchi and Rabin automata. LTL model checking for DTMCs and MDPs.
  • Multiobjective LTL model checking for MDPs. Multiobjective strategy synthesis. Stochastic games. Case study: Robot planning.
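A core computation behind the analyses above is the probability of eventually reaching a target set in a discrete-time Markov chain. The sketch below is an illustrative value-iteration example, not PRISM code; the model and function names are made up for this example.

```python
import numpy as np

# Reachability probabilities in a DTMC satisfy
#   p(s) = 1                      for target states,
#   p(s) = sum_t P[s,t] * p(t)    otherwise,
# which we approximate here by value iteration.

P = np.array([             # transition matrix of a toy 4-state DTMC
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 1.0, 0.0],  # state 2: absorbing "goal" state
    [0.0, 0.0, 0.0, 1.0],  # state 3: absorbing "failure" state
])
target = {2}

def reach_prob(P, target, iters=1000):
    n = len(P)
    p = np.array([1.0 if s in target else 0.0 for s in range(n)])
    for _ in range(iters):
        new = P @ p
        for s in target:       # target states keep probability 1
            new[s] = 1.0
        p = new
    return p

# From state 0 the goal is reached with probability 0.5.
print(reach_prob(P, target))  # approximately [0.5, 0.5, 1.0, 0.0]
```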
 
 
Specification and Verification of Multi-Agent Systems
Wojciech Penczek, Polish Academy of Sciences and University of Podlasie, Poland
Michał Knapik, Polish Academy of Sciences, Poland
 
Abstract
In this course, we present some important developments in the area of logic-based verification for multi-agent systems (MAS). We start with an introduction to modal logics of time and knowledge, which provide the conceptual foundation. Then, we present specification languages, verification techniques, and complexity results for model checking of epistemic, temporal, and strategic logics, plus some of their combinations. Due to time constraints, we focus mainly on standard model checking in explicit models.
The course aims at enhancing students’ knowledge and skills in connecting logical theories of computational systems with practical applications.
 

Topics Covered:

  • Introduction to specification and verification of MAS
    • Modal logic: modal operators, possible worlds semantics. Modal logic as a generic framework to model and reason about MAS. Reasoning about knowledge: basic multi-agent epistemic logic. Common knowledge and distributed knowledge. Decision problems. Formal verification by model checking. Some complexity classes. Model checking epistemic properties. Example domains: robots on a rescue mission, security of e-voting.
  • Reasoning about dynamics of systems
    • Temporal logic. Linear vs. branching time. Linear time logic: LTL. Branching-time logic: CTL. Specification of dynamics: robots, voting. Combining modalities: logics of knowledge and time. CTL+Knowledge. Standard non-symbolic algorithms of model checking and basic complexity results.
  • Verification of strategic abilities. Tools and experimental results.
    • Temporal logic meets game theory: alternating-time temporal logic ATL. Model checking ATL for explicit models. Tools: VERICS, MCMAS.
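As a small illustration of explicit-state model checking, the CTL formula EF goal can be checked by a backward fixpoint over the transition relation. This is a sketch with made-up model and function names, not code from VERICS or MCMAS.

```python
# Labelling algorithm for EF goal: compute, as a least fixpoint, the set
# of states from which some path reaches a state satisfying "goal",
# using EF g = g | EX EF g.

def check_EF(trans, goal):
    """trans: set of (s, t) edges; goal: set of states satisfying the atom."""
    sat = set(goal)
    changed = True
    while changed:
        changed = False
        for (s, t) in trans:
            # s gets labelled as soon as one successor t is labelled
            if t in sat and s not in sat:
                sat.add(s)
                changed = True
    return sat

# Toy model: path 0 -> 1 -> 2 reaches the goal; state 3 is a sink that
# never does.
trans = {(0, 1), (1, 2), (2, 2), (3, 3)}
print(check_EF(trans, goal={2}))  # {0, 1, 2}
```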
Level and Prerequisites
This is an introductory course and the participants are only expected to have basic background in logic. Basic knowledge of modal logic will be an advantage but it is not necessary.
 
Tutor’s Bio
Wojciech Penczek is a full professor and the head of the Theory of Distributed Systems Group at the Polish Academy of Sciences. He was a research fellow at the Technical University of Eindhoven and Manchester University; he also worked as a consultant at AT&T. His research is now focused on verification methods for (timed) distributed and multi-agent systems. He has been the project leader of the EC-funded project CRIT-2 and has played a main role in several national projects. He has chaired, and served as a PC member for, many conferences on Computer Science. He is the author of more than 100 conference and journal articles on temporal logic, verification methods, model checking, and formal theories of multi-agent systems. His teaching record, besides numerous regular undergraduate courses, includes one course at ESSLLI (2010) and two courses at EASSS (European Agent Systems Summer School).
 
Michał Knapik is a postdoctoral researcher at the Polish Academy of Sciences. His research is focused on modeling, specification, and verification of multi-agent systems. His teaching experience includes undergraduate courses at the Polish-Japanese IT Academy in Warsaw, Poland.