News

  • 08 May 2020: The ALA presentations are now available on Underline!
  • 4 May 2020: The preliminary program for the live events is now online! The ALA live sessions will be streamed on Twitch!
  • 27 April 2020: We are happy to announce our invited speakers for this year, Diederik M. Roijers and Jakob Foerster!
  • 27 April 2020: We invite all authors and participants to join our Slack workspace! Check out the program for more details.
  • 15 April 2020: We are happy to announce that ALA will take place this year as a virtual workshop. The content will consist of a mix of pre-recorded contributions together with live Q&A sessions and invited speaker presentations.
  • 18 March 2020: AAMAS has decided to move to a virtual-only conference this year. We will delay the paper notifications until we receive further news and instructions on how AAMAS decides to organise the workshops under these circumstances. We will make sure to provide all the information before 1 April.
  • 26 February 2020: Submissions are now closed. We received 45 submissions this year!
  • 5 February 2020: The submission deadline has been extended to 24 February 2020 23:59 UTC!
  • 4 February 2020: Program Committee members added
  • 21 January 2020: We recommend that authors also append the reviews received for their AAMAS submissions
  • 21 November 2019: ALA 2020 site launched

ALA 2020 - Workshop at AAMAS 2020

Adaptive and Learning Agents (ALA) encompasses diverse fields such as Computer Science, Software Engineering, Biology, as well as Cognitive and Social Sciences. The ALA workshop will focus on agents and multi-agent systems that employ learning or adaptation.

This workshop is a continuation of the long-running AAMAS series of workshops on adaptive agents, now in its twelfth year. Previous editions of this workshop may be found at the following URLs:

The goal of this workshop is to increase awareness of and interest in adaptive agent research, encourage collaboration and give a representative overview of current research in the area of adaptive and learning agents and multi-agent systems. It aims at bringing together not only scientists from different areas of computer science (e.g. agent architectures, reinforcement learning, evolutionary algorithms) but also from different fields studying similar concepts (e.g. game theory, bio-inspired control, mechanism design).

The workshop will serve as an inclusive forum for the discussion of ongoing or completed work covering both theoretical and practical aspects of adaptive and learning agents and multi-agent systems.

This workshop will focus on all aspects of adaptive and learning agents and multi-agent systems, with a particular emphasis on how to modify established learning techniques and/or create new learning paradigms to address the many challenges presented by complex real-world problems. The topics of interest include, but are not limited to:

  • Novel combinations of reinforcement and supervised learning approaches
  • Integrated learning approaches that work with other agent reasoning modules like negotiation, trust models, coordination, etc.
  • Supervised multi-agent learning
  • Reinforcement learning (single- and multi-agent)
  • Novel deep learning approaches for adaptive single- and multi-agent systems
  • Multi-objective optimisation in single- and multi-agent systems
  • Planning (single- and multi-agent)
  • Reasoning (single- and multi-agent)
  • Distributed learning
  • Adaptation and learning in dynamic environments
  • Evolution of agents in complex environments
  • Co-evolution of agents in a multi-agent setting
  • Cooperative exploration and learning to cooperate and collaborate
  • Learning trust and reputation
  • Communication restrictions and their impact on multi-agent coordination
  • Design of reward structure and fitness measures for coordination
  • Scaling learning techniques to large systems of learning and adaptive agents
  • Emergent behaviour in adaptive multi-agent systems
  • Game theoretical analysis of adaptive multi-agent systems
  • Neuro-control in multi-agent systems
  • Bio-inspired multi-agent systems
  • Applications of adaptive and learning agents and multi-agent systems to real world complex systems

Extended and revised versions of papers presented at the workshop will be eligible for inclusion in a journal special issue (see below).

Important Dates

  • Submission Deadline: 24 February 2020, 23:59 UTC (extended from 10 February 2020)
  • Notification of acceptance: 10 March 2020
  • Note: AAMAS has decided to move to a virtual-only conference this year. Paper notifications will be delayed until we receive further instructions on how AAMAS will organise the workshops under these circumstances; we will provide all the information before 1 April.

  • Camera-ready copies: 20 April 2020 (extended from 24 March 2020)
  • Workshop: 9 - 10 May 2020

Submission Details

Papers can be submitted through EasyChair.

We invite submissions of original work, up to 8 pages in length (excluding references), in the ACM proceedings format (i.e. following the AAMAS formatting instructions). This includes work that has been accepted as a poster/extended abstract at AAMAS 2020. Additionally, we welcome submissions of preliminary results (i.e. work-in-progress) as well as visionary outlook papers that lay out directions for future research in a specific area, both up to 6 pages in length; shorter papers are very much welcome and will not be judged differently. Finally, we also accept recently published journal papers in the form of a 2-page abstract.
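For authors new to the required format, a skeleton along the following lines can serve as a starting point. This is only a sketch assuming the standard ACM acmart class; all titles, names, and file names are placeholders, and the official AAMAS formatting instructions (and any template they provide) take precedence:

```latex
% Minimal sketch of an ACM-proceedings-style submission.
% Assumes the standard acmart class; the AAMAS formatting
% instructions may mandate a specific template or options.
\documentclass[sigconf]{acmart}

\begin{document}

\title{Your Paper Title}
\author{First Author}
\affiliation{%
  \institution{Your Institution}
  \country{Your Country}}

% In acmart, the abstract environment must appear before \maketitle.
\begin{abstract}
A short abstract of the contribution.
\end{abstract}

\maketitle

\section{Introduction}
Body text, up to 8 pages excluding references.

\bibliographystyle{ACM-Reference-Format}
% \bibliography{references}  % uncomment once you have a .bib file

\end{document}
```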

Furthermore, for submissions that were rejected or accepted as extended abstracts at AAMAS, we encourage authors to also append the received reviews. This is simply a recommendation and is optional. Authors may also include a short note or a list of the changes made to the paper. The reviews can be appended at the end of the submission file and do not count towards the page limit.

All submissions will be peer-reviewed (single-blind). Accepted work will be allocated time for poster and possibly oral presentation during the workshop. Extended versions of original papers presented at the workshop will also be eligible for inclusion in a post-proceedings journal special issue.

Journal Special Issue

We are delighted to announce that extended versions of all original contributions at ALA 2020 will be eligible for inclusion in a special issue of the Springer journal Neural Computing and Applications (Impact Factor 4.213). The deadline for submitting extended papers will be 15 September 2020.


We will post further details about the submission process and expected publication timeline here after the workshop.

Program

Except for the invited talks, ALA will take place in an asynchronous manner. In order to facilitate discussions over all the contributions as well as social interactions, we invite all authors and participants to join our Slack workspace.

To organise discussions, we ask authors and participants to create channels named paper-#, where # is the unique number assigned to each contribution below.

The ALA live sessions will be streamed on Twitch.

Saturday 9 May

15:45 - 16:00 UTC Welcome & Opening Remarks
16:00 - 17:00 UTC Invited Talk: Jakob N. Foerster
Self-Play and Zero-Shot Coordination in Hanabi
17:00 - 18:00 UTC Discussion Panel
Topic: Building an AI syllabus
Chair: Diederik M. Roijers (HU University of Applied Sciences Utrecht, Vrije Universiteit Brussel)
Panelists:

Sunday 10 May

09:00 - 10:00 UTC Invited Talk: Diederik M. Roijers
Multi-objective decision making: why, how, and what now?
10:00 - 10:30 UTC Awards, closing remarks and ALA 2021
Best Paper Award:
Silviu Pitis, Harris Chan, Stephen Zhao, Bradly Stadie and Jimmy Ba,
Maximum Entropy Gain Exploration for Long Horizon Multi-goal Reinforcement Learning

Accepted Papers

Long Talks

  • Paper 13: Ganesh Ghalme, Swapnil Dhamal, Shweta Jain, Sujit Gujar and Y Narahari, "Ballooning Multi-Armed Bandits"
  • Paper 17: Lisa Torrey, "Reinforcement Learning via Reasoning from Demonstration"
  • Paper 18: Daniel Willemsen, Hendrik Baier and Michael Kaisers, "Value targets in off-policy AlphaZero: a new greedy backup"
  • Paper 19: Pieter Libin, Arno Moonens, Timothy Verstraeten, Fabian Perez-Sanjines, Niel Hens, Philippe Lemey and Ann Nowé, "Deep reinforcement learning for large-scale epidemic control"
  • Paper 20: Timothy Verstraeten, Eugenio Bargiacchi, Pieter Libin, Jan Helsen, Diederik Roijers and Ann Nowé, "Thompson Sampling for Loosely-Coupled Multi-Agent Systems: An Application to Wind Farm Control"
  • Paper 23: João Vitor de Oliveira Barbosa, Francisco C. Santos, Francisco S. Melo, Anna Helena Reali Costa and Jaime Simão Sichman, "Emergence of Cooperation in N-Person Dilemmas through Actor-Critic Reinforcement Learning"
  • Paper 24: Panayiotis Danassis and Boi Faltings, "Learning to Persist or Switch: Efficient and Fair Allocations in Large-scale Multi-agent Systems"
  • Paper 25: Silviu Pitis, Harris Chan, Stephen Zhao, Bradly Stadie and Jimmy Ba, "Maximum Entropy Gain Exploration for Long Horizon Multi-goal Reinforcement Learning"
  • Paper 28: Peter Vamplew, Cameron Foale and Richard Dazeley, "A Demonstration of Issues with Value-Based Multiobjective Reinforcement Learning Under Stochastic State Transitions"
  • Paper 38: Grigory Neustroev, Canmanie Ponnambalam, Mathijs de Weerdt and Matthijs Spaan, "Interval Q-Learning: Balancing Deep and Wide Exploration"
  • Paper 41: Yunshu Du, Garrett Warnell, Assefaw Gebremedhin, Peter Stone and Matthew E. Taylor, "Work-in-progress: Corrected Self Imitation learning via Demonstrations"
  • Paper 45: Aly Ibrahim, Anirudha Jitani, Daoud Piracha and Doina Precup, "Reward Redistribution Mechanisms in Multi-agent Reinforcement Learning"

Short Talks

  • Paper 1: Abhik Singla, Sindhu Padakandla and Shalabh Bhatnagar, "Memory-based Deep Reinforcement Learning Method for Obstacle Avoidance in UAV"
  • Paper 5: Hardik Meisheri, Vinita Baniwal, Nazneen N Sultana, Balaraman Ravindran and Harshad Khadilkar, "Using Reinforcement Learning for a Large Variable-Dimensional Inventory Management Problem"
  • Paper 8: Zerong Xi and Gita Sukthankar, "Learning Correlation Functions on Mixed Data Sequences for Computer Architecture Applications"
  • Paper 14: Swapnil Dhamal, Walid Ben-Ameur, Tijani Chahed and Eitan Altman, "A Two Phase Investment Game for Competitive Opinion Dynamics in Social Networks"
  • Paper 22: Xiangyu Liu and Ying Tan, "Feudal Latent Space Exploration for Coordinated Multi-agent Reinforcement Learning"
  • Paper 27: Jiachen Yang, Ang Li, Mehrdad Farajtabar, Peter Sunehag, Edward Hughes and Hongyuan Zha, "Learning to Incentivize Other Learning Agents"
  • Paper 29: Rohit Prasad, Harshad Khadilkar and Shivaram Kalyanakrishnan, "Optimising a Real-time Scheduler for Railway Lines using Policy Search"
  • Paper 30: Paniz Behboudian, Yash Satsangi, Matthew Taylor, Anna Harutyunyan and Michael Bowling, "Useful Policy Invariant Shaping from Arbitrary Advice"
  • Paper 32: Yijie Zhang, Roxana Radulescu, Patrick Mannion, Diederik M. Roijers and Ann Nowé, "Opponent Modelling using Policy Reconstruction for Multi-Objective Normal Form Games"
  • Paper 33: Abhinav Gupta, Agnieszka Słowik, William L. Hamilton, Mateja Jamnik, Sean B. Holden and Christopher Pal, "Analyzing structural priors in multi-agent communication"
  • Paper 34: Hang Xu, Ridhima Bector and Zinovi Rabinovich, "Teaching Multiple Learning Agents by Environment-Dynamics Tweaks"
  • Paper 39: Michael Sullins and Ian Kash, "Increased Optimism in Multi-Agent Policy Gradients"
  • Paper 43: Finbarr Timbers, Edward Lockhart, Martin Schmid, Marc Lanctot and Michael Bowling, "Approximate exploitability: Learning a best response in large games"
  • Paper 44: Arjun Manoharan, Rahul Ramesh and Balaraman Ravindran, "Option Encoder: A Framework for Discovering a Policy Basis in Reinforcement Learning"

Spotlight

  • Paper 3: Budi Kurniawan, Peter Vamplew, Michael Papasimeon, Richard Dazeley and Cameron Foale, "Discrete-to-Deep Supervised Policy Learning: An effective training method for neural reinforcement learning"
  • Paper 9: Saloni Laddha and Shrisha Rao, "Dynamic Interactions by Strong Influencers in Social Networks Using Opinion Propagation"
  • Paper 11: Shripad Salsingikar and Narayan Rangaraj, "Reinforcement Learning for Train Movement Planning at Railway Stations"
  • Paper 15: Wolfram Barfuss, "Infinite population evolutionary dynamics match infinite memory reinforcement learning dynamics"
  • Paper 16: Craig Sherstan, Bilal Kartal, Pablo Hernandez-Leal and Matthew E. Taylor, "Work in Progress: Temporally Extended Auxiliary Tasks"
  • Paper 31: Conor F Hayes, Enda Howley and Patrick Mannion, "Dynamic Thresholded Lexicographic Ordering"
  • Paper 35: Tapan Shah, "State Aware Principal Action Space Embedding for Centralized MARL"
  • Paper 36: Thomy Phan, Lenz Belzner, Kyrill Schmid, Thomas Gabor, Fabian Ritz, Sebastian Feld and Claudia Linnhoff-Popien, "A Distributed Policy Iteration Scheme for Cooperative Multi-Agent Policy Approximation"

Invited Talks

Diederik M. Roijers

Affiliation: HU University of Applied Sciences Utrecht, Vrije Universiteit Brussel

Website: http://roijers.info

Talk Title: Multi-objective decision making: why, how, and what now?

Abstract: In his invited talk at ALA, Diederik will discuss multi-objective decision making. First, he will focus on why one should model decision problems as explicitly multi-objective, from a practical, mathematical, and ethical perspective. Second, he will share some lessons learned from his work on multi-objective problems, in the form of tips and tricks for getting started with multiple objectives. To conclude, he will discuss the potential of, and what he believes are the major open problems in, multi-objective decision-making research.

Bio: Diederik M. Roijers believes that real-world decision problems have multiple objectives (and he believes that he is not alone in thinking so [1,2]). He has therefore dedicated a large part of his scientific career to investigating multi-objective models and methods for planning and reinforcement learning. He obtained his PhD at the University of Amsterdam with a thesis on "Multi-Objective Decision-Theoretic Planning", investigated social robots with social and task objectives at the University of Oxford, and studied multi-objective reinforcement learning at the Vrije Universiteit Brussel. After being an assistant professor at the VU, he became, and still is, a senior lecturer in Technical Computer Science and a researcher in the Microsystems Technology group at HU University of Applied Sciences Utrecht, as well as a member of the AI research group at the Vrije Universiteit Brussel. More about Diederik, and more importantly his research, can be found at his website: http://roijers.info

[1] Bryce, Daniel, William Cushing, and Subbarao Kambhampati. "Probabilistic planning is multi-objective." Arizona State University, Tech. Rep. ASU-CSE-07-006 (2007).

[2] Vamplew, P., Dazeley, R., Foale, C., Firmin, S., & Mummery, J. (2018). Human-aligned artificial intelligence is a multiobjective problem. Ethics and Information Technology, 20(1), 27-40.

Jakob N. Foerster

Affiliation: Facebook AI Research / University of Toronto & Vector Institute (incoming)

Website: https://www.jakobfoerster.com

Talk Title: Self-Play and Zero-Shot Coordination in Hanabi

Abstract: In recent years we have seen fast progress on a number of zero-sum benchmark problems in AI, e.g. Go, Poker and Dota. In contrast, success in the real world requires humans to collaborate and communicate with others, in settings that are, at least partially, cooperative. Recently, the card game Hanabi has been established as a new benchmark environment to fill this gap. In particular, Hanabi is interesting to humans since it is entirely focused on theory of mind, i.e., the ability to reason over the intentions, beliefs and point of view of other agents when observing their actions. This is particularly important in applications such as communication, assistive technologies and autonomous driving. In this talk we provide an update on recent progress in this area. We start out with novel state-of-the-art methods for the self-play setting. Next, we introduce the Zero-Shot Coordination setting as a new frontier for multi-agent research and, finally, introduce Other-Play as a novel learning algorithm which allows agents to coordinate ad-hoc and biases learning towards more human compatible policies.

Bio: Jakob Foerster received a CIFAR AI chair in 2019 and is starting as an Assistant Professor at the University of Toronto and the Vector Institute in fall 2020. During his PhD at the University of Oxford, he helped bring deep multi-agent reinforcement learning to the forefront of AI research and interned at Google Brain, OpenAI, and DeepMind. He has since been working as a research scientist at Facebook AI Research in California, where he will continue advancing the field up to his move to Toronto. He was the lead organizer of the first Emergent Communication (EmeCom) workshop at NeurIPS in 2017, which he has helped organize ever since.

Program Committee

  • Adrian Agogino, University of California, Santa Cruz, US
  • Anna Costa, University of São Paulo, BR
  • Arno Moonens, Vrije Universiteit Brussel, BE
  • Baoxiang Wang, The Chinese University of Hong Kong, HK
  • Bilal Kartal, Borealis AI, CA
  • Brent Harrison, Georgia Institute of Technology, USA
  • Conor Hayes, National University of Ireland Galway, IE
  • Craig Sherstan, University of Alberta, CA
  • Daan Bloembergen, Centrum Wiskunde & Informatica, NL
  • Daniel Hernandez, University of York, UK
  • Denis Steckelmacher, Vrije Universiteit Brussel, BE
  • Diederik Roijers, Vrije Universiteit Amsterdam, NL
  • Elias Fernandez, Vrije Universiteit Brussel, BE
  • Esther Colombini, University of Campinas, BR
  • Faraz Torabi, University of Texas at Austin, USA
  • Flávio Pinheiro, NOVA IMS, Universidade Nova de Lisboa, PT
  • Francisco Santos, Universidade de Lisboa, PT
  • Gabriel Ramos, Universidade do Vale do Rio dos Sinos, BR
  • Hélène Plisnier, Vrije Universiteit Brussel, BE
  • Ibrahim Sobh, Valeo, EG
  • Jen Jen Chung, Eidgenössische Technische Hochschule Zürich, CH
  • Jivko Sinapov, Tufts University, USA
  • Josiah Hanna, University of Edinburgh, UK
  • Julian Garcia, Monash University, AU
  • Karl Mason, Cardiff University, UK
  • Kleanthis Malialis, University of Cyprus, CY
  • Kyriakos Efthymiadis, Vrije Universiteit Brussel, BE
  • Luisa Zintgraf, University of Oxford, UK
  • Mari Kawakatsu, Princeton University, USA
  • Martin Repicky, National University of Ireland Galway, IE
  • Mathieu Reymond, Vrije Universiteit Brussel, BE
  • Miguel Vasco, INESC-ID and IST, University of Lisbon, PT
  • Pablo Hernandez-Leal, Borealis AI, CA
  • Paolo Turrini, University of Warwick, UK
  • Peter Vamplew, Federation University Australia, AU
  • Pieter Libin, Vrije Universiteit Brussel, BE
  • Ramona Merhej, INESC-ID and IST, University of Lisbon, PT
  • Raphael Cobe, Advanced Institute for AI, BR
  • Rodrigo Bonini, Federal University of ABC, BR
  • Roland Bouffanais, Singapore University of Technology and Design, SG
  • Ruben Glatt, Lawrence Livermore National Lab, US
  • Rui Silva, Carnegie Mellon University and Instituto Superior Técnico, USA/PT
  • Samuel Cho, Princeton University, USA
  • Shangtong Zhang, University of Oxford, UK
  • Thomas Moerland, Delft University of Technology, NL
  • Timothy Verstraeten, Vrije Universiteit Brussel, BE
  • Vinicius Carvalho, University of São Paulo, BR
  • Vitor Vasconcelos, Princeton University, USA
  • Wolfram Barfuss, Max-Planck-Institute for Mathematics in the Sciences, DE
  • Yunshu Du, Washington State University, USA

Organization

This year's workshop is organised by:

Senior Steering Committee Members:
  • Enda Howley (National University of Ireland Galway, IE)
  • Daniel Kudenko (University of York, UK)
  • Patrick Mannion (National University of Ireland Galway, IE)
  • Ann Nowé (Vrije Universiteit Brussel, BE)
  • Sandip Sen (University of Tulsa, USA)
  • Peter Stone (University of Texas at Austin, USA)
  • Matthew Taylor (Washington State University, USA)
  • Kagan Tumer (Oregon State University, USA)
  • Karl Tuyls (University of Liverpool, UK)

Contact

If you have any questions about the ALA workshop, please contact the organizers at:
ala.workshop.2020 AT gmail.com

For more general news, discussion, collaboration and networking opportunities with others interested in Adaptive and Learning Agents, please join our LinkedIn Group