Domenech i Vila, M.; Gnatyshak, D.; Tormos, A.; Gimenez-Abalos, V.; Alvarez-Napagao, S. Explaining the Behaviour of Reinforcement Learning Agents in a Multi-Agent Cooperative Environment Using Policy Graphs. Electronics 2024, 13, 573.
Abstract
The adoption of algorithms based on Artificial Intelligence (AI) has been rapidly increasing in recent years. However, some aspects of AI techniques are under heavy scrutiny. For instance, in many use cases, it is not clear whether the decisions of an algorithm are well-informed and conform to human understanding. Having ways to address these concerns is crucial in many domains, especially whenever humans and intelligent (physical or virtual) agents must cooperate in a shared environment. In this paper, we introduce an application of an explainability method based on the creation of a Policy Graph (PG) built from discrete predicates that represent and explain a trained agent's behaviour in a multi-agent cooperative environment. We show that, from these policy graphs, policies for surrogate interpretable agents can be automatically generated. These policies can be used to measure the reliability of the explanations enabled by the PGs, through a fair behavioural comparison between the original opaque agent and the surrogate one. The contributions of this paper represent the first application of policy graphs to explaining agent behaviour in collaborative multi-agent scenarios, and we present experimental results that set this kind of scenario apart from previous applications in single-agent scenarios: when collaborative behaviour is required, predicates that allow representing observations about the other agents are crucial to replicating the opaque agent's behaviour and increasing the reliability of the explanations.
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright:
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.