Milano, N.; Nolfi, S.: Qualitative differences between evolutionary strategies and reinforcement learning methods for control of autonomous agents. In: Evolutionary Intelligence, ISSN 1864-5909, (2024), pp. 1-12. DOI: 10.1007/s12065-022-00801-3
Qualitative differences between evolutionary strategies and reinforcement learning methods for control of autonomous agents
Milano, N.; Nolfi, S.
2024
Abstract
In this paper we analyze the qualitative differences between evolutionary strategies and reinforcement learning algorithms by focusing on two popular state-of-the-art algorithms: the OpenAI-ES evolutionary strategy and the Proximal Policy Optimization (PPO) reinforcement learning algorithm, the most similar methods of the two families. We analyze how the methods differ with respect to: (i) general efficacy, (ii) ability to cope with rewards that are sparse in time, (iii) propensity/capacity to discover minimal solutions, (iv) dependency on reward shaping, and (v) ability to cope with variations of the environmental conditions. The analysis of the performance and of the behavioral strategies displayed by agents trained with the two methods on benchmark problems enables us to demonstrate qualitative differences that were not identified in previous studies, to identify the relative weaknesses of the two methods, and to propose ways to ameliorate some of those weaknesses. We show that the characteristics of the reward function have a strong impact that varies qualitatively not only between the OpenAI-ES evolutionary algorithm and the PPO reinforcement learning algorithm but also among other reinforcement learning algorithms, thus demonstrating the importance of tailoring the characteristics of the reward function to the algorithm used.
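For readers unfamiliar with the two methods compared in the abstract, the sketches below show the core update of each: a single OpenAI-ES step (Salimans et al., 2017) and the PPO clipped surrogate loss (Schulman et al., 2017). These are minimal illustrative sketches, not the implementations used in the paper; the function names and hyperparameter values (alpha, sigma, pop, eps) are assumptions chosen for readability.

```python
import numpy as np

def openai_es_step(theta, fitness, alpha=0.01, sigma=0.1, pop=50, rng=None):
    """One simplified OpenAI-ES update of parameter vector theta."""
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal((pop, theta.size))
    # Antithetic (mirrored) sampling reduces the variance of the estimate.
    eps = np.concatenate([eps, -eps])
    # Evaluate each perturbed parameter vector with the fitness function.
    returns = np.array([fitness(theta + sigma * e) for e in eps])
    # Rank-based fitness shaping makes the update invariant to reward scale.
    ranks = returns.argsort().argsort().astype(float)
    weights = ranks / (ranks.size - 1) - 0.5
    # Gradient estimate: weighted sum of the sampled perturbations.
    grad = (weights[:, None] * eps).sum(0) / (ranks.size * sigma)
    return theta + alpha * grad
```

PPO, by contrast, follows an estimated policy gradient but clips the probability ratio between the new and old policy so that each update stays close to the data-collecting policy:

```python
import torch

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """PPO clipped surrogate loss.

    ratio: pi_new(a|s) / pi_old(a|s) per sampled action (tensor).
    advantage: estimated advantage per sampled action (tensor).
    """
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Taking the elementwise minimum penalizes overly large policy changes.
    return -torch.min(unclipped, clipped).mean()
```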
File | Size | Format | License | Access
---|---|---|---|---
EI.pdf | 1.89 MB | Adobe PDF | Public domain | Open access
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.