TY - GEN
T1 - Unlearning Works Better Than You Think
T2 - 2025 Genetic and Evolutionary Computation Conference, GECCO 2025
AU - Lerasle, Matthieu
AU - Bendahi, Abderrahim
AU - Fradin, Adrien
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s). Publication rights licensed to ACM.
PY - 2025/7/13
Y1 - 2025/7/13
N2 - We introduce Local Reinforcement-Based Selection of Auxiliary Objectives (LRSAO), a novel approach that selects auxiliary objectives using reinforcement learning (RL) to support the optimization process of an evolutionary algorithm (EA), as in the EA+RL framework, and furthermore incorporates the ability to unlearn previously used objectives. By modifying the reward mechanism to penalize moves that do not increase the fitness value and by relying on local auxiliary objectives, LRSAO dynamically adapts its selection strategy to the landscape and unlearns previous objectives when necessary. We analyze and evaluate LRSAO on the black-box complexity version of the non-monotonic Jumpℓ function, with gap parameter ℓ, where each auxiliary objective is beneficial at specific stages of optimization. The Jumpℓ function is hard to optimize for evolutionary algorithms, and the best-known complexity for reinforcement-based selection on Jumpℓ was O(n² log(n)/ℓ). Our approach improves on this result, achieving a complexity of Θ(n²/ℓ² + n log(n)) — a significant improvement that demonstrates the efficiency and adaptability of LRSAO and highlights its potential to outperform traditional methods in complex optimization scenarios. Code is available at https://github.com/FAdrien/LRSAO.
AB - We introduce Local Reinforcement-Based Selection of Auxiliary Objectives (LRSAO), a novel approach that selects auxiliary objectives using reinforcement learning (RL) to support the optimization process of an evolutionary algorithm (EA), as in the EA+RL framework, and furthermore incorporates the ability to unlearn previously used objectives. By modifying the reward mechanism to penalize moves that do not increase the fitness value and by relying on local auxiliary objectives, LRSAO dynamically adapts its selection strategy to the landscape and unlearns previous objectives when necessary. We analyze and evaluate LRSAO on the black-box complexity version of the non-monotonic Jumpℓ function, with gap parameter ℓ, where each auxiliary objective is beneficial at specific stages of optimization. The Jumpℓ function is hard to optimize for evolutionary algorithms, and the best-known complexity for reinforcement-based selection on Jumpℓ was O(n² log(n)/ℓ). Our approach improves on this result, achieving a complexity of Θ(n²/ℓ² + n log(n)) — a significant improvement that demonstrates the efficiency and adaptability of LRSAO and highlights its potential to outperform traditional methods in complex optimization scenarios. Code is available at https://github.com/FAdrien/LRSAO.
KW - EA+RL
KW - evolutionary algorithms
KW - reinforcement learning
UR - https://www.scopus.com/pages/publications/105013084128
U2 - 10.1145/3712256.3726380
DO - 10.1145/3712256.3726380
M3 - Conference contribution
AN - SCOPUS:105013084128
T3 - GECCO 2025 - Proceedings of the 2025 Genetic and Evolutionary Computation Conference
SP - 925
EP - 933
BT - GECCO 2025 - Proceedings of the 2025 Genetic and Evolutionary Computation Conference
A2 - Ochoa, Gabriela
PB - Association for Computing Machinery, Inc
Y2 - 14 July 2025 through 18 July 2025
ER -