TY - GEN
T1 - Softwarized and distributed learning for SON management systems
AU - Daher, Tony
AU - Jemaa, Sana Ben
AU - Decreusefond, Laurent
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/7/6
Y1 - 2018/7/6
N2 - Self-Organizing Networks (SON) functions have already proven to be useful for network operations. However, a higher automation level is required to make a network enabled with SON capabilities respond as a whole to the operator's objectives. For this purpose, a Policy Based SON Management (PBSM) layer has been proposed to manage the deployed SON functions. In this paper, we propose to empower the PBSM with cognition capability in order to efficiently manage SON enabled networks. We focus particularly on the implementation of such a Cognitive PBSM (C-PBSM) on a large scale network and propose a scalable approach based on distributed Reinforcement Learning (RL): RL agents are deployed on different clusters of the network. These clusters should be defined in such a way that the RL agents can learn independently. As the interaction between these clusters may evolve in time due for instance to traffic dynamics, we propose a flexible implementation of this C-PBSM framework with dynamic clustering to adapt to the network's evolution. We show how this flexible implementation is rendered possible under the Software Defined Networks (SDN) framework. We also assess the performance of the proposed distributed learning approach on an LTE-A simulator.
AB - Self-Organizing Networks (SON) functions have already proven to be useful for network operations. However, a higher automation level is required to make a network enabled with SON capabilities respond as a whole to the operator's objectives. For this purpose, a Policy Based SON Management (PBSM) layer has been proposed to manage the deployed SON functions. In this paper, we propose to empower the PBSM with cognition capability in order to efficiently manage SON enabled networks. We focus particularly on the implementation of such a Cognitive PBSM (C-PBSM) on a large scale network and propose a scalable approach based on distributed Reinforcement Learning (RL): RL agents are deployed on different clusters of the network. These clusters should be defined in such a way that the RL agents can learn independently. As the interaction between these clusters may evolve in time due for instance to traffic dynamics, we propose a flexible implementation of this C-PBSM framework with dynamic clustering to adapt to the network's evolution. We show how this flexible implementation is rendered possible under the Software Defined Networks (SDN) framework. We also assess the performance of the proposed distributed learning approach on an LTE-A simulator.
U2 - 10.1109/NOMS.2018.8406173
DO - 10.1109/NOMS.2018.8406173
M3 - Conference contribution
AN - SCOPUS:85050650414
T3 - IEEE/IFIP Network Operations and Management Symposium: Cognitive Management in a Cyber World, NOMS 2018
SP - 1
EP - 7
BT - IEEE/IFIP Network Operations and Management Symposium
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 IEEE/IFIP Network Operations and Management Symposium, NOMS 2018
Y2 - 23 April 2018 through 27 April 2018
ER -