ADOPEL: Adaptive Data Collection Protocol Using Reinforcement Learning for VANETs

Ahmed Soua, Hossam Afifi

Research output: Contribution to journal › Article › peer-review

Abstract

Efficient propagation of information over a vehicular wireless network has long been a focus of the research community. However, few contributions have addressed vehicular data collection, and fewer still have applied learning techniques to such a highly dynamic networking environment. These smart learning approaches make the collection operation more reactive to node mobility and topology changes than traditional techniques, which simply adapted MANET propositions. To grasp the efficiency opportunities offered by these learning techniques, an Adaptive Data collection Protocol using reinforcement Learning (ADOPEL) is proposed for VANETs. The proposal is based on a distributed learning algorithm with a reward function that takes into account both the delay and the number of aggregatable packets. The Q-learning technique gives vehicles the opportunity to optimize their interactions with this highly dynamic environment through their experience in the network. Compared to non-learning schemes, our proposal confirms its efficiency and achieves a good tradeoff between delay and collection ratio.
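The abstract describes a distributed Q-learning scheme whose reward combines delay and the number of aggregatable packets. The sketch below illustrates that idea with the standard Q-learning update; the reward weights (`w_delay`, `w_agg`), the state/action encoding, and all node names are hypothetical, not the paper's actual formulation.

```python
# Illustrative Q-learning update for next-hop selection in a VANET
# data-collection scenario. Reward weights and state/action encoding
# are assumptions for this sketch, not ADOPEL's exact definitions.

def reward(delay, n_aggregatable, w_delay=0.5, w_agg=0.5):
    """Higher reward for lower delay and more aggregatable packets."""
    return -w_delay * delay + w_agg * n_aggregatable

def q_update(Q, state, action, r, next_state, actions, alpha=0.1, gamma=0.9):
    """Standard update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (r + gamma * best_next - old)
    return Q[(state, action)]

Q = {}
# A collecting vehicle at node "v1" forwards via neighbor "v2",
# observing a 0.2 s delay and 3 aggregatable packets:
r = reward(delay=0.2, n_aggregatable=3)
q_update(Q, "v1", "v2", r, "v2", actions=["v1", "v3"])
```

Each vehicle maintains its own Q-table and updates it from local observations, which is what makes the scheme distributed and reactive to topology changes.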

Original language: English
Pages (from-to): 2182-2193
Number of pages: 12
Journal: Journal of Computer Science
Volume: 10
Issue number: 11
DOIs
Publication status: Published - 1 Jan 2014
Externally published: Yes

Keywords

  • Collection ratio
  • Data collection
  • Number of hops
  • Q-learning
  • Reinforcement learning
  • Vehicular Ad Hoc Networks (VANETs)
