Exploration / exploitation trade-off in mobile context-aware recommender systems

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The contextual bandit problem has been studied in the recommender system community, but without much attention to the contextual aspect of the recommendation. In this paper we introduce an algorithm that tackles this problem by modeling Mobile Context-Aware Recommender Systems (MCRS) as a contextual bandit problem with dynamic exploration/exploitation. Within a deliberately designed offline simulation framework, we conduct extensive evaluations on real online event log data. The experimental results and detailed analysis demonstrate that our algorithm outperforms the surveyed algorithms.
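To illustrate the exploration/exploitation trade-off the abstract refers to, the sketch below shows a generic contextual epsilon-greedy bandit with a decaying exploration rate. It is not the paper's algorithm; the context representation, reward model, and decay schedule are placeholder assumptions chosen for illustration.

```python
import random
from collections import defaultdict

class ContextualEpsilonGreedy:
    """Illustrative contextual bandit with a decaying exploration rate.

    A generic sketch of dynamic exploration/exploitation, not the paper's
    method: contexts are hashable keys, rewards are scalars, and the
    exploration rate eps shrinks multiplicatively over time.
    """

    def __init__(self, arms, eps_start=1.0, eps_min=0.05, decay=0.99):
        self.arms = list(arms)
        self.eps = eps_start
        self.eps_min = eps_min
        self.decay = decay
        # Running per-(context, arm) statistics of observed rewards.
        self.counts = defaultdict(int)
        self.values = defaultdict(float)

    def select(self, context):
        # Explore with probability eps, otherwise exploit the best-known arm.
        if random.random() < self.eps:
            arm = random.choice(self.arms)
        else:
            arm = max(self.arms, key=lambda a: self.values[(context, a)])
        # Dynamic exploration: reduce eps as more feedback accumulates.
        self.eps = max(self.eps_min, self.eps * self.decay)
        return arm

    def update(self, context, arm, reward):
        # Incremental mean update for the chosen (context, arm) pair.
        key = (context, arm)
        self.counts[key] += 1
        self.values[key] += (reward - self.values[key]) / self.counts[key]
```

In use, the recommender calls `select` with the current mobile context (e.g. location or time bucket), shows the chosen item, and feeds the observed click/no-click back through `update`.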

Original language: English
Title of host publication: AI 2012
Subtitle of host publication: Advances in Artificial Intelligence - 25th Australasian Joint Conference, Proceedings
Pages: 591-601
Number of pages: 11
DOIs
Publication status: Published - 26 Dec 2012
Event: 25th Australasian Joint Conference on Artificial Intelligence, AI 2012 - Sydney, NSW, Australia
Duration: 4 Dec 2012 – 7 Dec 2012

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 7691 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 25th Australasian Joint Conference on Artificial Intelligence, AI 2012
Country/Territory: Australia
City: Sydney, NSW
Period: 4/12/12 – 7/12/12

Keywords

  • artificial intelligence
  • exploration/exploitation dilemma
  • machine learning
  • recommender system
