Extended LazyNav: Virtual 3D Ground Navigation for Large Displays and Head-Mounted Displays

Parinya Punpongsanon, Emilie Guy, Daisuke Iwai, Kosuke Sato, Tamy Boubekeur

Research output: Contribution to journal › Article › peer-review

Abstract

This paper extends LazyNav, a head-free, eyes-free, and hands-free mid-air ground navigation control model first presented at IEEE 3D User Interfaces (3DUI) 2015, in particular with a new application to head-mounted displays (HMDs). Our mid-air interaction metaphor uses only a single pair of the remaining tracked body elements to drive the navigation. The user can therefore navigate the scene while still performing other interactions with her hands and head, e.g., carrying a bag, grasping a cup of coffee, or observing the content by moving her eyes and locally rotating her head. We design several body motions for navigation by considering the use of non-critical body parts and develop assumptions about ground navigation techniques. Through user studies, we identify the motions that are easy to discover, easy to control, socially acceptable, accurate, and not tiring. Finally, we evaluate the desired ground navigation features with a prototype application in both large display (LD) and HMD navigation scenarios. We highlight several recommendations for designing mid-air ground navigation techniques for LDs and HMDs.

Original language: English
Article number: 7501805
Pages (from-to): 1952-1963
Number of pages: 12
Journal: IEEE Transactions on Visualization and Computer Graphics
Volume: 23
Issue number: 8
DOIs
Publication status: Published - 1 Aug 2017
Externally published: Yes

Keywords

  • 3D user interface
  • navigation
  • spatial interaction
  • virtual reality

