Assessment of accuracy for target detection in 3D-space using eye tracking and computer vision

CHU Sainte-Justine, Rehabilitation Chair of Engineering Applied to Pediatrics, Montréal, Québec, Canada
Mechanical Engineering, Polytechnique Montréal, Montréal, Québec, Canada
DOI: 10.7287/peerj.preprints.2718v1
Subject Areas: Human-Computer Interaction, Real-Time and Embedded Systems
Keywords: Eye tracking, 3D environment, Visual control, Computer vision
Copyright: © 2017 Leroux et al.
Licence: This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose, provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Preprints) and either DOI or URL of the article must be cited.
Cite this article: Leroux M, Achiche S, Raison M. 2017. Assessment of accuracy for target detection in 3D-space using eye tracking and computer vision. PeerJ Preprints 5:e2718v1

Abstract

Over the last decade, eye tracking systems have been developed and used in many fields, mostly to identify targets on a screen, i.e. a plane. For novel applications such as the control of robotic devices by the user's vision, there is great interest in developing methods based on eye tracking to identify target points in free three-dimensional environments. The objective of this paper is to characterize the accuracy of a combination of eye tracking and computer vision that was recently designed to overcome many limitations of eye tracking in 3D space. We propose a characterization protocol to assess the behavior of the system's accuracy over the workspace of a robotic manipulator assistant. Applying this protocol to 33 subjects, we estimated how the error of the system varies with the target position on a cylindrical workspace and with the acquisition time. Over our workspace, targets are located on average at 0.84 m, and our method shows an accuracy 12.65 times better than the calculation of the 3D point of gaze. With the current accuracy, many potential applications become possible, such as visually controlled robotic assistants in the field of rehabilitation and adaptation engineering.
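For context on the baseline mentioned above, the 3D point of gaze is commonly estimated by vergence: casting a gaze ray from each eye and, since the two rays rarely intersect exactly, taking the midpoint of their closest approach. The sketch below illustrates this standard estimate only; it is not the method evaluated in this paper, and the eye positions, gaze directions, and function names are illustrative assumptions.

```python
import numpy as np

def point_of_gaze_3d(origin_l, dir_l, origin_r, dir_r):
    """Estimate the 3D point of gaze as the midpoint of the closest
    approach between the left and right gaze rays.

    Hypothetical illustration of the vergence-based baseline; not the
    eye tracking and computer vision combination assessed in the paper.
    """
    d_l = dir_l / np.linalg.norm(dir_l)   # unit gaze direction, left eye
    d_r = dir_r / np.linalg.norm(dir_r)   # unit gaze direction, right eye
    w0 = origin_l - origin_r

    a = np.dot(d_l, d_l)
    b = np.dot(d_l, d_r)
    c = np.dot(d_r, d_r)
    d = np.dot(d_l, w0)
    e = np.dot(d_r, w0)

    denom = a * c - b * b     # ~0 when the gaze rays are (nearly) parallel
    if abs(denom) < 1e-9:
        # Parallel rays: fall back to a point 1 m along the left gaze ray.
        return origin_l + d_l * 1.0

    s = (b * e - c * d) / denom   # parameter along the left ray
    t = (a * e - b * d) / denom   # parameter along the right ray

    closest_l = origin_l + s * d_l
    closest_r = origin_r + t * d_r
    return 0.5 * (closest_l + closest_r)

# Example: eyes ~6.4 cm apart, both fixating a target ~0.84 m ahead.
left_eye = np.array([-0.032, 0.0, 0.0])
right_eye = np.array([0.032, 0.0, 0.0])
target = np.array([0.0, 0.0, 0.84])
gaze = point_of_gaze_3d(left_eye, target - left_eye, right_eye, target - right_eye)
print(gaze)  # -> approximately [0. 0. 0.84]
```

In practice, small angular errors in the measured gaze directions translate into large depth errors at this kind of ray intersection, which is why the combined eye tracking and computer vision approach assessed here is compared against it.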

Author Comment

This is a preprint submission to PeerJ Preprints