
Publication

Computer vision techniques for automatic analysis of mobile eye-tracking data

Book - Dissertation

In the last four decades, eye-tracking research has established itself as a powerful paradigm for studying human visual behaviour. More recently, efforts have been made to extend the application field of eye-tracking research beyond the boundaries of lab-based experiments; research on marketing or on human-human interaction, for example, clearly benefits from real-life experiments. The concept of mobile eye-tracking was introduced in 1999. A mobile eye-tracker is essentially a sophisticated pair of glasses with a front camera that captures the field of view and a second camera that is directed towards the eyes and records the eye movements. Both recordings are combined to determine at which position in the field of view a person is looking. The popularity of mobile eye-trackers as a means of measuring user experience and behaviour in very diverse application areas is increasing rapidly. Unfortunately, this popularity is tempered by the unfavourable property that a mobile eye-tracker produces a large amount of data that needs to be analysed.

The analysis of an eye-tracking experiment can be defined as determining for how long and how often a person looks at a relevant object. Depending on the purpose of each eye-tracking experiment, these relevant objects may vary from products on a shelf in the context of market research to the face of a person in an experiment on human-human interaction. In the last decade, several attempts have been made to facilitate this analytical challenge. Unfortunately, the existing methods require experimental control and therefore impose restrictions on the concept of real-life mobile eye-tracking. Marker-based analysis, for example, allows for a partially automatic analysis, but it confines the flexibility of mobile eye-tracking. Other solutions, such as automatic semantic analysis, are only applicable to a limited range of eye-tracking applications. Eye-tracking researchers are therefore often forced to analyse the recordings manually, which is a painstaking and time-consuming task.

To overcome these issues, in this dissertation we proposed a computer vision-based framework for the semi-automatic analysis of mobile eye-tracking recordings. The goal of this PhD project was to apply computer vision algorithms to the automatic analysis of mobile eye-tracking recordings. By using computer vision algorithms to automatically detect relevant objects in the images captured by the scene camera of a mobile eye-tracker, we can automatically determine whether a person looked at those objects and, if so, how often and for how long. Without doubt, efforts to automate this type of analysis can contribute to the increasing popularity of mobile eye-tracking in a broad range of applications.
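The core measurement behind this type of analysis can be sketched compactly. The Python sketch below is illustrative only and is not the framework's actual implementation: it assumes a hypothetical data format in which gaze samples have already been mapped to scene-camera pixel coordinates and an object detector has produced, for each frame, a bounding box for the relevant object (or None when the detector misses it); the names GazeSample and dwell_statistics are invented for this example.

    from dataclasses import dataclass

    @dataclass
    class GazeSample:
        t: float  # timestamp in seconds
        x: float  # gaze x-coordinate in scene-camera pixels
        y: float  # gaze y-coordinate in scene-camera pixels

    def inside(box, x, y):
        """True if the gaze point (x, y) lies within box = (x0, y0, x1, y1)."""
        x0, y0, x1, y1 = box
        return x0 <= x <= x1 and y0 <= y <= y1

    def dwell_statistics(samples, boxes_per_frame, frame_duration):
        """Accumulate total dwell time and count separate glances on the object.

        samples[i] is the gaze sample aligned with scene frame i;
        boxes_per_frame[i] is the detected bounding box in that frame, or None.
        """
        dwell, glances, was_on_target = 0.0, 0, False
        for sample, box in zip(samples, boxes_per_frame):
            on_target = box is not None and inside(box, sample.x, sample.y)
            if on_target:
                dwell += frame_duration
                if not was_on_target:
                    # an off-target to on-target transition starts a new glance
                    glances += 1
            was_on_target = on_target
        return dwell, glances

In practice a short off-target gap (for example, a single missed detection) would typically be tolerated before closing a glance, but this minimal version conveys the principle: once detection is automated, 'how long' and 'how often' reduce to simple bookkeeping.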
Developing such an analysis framework is not a trivial task, since several challenges need to be tackled. First, it is of vital importance that the accuracy of the analysis is as high as possible. Second, the automatic analysis should be faster than manual analysis and, even more importantly, using our framework should significantly decrease the manual workload. Third, the images we process are recorded in unconstrained environments using a wearable device; this results in challenging images in which low illumination and motion blur are often present, making the automatic analysis much more complex. Finally, we aim to analyse visual behaviour with respect to small moving objects, such as the hand gestures of another person, which makes the analysis even more challenging.

Throughout this PhD project, we focused on four main classes to be recognised: our analysis framework is capable of analysing visual behaviour with respect to objects (such as specific products in a shopping experiment), human bodies and faces, human hands, and gestures. Furthermore, we proposed a semi-automatic analysis approach in which manual intervention and automatic analysis are efficiently intertwined to ensure high accuracy even in challenging conditions. To fully validate the capabilities of our analysis framework, we collected a broad range of eye-tracking recordings and analysed them with the framework. This thorough validation demonstrated the applicability of our approach to various types of eye-tracking experiments.
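As a concrete illustration of the detection step for the faces class, the sketch below uses an off-the-shelf OpenCV Haar cascade to test whether the gaze point falls on a detected face in a single scene-camera frame. This is a generic stand-in that assumes the opencv-python package; it is not the detector developed in this dissertation, which must also cope with the low illumination and motion blur discussed above.

    import cv2

    # One of OpenCV's bundled Haar cascades; any detector trained for the
    # relevant object class could be substituted here.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def gaze_on_face(frame_bgr, gaze_x, gaze_y):
        """True if the gaze point falls inside a detected face in this frame."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.equalizeHist(gray)  # mild compensation for poor lighting
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        return any(x <= gaze_x <= x + w and y <= gaze_y <= y + h
                   for (x, y, w, h) in faces)

Running such a per-frame test over a whole recording yields exactly the boxes_per_frame input assumed by the dwell-time sketch above, and it is also where the semi-automatic aspect enters: frames in which the detector is unreliable can be flagged for manual intervention instead.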
Number of pages: 177
Year of publication: 2016
Accessibility: Open