Eye Tracking Assistant Pointing

Students

This project presents the work of:

supervised by NIGAY Laurence (IIHM, LIG)

Introduction

Since the end of the last century, pointing devices have become more and more popular. Among them, the touch screen is the best known and is now quite intuitive for users. However, this device requires direct contact between the user and the device.

For indirect interaction, the mouse is one of the most common devices, but it is not always as accurate as the user would like it to be. Combining it with another device could improve its accuracy. Pointing with a finger may seem an intuitive solution at first, but holding an arm up for a long time is tiring for the user and can even lead to cramps. Tracking eye movements could therefore be a good alternative for pointing at objects, since it relies on more natural gestures. However, it can be difficult for a user to fixate a region for a long time, and unconscious eye movements can occur and skew the results.

Approach

There are several methods for selecting a target by finger pointing. Their main problems are user frustration and inaccuracy when selecting the target.

We propose to add a second pointing modality, an eye tracker, to improve accuracy and reduce frustration. The first question is whether the user's eyes follow the target before it is clicked with the mouse, and more specifically whether seeing his own gaze helps the user be more accurate.

Objectives

Our main objective is to study how combining the mouse with eye tracking can improve the accuracy of the mouse as a pointing device.

Implementation

It can be difficult for the eye to fixate a specific target long enough for the eye tracker to register it, and unconscious eye movements can occur and skew the results. To counter this, we display the Voronoi regions (see definition in the Appendix) of the interface's widgets. If the user's eyes move slightly and unconsciously, the gaze still remains in the same region. A widget is therefore considered activated as soon as the gaze is inside its Voronoi region.
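
As a concrete illustration, the activation test reduces to a nearest-widget lookup: a point lies in a widget's Voronoi region exactly when that widget's center is the closest one. The sketch below (in Python, with hypothetical widget names and coordinates) only illustrates the rule and is not the prototype's actual code.

    import math

    # Hypothetical widget centers (x, y) in screen coordinates.
    widgets = {
        "play": (120, 80),
        "stop": (400, 80),
        "menu": (260, 300),
    }

    def active_widget(gaze_x, gaze_y):
        """Return the widget whose Voronoi region contains the gaze point,
        i.e. the widget whose center is closest to the gaze."""
        return min(widgets, key=lambda name: math.dist(widgets[name], (gaze_x, gaze_y)))

    print(active_widget(130, 95))  # -> play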

We built our prototype according to these steps:

  • Create a user interface composed of widgets whose locations, shapes and colors are specified in a separate file
  • Compute the Voronoi diagram of the widgets of this UI (a sketch is given after this list)
  • Use an eye tracker to obtain the coordinates of the gaze
  • Highlight the Voronoi region corresponding to these coordinates
  • Activate the widget corresponding to this region
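
As referenced in the second step above, one possible way to obtain the regions is to compute the Voronoi diagram of the widget centers with SciPy. This is a minimal sketch with made-up coordinates, not the prototype's actual drawing code.

    import numpy as np
    from scipy.spatial import Voronoi, voronoi_plot_2d
    import matplotlib.pyplot as plt

    # Hypothetical widget centers taken from the configuration file.
    centers = np.array([[120, 80], [400, 80], [260, 300], [520, 260]])

    vor = Voronoi(centers)        # Voronoi diagram of the widget centers
    voronoi_plot_2d(vor)          # quick visual check of the regions
    plt.gca().invert_yaxis()      # screen coordinates grow downwards
    plt.show()

    # vor.point_region[i] indexes (into vor.regions) the cell of widget i;
    # bounded cells list their vertex indices, unbounded ones contain -1
    # and must be clipped to the window before they can be drawn or filled.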

For our experiment we use an eye tracker from EyeTribe (link in the references).
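
At runtime, the prototype only has to map each gaze sample to its region and highlight it. The Eye Tribe client API is not reproduced here; read_gaze() below is a hypothetical stand-in (a simulated gaze with some jitter) for whatever call returns the current gaze coordinates.

    import math
    import random
    import time

    widgets = {"play": (120, 80), "stop": (400, 80), "menu": (260, 300)}

    def active_widget(x, y):
        # Same nearest-widget rule as in the sketch above.
        return min(widgets, key=lambda n: math.dist(widgets[n], (x, y)))

    def read_gaze():
        # Hypothetical stand-in for the eye tracker client: here the gaze
        # is simulated hovering near the "menu" widget with a little jitter.
        return 260 + random.gauss(0, 15), 300 + random.gauss(0, 15)

    highlighted = None
    for _ in range(90):                    # about 3 seconds at 30 Hz
        region = active_widget(*read_gaze())
        if region != highlighted:          # only redraw when the region changes
            print("highlight", region)     # the real UI would repaint here
            highlighted = region
        time.sleep(1 / 30)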

Below is an example of our interface: without the Voronoi regions, with all the regions displayed, and with only the region that follows the participant's gaze highlighted.

Normal interface
Interface with all Voronoi regions
Interface with the Voronoi region selected by the gaze

Experiment

Procedure

For our experiments, we created 18 test files. Each file contains the widgets' scale, the number of widgets to display and the features of each widget. A widget is associated with its coordinates in the plane, a shape and a color. The shape can be a square, triangle, diamond or octagon, and the color red, yellow, green, blue or purple. Moreover, each widget is surrounded by a black rectangle that represents its button.
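
The exact file format is not reproduced here; the sketch below shows one plausible encoding of such a test file and how it could be read. The field names are our assumptions, not the project's actual format.

    import json

    # Hypothetical test file: a global scale plus one entry per widget.
    test_file = """
    {
      "scale": 1.0,
      "widgets": [
        {"x": 120, "y": 80,  "shape": "triangle", "color": "purple"},
        {"x": 400, "y": 80,  "shape": "square",   "color": "red"},
        {"x": 260, "y": 300, "shape": "octagon",  "color": "green"}
      ]
    }
    """

    config = json.loads(test_file)
    for w in config["widgets"]:
        # Each widget is drawn at (x, y) with the given shape and color,
        # surrounded by a black rectangle acting as its button.
        print(w["shape"], w["color"], "at", (w["x"], w["y"]))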

During the experiments, the user has to find a unique target, defined as a purple triangle. There is always exactly one purple triangle in each test file. The target's position is chosen randomly each time, even for the same test file. The user is allowed one click, and then the next file appears.

The experiment consists of a learning phase and a test phase.

The learning phase helps the user become comfortable with the interaction. It contains three steps:

  • Step 1: the user has to find the target and click with the mouse on its surrounding button, for all 18 test files, without any help or indication
  • Step 2: the user does the same exercise with the same test files (the target's position is changed), but now the Voronoi region of his gaze is displayed on the interface
  • Step 3: the user can practice with random test files, with the Voronoi region of his gaze displayed

In the test phase, the user repeats steps 1 and 2 of the learning phase with the same test files, but each time the location of the target is changed randomly.

Example of an experiment

For each test, right after the click, we save the following data (a minimal logging sketch is given after the list):

  • the target's and the click's positions
  • the target's size
  • the time spent on the test
  • a boolean corresponding to the success of the click on the target
  • a boolean for the success of the click on the target's Voronoi region
  • a boolean to check whether the user has looked at the target or not
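
A minimal way to record these fields is one CSV row per trial; the column names below are ours, chosen to mirror the list above, and the example values are invented.

    import csv
    import os

    FIELDS = ["target_x", "target_y", "click_x", "click_y", "target_size",
              "time_spent_s", "clicked_target", "clicked_voronoi_region",
              "looked_at_target"]

    def log_trial(path, trial):
        """Append one trial (a dict with the fields above) to a CSV file."""
        write_header = not os.path.exists(path)
        with open(path, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if write_header:
                writer.writeheader()
            writer.writerow(trial)

    log_trial("results.csv", {
        "target_x": 260, "target_y": 300, "click_x": 255, "click_y": 310,
        "target_size": 40, "time_spent_s": 2.8,
        "clicked_target": True, "clicked_voronoi_region": True,
        "looked_at_target": True,
    })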

The experiment was run with 11 participants with different levels of computer proficiency, and each participant chose the duration of the third step of the learning phase. For each participant we recorded their proficiency, their age and whether they wear glasses.

After the experiment the user has to answer several questions:

  • How many targets do you think you clicked correctly with the Voronoi region displayed during Step 2 of the learning phase?
  • How many targets do you think you clicked correctly with the Voronoi region displayed during the test phase?
  • Do you find the eye tracker useful?
  • What was your impression before the experiment? And after?
  • Do you have any remarks about this protocol?

Potential Issues

The eye tracker is a recent technology that still has some unsolved issues. First, the calibration is not 100% accurate, both because of the delay that can occur between the user's gaze movement and its recording by the device, and because the user must not move during the experiment. Moreover, whenever the user does move during the experiment, the calibration has to be redone for the new position. Finally, there can be issues for users who wear glasses, especially if the lenses are too thick, because the eye tracker cannot follow the gaze correctly.

Results

Results and Interpretation

[Figures: click positions before training, without and with the Voronoi region displayed]

These results show that before training, when the Voronoi region is not visible, every user tries to click precisely on the button. When the Voronoi region is displayed, users relax their clicking accuracy.

[Figures: click positions after training, without and with the Voronoi region displayed]

After training, accuracy is similar between the two tasks, and the benefit of the Voronoi-assisted task becomes apparent.

[Figures: whether users looked at the target, before and after training]

These two graphs show that users usually look at the target when they click, given that we stabilize eye movements. User 7 had trouble with his glasses, so his low score may be due to a hardware problem.

User Feedback

The experiments were run with 11 participants with different profiles. Their ages ranged from 21 to 73 years. There were mainly two types of users: 7 casual computer users and 4 more experienced ones. Two participants wore glasses. Overall, the feedback is quite positive. Most of the participants found the interaction interesting and useful. They particularly enjoyed the fact that the button looked at by the gaze was highlighted, which helped them select the target more easily. Some added that it also helped them find the target faster. Finally, letting the user choose the duration of the practice step was much appreciated.

Conclusion

Protocol Criticism

The protocol is not perfect and can be improved in future work in this domain. First, the interface is not very appealing aesthetically, which can hinder an efficient search for some users. Second, the participants were asked to be as fast as possible during the experiments, whereas some would have preferred to be more accurate and precise even at the cost of some time. We noticed that older users had more difficulty complying with the time constraint. Finally, the time spent on each test is difficult to exploit because we cannot separate the search time from the selection time. The search time could perhaps have been reduced by making the target more visible.

Future Work

This project is a first step towards improving the accuracy of the mouse as a pointing device by combining it with an eye tracker. During the experiments, and especially while processing the participants' data, several questions stood out:

  • After how long can we consider that an element is selected by the gaze? (see the dwell-time sketch after this list)
  • How can we be sure that a user intends to select an element from the gaze alone?
  • How do the eyes move while searching a screen?
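
As an illustration of the first question, a common answer is a dwell-time threshold: the element counts as selected only after the gaze has stayed in its region for a minimum duration. The threshold value and the class below are assumptions for the sketch, not results of this project.

    import time

    DWELL_S = 0.4      # assumed dwell threshold; the right value is exactly the open question

    class DwellSelector:
        """Reports a selection once the gaze stays on the same element long enough."""

        def __init__(self, dwell_s=DWELL_S):
            self.dwell_s = dwell_s
            self.current = None
            self.since = None

        def update(self, element, now=None):
            now = time.monotonic() if now is None else now
            if element != self.current:           # gaze moved to another region
                self.current, self.since = element, now
                return None
            if now - self.since >= self.dwell_s:  # stayed long enough: selected
                return element
            return None

    # Example: feeding the same region name every 10 ms.
    selector = DwellSelector()
    picked = None
    for t in range(60):
        picked = selector.update("menu", now=t * 0.01) or picked
    print(picked)  # "menu", once 0.4 s of dwell has elapsed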

These questions call for a broader scientific background, such as a better understanding of human gaze and of the correlation between the mouse and the gaze, which would require some background in psychology and orthoptics. This opens new possible lines of approach to this problem.

Finally, a new protocol close to this one could be created to take finger pointing into account. However, combining the eye tracker with finger pointing may lead to space problems: to use the eye tracker, the user needs to stay in a fixed position and rather close to the sensor, whereas pointing with a finger will almost inevitably make him move his body, which would skew the calibration.

References

https://hal.archives-ouvertes.fr/hal-01586677/document

https://hal.inria.fr/hal-01184544/document/

http://theeyetribe.com/theeyetribe.com/about/index.html

Appendix

De Berg M., Otfried C., Van Kreveld M., Overmars M (1997) Computational Geometry: Algorithms and Applications