Tactile feedback for GPS from steering

Introduction

Background

Driving is, for many, a day-to-day necessity, and it places the user in a specialized operating context: one with a high cognitive load and little room for error.

Many factors contribute to this high cognitive load. The driver must stay aware of the road itself, the surrounding cars, traffic signals, secondary controls (such as adjusting the air conditioning or the music), any other passengers in the car, and, when navigating to an unfamiliar destination, the directions to follow.

Errors in this process can be fatal, so safety is of the utmost importance. The driving context therefore calls for hands-free and eyes-free interaction.

State of the Art

Current GPS systems use voice in combination with a real-time representation of the road. The main concern is the need to constantly take one's eyes off the road to consult the display; it is often ambiguous or hard to tell when a turn is approaching, and the greater this ambiguity, the more often the user consults the map and the more time they spend not looking at the road.

Interacting with a typical GPS, such as Google Maps, requires a mobile application or a screen of some sort, which is a major distraction for the driver.

Figure 1 - A navigation map (Google Maps)

Approach

Objectives

Our objective is to take advantage of the fact that, ideally, a driver's hands are on the wheel and their eyes are on the road at all times. Current GPS systems work against both of these conditions.

To this end, we propose receiving GPS feedback through vibrations in the steering wheel instead of the traditional voice or visual feedback. As the user approaches a right turn, the right side of the steering wheel vibrates to signal that it is time to turn right; likewise for left turns.

This way, the user gets quick feedback from the system while keeping their eyes on the road at all times. It also encourages safer driving: both hands must stay on the steering wheel to feel the vibrations on either side, a practice that gives more control of the car and faster reactions but that many people neglect.

Implementation

For the prototype, we ran a driving simulator called City Car Driving [1] on a Lenovo Y700 gaming laptop and marked out a specific path, from point A to point B, with 11 turns in total for the user to drive through. We used an external steering wheel (yoke) and pedals for braking and accelerating to more closely match the driving experience.

Figure 2 - The path (highlighted in yellow with starting and ending points A and B labeled) that we created for users to drive through

To simulate the voice and vibration commands we used the Wizard of Oz approach. For voice, we stood behind the subjects and told them what the next approaching turn was. For vibration, we fabricated our own device using a 2-channel RF receiver with an A/B remote-control transmitter connected to two vibration motors, one taped to each side of the steering wheel; a press of a button on the remote control vibrated the corresponding side.
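In a future version the triggering could be automated instead of wizarded. Below is a minimal Python sketch of what that could look like, assuming a Raspberry Pi drives the two motors through the RPi.GPIO library on hypothetical pins 17 and 27; our actual prototype used the RF remote described above.

    import time
    import RPi.GPIO as GPIO  # assumes a Raspberry Pi; our prototype used an RF remote instead

    LEFT_MOTOR_PIN = 17      # hypothetical wiring, one motor per side of the wheel
    RIGHT_MOTOR_PIN = 27

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(LEFT_MOTOR_PIN, GPIO.OUT)
    GPIO.setup(RIGHT_MOTOR_PIN, GPIO.OUT)

    def pulse(side, duration=0.5):
        """Vibrate one side of the steering wheel for `duration` seconds."""
        pin = LEFT_MOTOR_PIN if side == "left" else RIGHT_MOTOR_PIN
        GPIO.output(pin, GPIO.HIGH)
        time.sleep(duration)
        GPIO.output(pin, GPIO.LOW)

    # Example: signal an approaching right turn with three short pulses.
    for _ in range(3):
        pulse("right", duration=0.3)
        time.sleep(0.2)
    GPIO.cleanup()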

The game provided a bird's-eye-view minimap showing the user's current position relative to the nearby surroundings, and we printed out a full-size map with the entire path marked to simulate the map a GPS would provide.

We also added an eye tracker (The Eye Tribe) in order to compare, for each interaction type, where the user's eyes were focused on the screen over time.
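For reference, the quarter-second gaze log could be collected along the following lines. This is only a Python sketch: get_gaze_sample() is a hypothetical stand-in for the tracker SDK call, and coordinates are assumed to be normalized so that 0.5 is the center of the screen, as in the plots below.

    import csv
    import time

    def get_gaze_sample():
        # Hypothetical stand-in for the eye tracker SDK call. It should return
        # (left_x, left_y, right_x, right_y), normalized so 0.5 is screen center.
        raise NotImplementedError

    with open("gaze_log.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t", "left_x", "left_y", "right_x", "right_y"])
        start = time.time()
        while time.time() - start < 600:   # log for up to a ten-minute run
            writer.writerow([round(time.time() - start, 2), *get_gaze_sample()])
            time.sleep(0.25)               # quarter-second intervals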

Figure 3 - The prototype fully set up and under experiment

Experimentation

Procedure

For the experiment, we wanted to test which was more effective: receiving feedback through voice or through vibration.

We chose to test on 4 subjects.

For each subject, we first gave a 15-minute practice run to allow them to get acclimated to the driving environment, such as the sensitivity of the steering wheel and pedals and the controls for changing gears.

Then we ran the actual measured test with the users. To avoid a biased experiment, we counterbalanced the two variables across subjects: interaction type (voice or vibration) and starting/ending point (A to B or B to A; see the map above).

Figure 4 - Experiment setup for 4 subjects
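One way to enumerate such a counterbalanced assignment in Python (Figure 4 shows the setup we actually used; the rotation below is an illustrative sketch):

    from itertools import product

    # Cross the two variables: each subject gets a different combination
    # for their first run, then the opposite values for their second run.
    feedback_types = ["voice", "vibration"]
    directions = ["A to B", "B to A"]

    for subject, (fb, dr) in enumerate(product(feedback_types, directions), start=1):
        other_fb = "vibration" if fb == "voice" else "voice"
        other_dr = "B to A" if dr == "A to B" else "A to B"
        print(f"Subject {subject}: run 1 = {fb}, {dr}; run 2 = {other_fb}, {other_dr}")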

The evaluation criteria for each test run are described below.

Evaluation Criteria

We decided on a few metrics to record at the end of each test run. For the evaluation, we chose 3 objective measures and 1 subjective measure; a sketch of how a run's results could be recorded follows the list:


1- Time taken to complete the path

2- Number of errors made while driving (as tracked by the game itself):

2.a- Number of collisions

2.b- Number of times went off road

2.c- Number of speed violations

2.d- Number of missed/incorrectly made turns

3- Data from the eye tracker (x and y positions of each eye at quarter-second intervals)

4- NASA Task Load Index (NASA-TLX) score [2]

  • Below are some sample questions from the NASA-TLX survey:
Figure 5 - NASA-TLX questionnaire sample
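Putting the criteria together, each run's measurements could be captured in a record like the following Python sketch (the field names are ours, not taken from the simulator or the survey):

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class RunRecord:
        """One subject's measurements for a single run (illustrative names)."""
        subject: int
        feedback: str              # "voice" or "vibration"
        direction: str             # "A to B" or "B to A"
        finish_time_s: float       # criterion 1
        collisions: int            # criterion 2.a
        off_road: int              # criterion 2.b
        speed_violations: int      # criterion 2.c
        wrong_turns: int           # criterion 2.d
        gaze: List[Tuple[float, float]] = field(default_factory=list)  # criterion 3, (x, y) per sample
        tlx_score: float = 0.0     # criterion 4

        @property
        def total_errors(self) -> int:
            return self.collisions + self.off_road + self.speed_violations + self.wrong_turns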

Potential Issues

We ran the experiment on only 4 subjects. We were unfortunately limited in terms of time and resources (willing test subjects); ideally we would have had more test subjects to solidify our conclusions.

We used the NASA-TLX score as one of our metrics. It was originally developed for pilots and the experience of flying, but we applied it to driving as well. A driving-specific measure, the Driving Activity Load Index (DALI) [3], has since been developed, but we were unable to find its specific questionnaire and procedure online, so we stuck with the NASA-TLX, for which more information was available. Differences between the two include less emphasis on physical demand in the DALI (because driving is not a particularly physically strenuous activity) and more emphasis on temporal demand. The NASA-TLX may therefore not be perfectly suited to our experiment, but since it has also been used in other cognitive-load driving studies (Nilsson and Alm, 1995 [4]; Harbluk, Noy and Eizenman, 2007 [5]), we judged it the closest match and relevant enough to be meaningful.
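For context, the overall NASA-TLX score is a weighted average of six subscale ratings, where each dimension's weight is the number of the 15 pairwise comparisons it wins. A minimal Python sketch (the numbers are illustrative, not a subject's actual responses):

    DIMENSIONS = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

    def tlx_score(ratings, weights):
        # Each dimension is rated 0-100; the weights come from 15 pairwise
        # comparisons between dimensions, so they must sum to 15.
        assert sum(weights.values()) == 15
        return sum(ratings[d] * weights[d] for d in DIMENSIONS) / 15

    ratings = {"mental": 70, "physical": 30, "temporal": 55,
               "performance": 40, "effort": 60, "frustration": 45}
    weights = {"mental": 4, "physical": 1, "temporal": 3,
               "performance": 2, "effort": 3, "frustration": 2}
    print(tlx_score(ratings, weights))  # 55.0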

Results

Finishing Times

Below is a bar chart of the finishing times of the voice and vibration runs for each subject.

Figure 6 - Bar chart showing finishing times

We can observe that every person finished the voice run slightly faster than the vibration run.

Errors

The errors made by each subject during each run are displayed below.

Figure 7 - Bar chart showing errors made by each subject on both runs

Below is a pie chart aggregating all the errors made across subjects for each run, separated by voice and vibration:

Figure 8 - Pie chart showing all errors, separated by interaction type

Interesting things to note:

  • No wrong turns were made using voice
  • 3 out of 4 of the participants made a wrong turn using vibration

Eye Tracker

The eye tracker data from the voice and vibration runs of a single subject, for the left eye over time, are plotted below. Each point represents the gaze coordinate of the eye at that time; 0.5 means the eye was focused directly on the center of the screen. We chose to examine only the left-eye data for simplicity, under the assumption that the right eye (for which the eye tracker provided separate data) exhibited more or less the same behavior.

Figure 9 - Eye Tracker data for a single subject for the left eye

We interpret the spikes in the data as the eye suddenly looking away (more visible and obvious in the y coordinate), i.e., the user momentarily losing focus and looking away from the road. The vibration run shows slightly more spikes than the voice run (about 5 spikes in the y direction for voice vs. 7 for vibration), indicating that the user was slightly more distracted while driving. The other participants' eye tracking data showed similar patterns.
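The spike counts above were read off the plots by hand, but they could equally be computed from the logged coordinates. A minimal Python sketch, where the 0.15 threshold around the 0.5 screen center is our assumption:

    def count_spikes(ys, center=0.5, threshold=0.15):
        """Count separate excursions of the gaze coordinate away from a band
        around the screen center (0.5 = center, as in Figure 9)."""
        spikes, in_spike = 0, False
        for y in ys:
            if abs(y - center) > threshold:
                if not in_spike:
                    spikes += 1
                    in_spike = True
            else:
                in_spike = False
        return spikes

    # Example: a mostly centered trace with two brief glances away.
    print(count_spikes([0.5, 0.52, 0.48, 0.9, 0.88, 0.5, 0.51, 0.1, 0.5]))  # 2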

Had we included a trial run using a head-up display (with neither vibration nor voice), we hypothesize that the eye tracker data would have been much more erratic, with many more spikes, because of the constant need to refer to the display while driving instead of receiving eyes-free feedback from vibration or voice.

Nasa-TLX

All 4 test subjects reported higher scores for vibration in almost all categories (mental demand, physical demand, temporal demand, effort, and frustration), with performance being the exception. From a user experience standpoint, vibration was clearly the worse experience.

As an example of the score format, here are the results from one subject for the voice run:

Figure 10 - NASA-TLX Scores for one subject

User Comments

Users made the following comments about the vibration implementation:

  • It was sometimes hard to tell the left vibration from the right one because the whole steering wheel shook fairly uniformly regardless of the side. This could be improved by decreasing the vibration intensity so that the effect is more localized.
  • Users did not really look at the printed map we had created.


Future Work

This experiment deepened our curiosity about the topic and shed light on a few points for improvement. A higher-fidelity prototype, including more flexible driving simulation software with a GPS navigation feature, automatic triggering of the voice and vibration commands, and a better steering wheel set, might yield more accurate results. Although our experiment focused on different interaction types, we would also like to study the effects of a different type of GPS currently being introduced to the market: the head-up display (HUD).

Conclusion

We see that, on all our measurements, voice feedback was more efficient and more user friendly than vibration feedback. Users completed the course faster, made fewer errors, reported lower workload on the NASA-TLX feedback form, and kept their eyes on the road more.

However, two important caveats remain. First, the results do not account for the unfamiliarity of this novel form of driving by vibration rather than by voice. Second, voice feedback has implicit drawbacks, such as being unusable in a noisy setting (e.g., driving with many passengers or with loud music), and vibration feedback has implicit advantages (e.g., since the vibration occurs on both sides of the steering wheel, people are encouraged to keep both hands on the wheel while driving, which improves reaction time and promotes safety overall).

Another point to keep in mind is that vibration is a very novel form of GPS feedback, and users are likely much more comfortable with existing forms of interaction; a certain amount of acceptance and adaptation may therefore be required first.

Overall, we conclude that, with the current state of the art, voice feedback is the more effective option on our measurement scales.

References

[1] http://citycardriving.com/

[2] Sandra G. Hart. NASA-Task Load Index (NASA-TLX); 20 Years Later. NASA Ames Research Center.

[3] Julie Paxion, Edith Galy and Catherine Berthelon. Mental workload and driving. Frontiers in Psychology. 2014; 5: 1344.

[4] Håkan Alm and Lena Nilsson. The effects of a mobile telephone task on driver behaviour in a car following situation. Accident Analysis & Prevention, Volume 27, Issue 5, October 1995, Pages 707–715.

[5] Joanne L. Harbluk, Y. Ian Noy, Patricia L. Trbovich and Moshe Eizenman. An on-road assessment of cognitive distraction: Impacts on drivers' visual behavior and braking performance. Accident Analysis & Prevention, Volume 39, Issue 2, March 2007, Pages 372–379.