Customizable game interface for disabled people

Project title: Customizable game interface for disabled people
Framework: Projets de spécialité
Main page: Projets de spécialité

Supervisor: Céline Coutrix

Students


Context

Introduction

Looking at the current ways of interacting with machines, and especially with video games, the industry seems to struggle to offer new tools for disabled people at scale.

This is a challenging task given the differences between each person's disability, which makes it hard to propose a single solution.

Throughout this project we wanted to offer a solution for people whose movements are limited to their head, hence the focus on facial recognition and on interacting with the game through facial gestures, in our case with the eyes and the mouth.


Addressed problem

Objective

The main goal of this study is to test this way of interacting with video games and measure its impact compared to the user's usual way of performing the same actions.

For this purpose we ask the user to play a 2D platform game requiring three different controls (go left, go right, jump), which lets us test different combinations, including triggering several actions with a single facial gesture.


Approach

Implementation of the prototype

Facial Recognition

First, we use … to detect the face and predict landmark points around the eyes and the mouth. We then measure the distance between two of these points and define a threshold beyond which an action is triggered (the mouth opening or an eye closing past a certain point).
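
The page leaves the library unnamed; the sketch below only illustrates the landmark-and-threshold idea, assuming dlib's 68-point face predictor and OpenCV for webcam capture. The library choice, landmark indices and threshold values are our assumptions, not the project's.

  import cv2
  import dlib

  # Assumed tools: dlib's frontal face detector and 68-point landmark model.
  detector = dlib.get_frontal_face_detector()
  predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

  MOUTH_THRESHOLD = 15  # pixel gap between inner lips; tuned per user in the GUI
  EYE_THRESHOLD = 3     # pixel gap below which an eye counts as closed

  cap = cv2.VideoCapture(0)
  while True:
      ok, frame = cap.read()
      if not ok:
          break
      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
      for face in detector(gray):
          points = predictor(gray, face)
          # In dlib's 68-point model, 62/66 are the inner upper/lower lip,
          # and 38/40 sit on the upper/lower lid of the subject's right eye.
          mouth_gap = abs(points.part(66).y - points.part(62).y)
          eye_gap = abs(points.part(40).y - points.part(38).y)
          if mouth_gap > MOUTH_THRESHOLD:
              print("mouth open -> trigger bound action")
          if eye_gap < EYE_THRESHOLD:
              print("right eye closed -> trigger bound action")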

Key Binding

Then we use … to map each activated gesture to a key on the keyboard (for example, closing the right eye for the right arrow, the left eye for the left arrow, and so on).
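
The key-binding tool is also left unnamed; here is a minimal sketch of the idea assuming pynput to synthesize key events. The gesture names and the mapping are illustrative.

  from pynput.keyboard import Controller, Key

  keyboard = Controller()

  # Illustrative mapping, matching the example above: gesture -> key.
  BINDINGS = {
      "right_eye_closed": Key.right,
      "left_eye_closed": Key.left,
      "mouth_open": Key.space,
  }

  def update_key(gesture, active):
      """Hold the bound key down while the gesture is active, release it otherwise."""
      key = BINDINGS.get(gesture)
      if key is None:
          return
      if active:
          keyboard.press(key)
      else:
          keyboard.release(key)

Pressing and releasing, rather than tapping, lets the game treat a held gesture like a held key, which matters for continuous movement.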

Graphical User Interface

Once this is done, we also use … to build the GUI, which is the way the user tunes the experiment. The user can choose which facial gesture is bound to which key and which threshold to use for it, so that the tool is usable by as many users as possible (each of whom may have different preferences depending on their disability).
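
The GUI toolkit is also left unnamed; the following Tkinter sketch shows the kind of tuning screen described, with one threshold slider and one key selector per facial gesture. The widget layout and value ranges are our own choices.

  import tkinter as tk
  from tkinter import ttk

  root = tk.Tk()
  root.title("Gesture tuning (sketch)")

  settings = {}  # gesture -> (threshold variable, key variable)

  for row, gesture in enumerate(["left eye", "right eye", "mouth"]):
      tk.Label(root, text=gesture).grid(row=row, column=0, sticky="w")
      threshold = tk.DoubleVar(value=0.25)
      tk.Scale(root, variable=threshold, from_=0.0, to=1.0, resolution=0.01,
               orient="horizontal").grid(row=row, column=1)
      key = tk.StringVar(value="Right")
      ttk.Combobox(root, textvariable=key,
                   values=["Left", "Right", "Up", "space"]).grid(row=row, column=2)
      settings[gesture] = (threshold, key)

  root.mainloop()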

Planning of experiment

For this experiment, we conduct the test with two different video games. The first is a platform game that serves mainly as a playground for trying different combinations of facial gestures.

For this purpose we set a default configuration for the facial gestures (sketched as code after the list):

  • right eye for right arrow
  • left eye for left arrow
  • opening mouth for jumping
  • both eyes for going right and jumping
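
Written as code, the default configuration above might look like the following mapping (the gesture names and key identifiers are our own); note that a single gesture can be bound to several keys at once, which is how the "both eyes" combination triggers going right and jumping together.

  # Illustrative encoding of the default configuration;
  # one gesture may be bound to several keys at once.
  DEFAULT_CONFIG = {
      "right_eye": ["Right"],
      "left_eye": ["Left"],
      "mouth_open": ["space"],          # jump
      "both_eyes": ["Right", "space"],  # go right and jump in one gesture
  }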

The idea is to let the user get used to the controls for a few minutes and possibly switch to another configuration depending on their preferences.

The next step is to have the user play a second video game that requires timing. The game is … and needs only two controls: one for accelerating and one for braking. Our guess is that left eye for braking and right eye for accelerating is a good configuration to try first; again, the user can change it depending on their preferences.

We let the user try to pass the first level for a few minutes and measure the results.

Setup

For the setup of this experiment, we only need a room with enough light for the facial recognition program to detect the face reliably. In our case the location changed depending on each person's availability, ranging from the library entrance to a lecture hall at Ensimag.

We use the computer's webcam, so we only needed a place to sit together and to position the camera so that the user can see the screen while the camera can see their face.

We then ask the participant to fill in a survey covering both their overall feelings about the experiment and what they would add to improve it. Since the prototype is meant for playing video games, it is particularly important that the system is not so cumbersome that the user cannot really get into the game, so the survey specifically asks about the entertainment aspect of the experiment.

(Photo: the first participant during the experiment.)

Results

Measured time

Sidenotes:

  • The third participant made a mistake in the third try
  • The second participant could not wink with his right eye, so we mapped his head's orientation to the right arrow instead
  • The first participant is not disabled and played the game for the sake of comparison


User feedback

Once the experiment was done, we gave the participants a survey to get their feedback on different aspects of the prototype.

On the positive side, we can summarize it like this:

  • It is comfortable in terms of use overall
  • It is simple to use
  • Its customizability makes it a great addition to their current interaction

As for the negative aspects:

  • There is some delay for the activation, making it a bit clunky
  • The available facial expressions are a bit limited as of now

Some answers proposed adding more facial expressions to make the system more versatile (for example, pouting the lips to activate a command, in addition to opening the mouth).

Overall they found it interesting, as it can open new perspectives on how people who cannot use their arms interact with their computer and, in particular, with video games.

Conclusion & Discussion

Looking at our results, participants had mixed feelings when comparing the prototype to their usual way of interacting with the machine. We can conclude that our interaction technique is not better than their current one, at least not in its present state.

They still found it interesting: the experiment was entertaining and the system easy to use overall, but it remains difficult for them to compare it directly with what they currently use.

What we could do to improve the prototype and address these issues:

  • Work on the delay so that activation is real-time or nearly so; for now we were limited by our hardware, and better equipment would reduce the latency and thus improve the experiment.
  • Support more facial expressions: our current tool can only predict points around the eyes and the mouth, but a program better optimized for this task could allow more freedom on that aspect.
  • Store a profile for each user (in JSON, XML or CSV) to make it easier to carry settings over from one person or one game to another; a minimal sketch follows this list.
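
The project does not fix a profile format; below is a minimal sketch of that last point, assuming a JSON layout invented here for illustration (the file name, field names and values are hypothetical).

  import json

  def save_profile(path, bindings, thresholds):
      """Persist one user's gesture-to-key mapping and thresholds."""
      with open(path, "w") as f:
          json.dump({"bindings": bindings, "thresholds": thresholds}, f, indent=2)

  def load_profile(path):
      """Read a profile back, returning (bindings, thresholds)."""
      with open(path) as f:
          profile = json.load(f)
      return profile["bindings"], profile["thresholds"]

  # Example: the default configuration, saved for one user.
  save_profile("user1.json",
               bindings={"right_eye": ["Right"], "left_eye": ["Left"],
                         "mouth_open": ["space"]},
               thresholds={"right_eye": 0.2, "left_eye": 0.2, "mouth_open": 0.3})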