Customizable game interface for disabled people
|Project title||Customizable game interface for disabled people|
|Context||Projets de spécialité|
|Main page||Projets de spécialité image|
Looking at the current ways of interacting with machines, and especially with video games, the industry still seems to struggle to offer tools for people with disabilities at scale.
Indeed, it is a challenging task: disabilities differ from one person to another, which makes it hard to propose a single solution.
Throughout this project we wanted to offer a solution for people whose mobility is limited to their head.
Hence the focus on facial recognition and on interacting with the game through facial gestures, in our case with the eyes and the mouth.
The main goal of this study is to test this way of interacting with video games and to evaluate its impact by comparing it with the user's usual way of performing the same actions.
For this purpose we ask the user to play a single 2D platform game requiring three different controls (go left, go right, jump), enabling us to test different combinations of gestures, and even to trigger several actions with a single facial gesture.
Implementation of the prototype
The first step is detecting facial landmarks. Facial landmarks localize and represent salient regions of the face, such as the eyes, eyebrows, nose, mouth, and jawline.
For this project we only use the eyes, the mouth, and the nose (a few landmarks, for computational efficiency). For this purpose we use the dlib library together with OpenCV.
Once we have the coordinates of the facial landmarks, we map the crossing of a threshold to keyboard or mouse events.
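As a concrete reference, assuming dlib's standard pretrained 68-point model (`shape_predictor_68_face_landmarks.dat`), the facial regions map to fixed index ranges; the sketch below hard-codes those ranges, and the actual dlib/OpenCV detection calls are shown as comments since they need the model file and a camera:

```python
# Index ranges of the standard 68-point dlib facial landmark layout.
LANDMARK_REGIONS = {
    "jaw":           range(0, 17),
    "right_eyebrow": range(17, 22),
    "left_eyebrow":  range(22, 27),
    "nose":          range(27, 36),
    "right_eye":     range(36, 42),
    "left_eye":      range(42, 48),
    "mouth":         range(48, 68),
}

# In the prototype the points come from dlib, roughly:
#   detector = dlib.get_frontal_face_detector()
#   predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
#   shape = predictor(gray_frame, detector(gray_frame)[0])
#   points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]

def region_points(points, region):
    """Select only the (x, y) pairs belonging to one facial region."""
    return [points[i] for i in LANDMARK_REGIONS[region]]
```

Keeping only the eye, mouth, and nose ranges is what makes the per-frame distance computations cheap.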
The second step is to calculate the distance between the detected landmark points. Once we have that distance, we compare it to a threshold to decide whether to activate a keyboard button.
For example, closing the eyes reduces the distance between the eye points, which triggers an action (i.e., pressing a keyboard button). The same applies to other movements such as opening the mouth, closing one eye, or any combination of these.
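One common way to turn those distances into a blink signal is the eye aspect ratio (EAR) over dlib's six eye points; this is a minimal sketch of that idea, and the threshold value is an illustrative assumption that the GUI would let each user tune:

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) points in dlib order p1..p6.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); the ratio drops toward 0
    when the eye closes, so crossing a threshold signals a blink."""
    a = math.dist(eye[1], eye[5])  # vertical distance
    b = math.dist(eye[2], eye[4])  # vertical distance
    c = math.dist(eye[0], eye[3])  # horizontal distance
    return (a + b) / (2.0 * c)

EAR_THRESHOLD = 0.2  # illustrative default; adjustable per user

def eye_closed(eye):
    return eye_aspect_ratio(eye) < EAR_THRESHOLD
```

A ratio rather than a raw pixel distance has the advantage of being roughly invariant to how far the user sits from the webcam.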
The output of facial landmark detection looks as follows:
We use the PyAutoGUI library to control the keyboard and mouse.
As mentioned above, once a threshold is set for the distance between the points, we assign the buttons to be activated (for example: closing the right eye for the right arrow, the left eye for the left arrow, and so on).
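The binding logic can be kept separate from the key-sending call. In this sketch the gesture names and default bindings are our example configuration (not fixed by the prototype), while `pyautogui.press` in the comment is the real PyAutoGUI call used to emit a key:

```python
# Gesture -> key bindings (example defaults; reconfigurable in the GUI).
BINDINGS = {
    "right_eye_closed": ["right"],
    "left_eye_closed":  ["left"],
    "mouth_open":       ["space"],
    "both_eyes_closed": ["right", "space"],  # one gesture, two actions
}

def keys_to_press(active_gestures):
    """Return the key names to send for the gestures detected this frame."""
    keys = []
    for gesture in active_gestures:
        keys.extend(BINDINGS.get(gesture, []))
    return keys

# In the prototype the keys are then sent with PyAutoGUI, e.g.:
#   import pyautogui
#   for key in keys_to_press(gestures):
#       pyautogui.press(key)
```

Because the dispatch is a plain table, reassigning a gesture from the GUI only means rewriting one dictionary entry.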
Graphical User Interface
For a user-friendly experience, we wanted the user to have as much freedom as possible when choosing which facial gesture maps to which button.
To do so, we built a graphical user interface where the user can select their preferences (thresholds, facial gestures, etc.) and assign them to the corresponding keys; we use the Qt library to build it.
The GUI loads with default settings; the user is then free to adjust their preferences with checkboxes and sliders. Once they are set, they can click the start button to resume the game.
This way the user can choose which facial gesture to link to which key, which threshold to set for it, and so on, all of which makes this tool usable by the largest possible number of users (knowing that each of them can have different preferences depending on their disability).
The graphical user interface looks as follows:
Planning of the experiment
For this experiment, we use a 2D platforming video game, serving both as a playground to test different combinations of facial gestures and as the setting for the actual experiment.
For this purpose we set a default configuration for the facial gestures:
- Right eye for right arrow
- Left eye for left arrow
- Opening mouth for jumping
- Both eyes for going right and jumping
- Potentially head orientation for one of the two directions
The idea is to let the user get used to the controls for a few minutes and possibly switch to another configuration depending on their preferences.
Once this is done we ask the user to collect all the bonuses on the first screen (by hitting the question-mark boxes from below) and then reach the second pipe. We ask them to do it three times in a row and measure their performance each time to see whether they improve along the way.
For the setup of this experiment, we only need a room with enough light for the facial recognition program to work well. In our case the location changed depending on each participant's availability, ranging from the library's entrance to the fablab at Ensimag.
We use the computer's webcam, so we only needed a place to sit together and to position the camera so that the user can see the screen while the camera can see their face.
We had three participants:
- The first participant is not disabled and played the game for the sake of comparison
- The second participant could not blink his right eye, so we linked the right arrow to his head orientation instead
- The third participant wore glasses, which sometimes made facial recognition harder; on his third run he got into a tricky situation by breaking a brick and consequently had to perform a difficult manoeuvre to finish the task
Once the experiment had been done, we gave them a survey to get their feedback on different aspects of the prototype.
Since the system is meant for playing video games, it matters that it is not so cumbersome that the user cannot really get into the game; asking about the entertainment value of the experiment is therefore particularly relevant in our case.
On the positive side, we can summarize it like this:
- It is simple to use
- Its customizability makes it a great addition to their current interaction
As for the negative aspects:
- There is some delay for the activation, making it a bit clunky
- The available facial expressions are a bit limited as of now
Some answers suggested adding specific facial expressions to make the system more versatile (for example, pouting the lips to activate a command in addition to opening the mouth, sticking out the tongue for directions, etc.).
Overall they found it interesting, considering that it can open new perspectives on how people who cannot use their arms interact with their computer or, in particular, with video games.
Conclusion & Discussion
Looking at our results, and comparing the prototype with their usual way of interacting with the machine, participants had mixed feelings. We can conclude that our interaction is not better than their current one, at least not in its present state.
They did find the experiment entertaining and the system easy to use overall, but it remains difficult for them to compare it directly with what they currently use. It is a good way of interacting in this situation, that is, playing a 2D platforming game, but it is not better than the one they use in everyday life.
What we could do to improve the current prototype and address the current issues:
- Work on the latency so that activation is real-time or nearly so; for now we were limited by our hardware, and with better equipment we could reach that goal and thus improve the experiment.
- The limited set of facial expressions comes from our current tool, which only predicts points around the eyes and the mouth; a better-optimized landmark model could allow more freedom on that aspect.
- We could keep a profile for each user (in JSON, XML, or CSV) to make it easier to carry settings from one person or one game to another.
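The per-user profile idea could be sketched with the standard `json` module; the field names and values below are illustrative assumptions, not the prototype's actual format:

```python
import json
from pathlib import Path

# Illustrative profile: per-gesture threshold and key binding.
DEFAULT_PROFILE = {
    "user": "default",
    "gestures": {
        "right_eye_closed": {"threshold": 0.2, "keys": ["right"]},
        "left_eye_closed":  {"threshold": 0.2, "keys": ["left"]},
        "mouth_open":       {"threshold": 0.6, "keys": ["space"]},
    },
}

def save_profile(profile, path):
    """Write a user's settings to disk so they survive between sessions."""
    Path(path).write_text(json.dumps(profile, indent=2))

def load_profile(path):
    """Reload a previously saved profile."""
    return json.loads(Path(path).read_text())
```

Switching participant or game would then amount to loading a different file instead of re-entering every slider value in the GUI.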