Customizable game interface for disabled people

Version of 13 January 2020, 04:58

[Image: Project schedule.png]
Project title: Customizable game interface for disabled people
Framework: Projets de spécialité
Main page: Projets de spécialité image
Supervisor: Céline Coutrix

Looking at the current ways of interacting with machines, and with video games in particular, the industry seems to struggle to offer new tools for disabled people at scale.

Indeed, it is a challenging task: each person's disability is different, which makes it hard to propose a single solution that fits everyone.

Throughout this project we wanted to offer a solution for people whose movements are limited to their head. Hence the focus on facial recognition: interacting with the game through facial gestures, in our case with the eyes and the mouth.

== Addressed problem ==


The main goal of this study is to test this way of interacting with video games and to assess its impact by comparing it with the user's usual way of performing the same actions.

For this purpose we ask the user to play a single 2D platform game requiring three different controls (go left, go right, jump), which lets us test different combinations, and even trigger several actions with one facial gesture.


== Implementation for the prototype ==

'''Facial Recognition'''

First, we use … to recognise the face and build the model predicting landmark points around the eyes and the mouth. We then measure the distance between two such points and define a threshold at which an action is triggered (opening the mouth or closing the eyes beyond a certain point).
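The thresholding step above can be sketched as follows. This is a minimal illustration, not the actual prototype code: the landmark coordinates and the `is_activated` helper are hypothetical, standing in for whichever landmark predictor the prototype uses.

```python
from math import dist  # Euclidean distance between two 2D points (Python 3.8+)

def is_activated(top_point, bottom_point, threshold):
    """Return True when the opening (distance between the top and bottom
    landmark of an eye or the mouth, in pixels) exceeds the threshold."""
    return dist(top_point, bottom_point) > threshold

# Example: lips 18 px apart, activation threshold set to 12 px.
print(is_activated((120, 80), (120, 98), threshold=12))  # True
```

For eye closing, the same comparison can simply be inverted (activate when the distance falls below the threshold).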

'''Key Binding'''

Then we use … to link each activated gesture to a key on the keyboard (for example: closing the right eye presses the right arrow, closing the left eye presses the left arrow, etc.).
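A minimal sketch of such a binding table, using the example mapping from the text. The gesture names and key names are illustrative assumptions; the actual key injection into the operating system is left to whichever input library the prototype relies on.

```python
# Hypothetical gesture-to-key mapping (names are illustrative).
KEY_BINDINGS = {
    "right_eye_closed": "Right",  # right arrow: go right
    "left_eye_closed": "Left",    # left arrow: go left
    "mouth_open": "Up",           # jump
}

def keys_for_gestures(active_gestures):
    """Translate the currently active gestures into the key names to press,
    ignoring gestures the user has not bound to anything."""
    return [KEY_BINDINGS[g] for g in active_gestures if g in KEY_BINDINGS]

print(keys_for_gestures(["left_eye_closed", "mouth_open"]))  # ['Left', 'Up']
```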

'''Graphical User Interface'''

Once this is done, we also use … to build the GUI, which is how the user tunes the experiment: choosing which facial gesture maps to which key, and which threshold to set for it. The aim is to make the tool usable by as many users as possible, since each of them may have different preferences depending on their disability.
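The per-user settings the GUI exposes can be modelled as a small profile structure, sketched below under stated assumptions: the `GestureBinding` type and `make_profile` helper are hypothetical, chosen only to show how a gesture, its key, and its threshold travel together.

```python
from dataclasses import dataclass

@dataclass
class GestureBinding:
    """One user-tunable binding: which key a gesture triggers
    and at which opening distance (in pixels) it activates."""
    gesture: str
    key: str
    threshold: float

def make_profile(bindings):
    """Index the bindings by gesture name so the recognition loop
    can look up the per-user key and threshold in one step."""
    return {b.gesture: b for b in bindings}

profile = make_profile([
    GestureBinding("mouth_open", "Up", 12.0),
    GestureBinding("right_eye_closed", "Right", 4.0),
])
print(profile["mouth_open"].key)  # Up
```

Storing the threshold next to the key binding is what makes the tool adaptable: two users can bind the same gesture to different keys, or the same key to gestures of very different amplitudes.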


== Conclusion & Discussion ==