Interacting with a smart room


Abstract

The diffusion of smartphones, tablets, interactive surfaces and, more generally, of post-WIMP devices (beyond the traditional windows/icons/mouse/pointer paradigm) has radically changed the way we approach everyday problems. Even simple activities are now influenced by the new and powerful tools these technologies provide. We implemented a system called KISS (Kitchen Interactive System Service) to evaluate how different kinds of interaction perform on a user-centered problem concerning food preparation; in particular, we propose an analysis based on different visualizations of the data in order to solve concrete problems in the kitchen.

Introduction

State of the Art

Related work on this topic can be split into two main categories, corresponding to two user-centered problems:

1. Cooking a recipe

Cooking a recipe involves several domains from the user's point of view: the use of kitchen tools, experience in cooking, and knowledge of adequate recipes for a given ingredient. Several prototypes have been developed on this topic, such as the Interactive Kitchen by Whirlpool, where it is possible to cook collaboratively, search for a recipe, and be guided through its execution. The system is made of two projected interfaces: the first (on the wall) displays all the information about the recipe and the collaborative work, while the second (on the cooktop) creates an interactive surface that displays timers and additional information on the state of the recipe.
CES.jpg


2. Finding a recipe to cook

This second problem has been addressed in related work by numerous applications available on tablets, smartphones and computers in general. Websites such as allRecipes.com can find a recipe for given ingredients, as can Android and iOS applications for smartphones.

Allrec.png



In this work we mainly explore the second problem, aiming to enrich the existing solutions with some new features. One of them, for example, is the possibility to search for recipes based on items that are already present in the fridge; another is to provide the user with different visualizations for searching among the set of recipes and ingredients.

The approach

Methodology

1) Designing the system.

2) Task model -> designing user interaction.

3) Implementing the system.

4) Conducting experiments.

5) Evaluation.

6) Analysis of the results.

7) Conclusion.

Implementation of the system

The KISS system has been designed to be used during everyday cooking activities in a kitchen. We designed the system to appear to the user as a tablet application. The application has been implemented on an online development platform called Cloud9. In this workspace we set up an HTML/PHP website interacting with a MySQL database. We used a validated HTML5 template, Bootstrap, to create the skeleton of the application and ensure a standard basic design.

From an HCI point of view our system had to support the following use-cases:

  • Conduct experiments with participants on real user-centered problems;
  • Conduct experiments on different types of visualization;
  • Conduct experiments on different types of interaction.

Our first prototype of the system used real recipes taken from the web in order to obtain a prototype that could be evaluated in terms of design. The working prototype, instead, was populated using a Java program we implemented to reconfigure the web application so that it displays pseudo-randomized recipes generated from pseudo-randomly generated ingredients.

The Java generator takes as input:

  • A food hierarchy;
  • The number of ingredients per category;
  • The number of requested recipes;
  • A seed;

and produces as output (a minimal sketch of this generation step is given after the list of outputs):

  • A MySQL file -> used for the classical "table visualization";
  • A .json file -> for the "Zoomable Circle Packing" visualization;
  • A .csv file -> for the "Parallel coordinates" visualization;
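The listing below is a minimal Python sketch of the generation idea, not the actual Java implementation; the hierarchy, the field names and the simplified .csv/.json layouts are assumptions, and the MySQL output is omitted.

```python
import csv
import json
import random

# Hypothetical, heavily simplified food hierarchy: category -> candidate ingredients.
FOOD_HIERARCHY = {
    "vegetable": ["Yellow Onion", "Red Apple", "Carrot"],
    "dairy": ["Yellow Cheese", "Milk"],
    "meat": ["Raw Duck", "Chicken"],
}

def generate(ingredients_per_category, n_recipes, seed):
    rng = random.Random(seed)  # the seed makes every run reproducible

    # Pick pseudo-random ingredients from each category of the hierarchy.
    items = []
    for category, names in FOOD_HIERARCHY.items():
        for name in rng.sample(names, min(ingredients_per_category, len(names))):
            items.append({"name": name, "category": category, "size": rng.randint(1, 10)})

    # Build pseudo-random recipes carrying the attributes used by the filters.
    recipes = []
    for i in range(n_recipes):
        recipes.append({
            "name": "Recipe %d" % i,
            "available": rng.choice(["Yes", "No"]),
            "veget": rng.choice(["Yes", "No"]),
            "category": rng.choice(["Starter", "Main Dish", "Dessert", "Special Dish"]),
            "difficulty": rng.randint(0, 4),
            "time": rng.choice([20, 40, 60, 120]),
        })

    # The .csv feeds the parallel-coordinates view, the .json the circle-packing view.
    with open("recipes.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(recipes[0].keys()))
        writer.writeheader()
        writer.writerows(recipes)
    with open("items.json", "w") as f:
        json.dump({"name": "kitchen", "children": items}, f, indent=2)

generate(ingredients_per_category=2, n_recipes=10, seed=42)
```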

The following figure shows the first page of our application, with links to its two versions (A and B):


Main.png


  • Version A: the classical tabular representation of the items and recipes.
  • Version B: graphical visualizations of the items and recipes.

Version A

The following figure shows the recipe search interface:

Recipe Search

Recipe2.png


In the figure, "Any Recipe Category" can be set to "Appetizer", "Main Dish", "Dessert" or "Special Events". Two check boxes let us restrict the search to vegetarian recipes and to recipes whose ingredients are available in the kitchen. Instead of "I have all the time I need", we can choose "I have 20/40/60/120 minutes", and a difficulty threshold can be set with the "*" stars before launching the search.
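The real backend is PHP/MySQL and is not reproduced here; the Python sketch below only illustrates how these form fields could map onto a parameterized query. The table and column names are assumptions.

```python
# Hypothetical mapping of the Version A search form onto a SQL query
# (illustrative only; the actual backend is PHP/MySQL).
def build_query(category=None, vegetarian=False, available_only=False,
                max_minutes=None, max_difficulty=4):
    clauses, params = [], []
    if category:                        # "Any Recipe Category" leaves this unset
        clauses.append("category = %s")
        params.append(category)
    if vegetarian:                      # "Vegetarian Recipes" check box
        clauses.append("veget = 'Yes'")
    if available_only:                  # "ingredients available in the kitchen" check box
        clauses.append("available = 'Yes'")
    if max_minutes is not None:         # "I have 20/40/60/120 minutes"
        clauses.append("time <= %s")
        params.append(max_minutes)
    clauses.append("difficulty <= %s")  # "*" difficulty threshold
    params.append(max_difficulty)
    return "SELECT * FROM recipes WHERE " + " AND ".join(clauses), params

# Example: vegetarian main dishes doable in 40 minutes with what is in the kitchen.
sql, params = build_query(category="Main Dish", vegetarian=True,
                          available_only=True, max_minutes=40)
```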

If no recipe can be made because the required ingredients are not available in the kitchen, the application shows the following message:


NoRecipe.png

Item Visualization

The following figure shows the item visualization, where green represents vegetable items and white represents non-vegetable items.


VA Item.png


Version B

We used the D3.js library for the visualizations of items and recipes.


Recipe Search

We used a Parallel Coordinates visualization ([2]) for the recipe view in Version B. The Java generator produces the random kitchen recipes as explained before, and the recipe data are stored in a .csv file. Each column of this file is a filter. We have 5 different filters to visualize the recipes, which are the following (an illustration of the assumed file layout is sketched after the list):

  • available: whether the recipe is available or not (Yes/No)
  • veget: whether the recipe is vegetarian or not (Yes/No)
  • category: the category the recipe belongs to (Starter/Dessert/Main Dish/Special Dish)
  • difficulty: how difficult the recipe is to make, from 0 to 4
  • time: the time needed to make the recipe, so we can keep only recipes that fit the available time
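As an illustration, under the assumed column layout, a couple of hypothetical rows and the effect of a combined selection could look as follows; the actual brushing is performed client-side by the D3 parallel-coordinates code.

```python
import csv

# Hypothetical excerpt of the generated recipes.csv (layout assumed):
#   name,available,veget,category,difficulty,time
#   Recipe 0,Yes,No,Main Dish,2,40
#   Recipe 1,Yes,Yes,Dessert,1,20
def matches(row, veget="Yes", max_difficulty=2, max_time=60):
    """Does this recipe fall inside the brushed ranges of all five filters?"""
    return (row["available"] == "Yes"
            and row["veget"] == veget
            and int(row["difficulty"]) <= max_difficulty
            and int(row["time"]) <= max_time)

with open("recipes.csv", newline="") as f:
    selected = [row["name"] for row in csv.DictReader(f) if matches(row)]
print(selected)
```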

Recipe vis2.png


Item Visualization

We used the Zoomable Circle Packing visualization ([1]) for the item view in Version B.

The Java generator also produces the random kitchen ingredients/items as explained before; they are stored in a .json file. In this file, "size" represents the amount of the ingredient present in the kitchen, and "url" is the link to the page where the recipes based on the corresponding item can be found.
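A minimal sketch of the assumed .json structure is shown below; the field names "size" and "url" follow the description above, while the nesting and the example values are hypothetical.

```python
import json

# Hypothetical item hierarchy for the circle-packing view.
items = {
    "name": "kitchen",
    "children": [
        {"name": "vegetable", "children": [
            {"name": "Yellow Onion", "size": 3, "url": "recipes.php?item=yellow-onion"},
            {"name": "Red Apple", "size": 2, "url": "recipes.php?item=red-apple"},
        ]},
        {"name": "dairy", "children": [
            {"name": "Yellow Cheese", "size": 1, "url": "recipes.php?item=yellow-cheese"},
        ]},
    ],
}
with open("items.json", "w") as f:
    json.dump(items, f, indent=2)
```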


Item vis.png

In this visualization, we can zoom in and out to see the items present in the kitchen. Clicking on any circle redirects to the page where the recipes based on the corresponding item can be found.

Interaction

The possible ways to interact with our system are:

  • direct touch
  • gestures (with the help of Virtual Mouse)

When conducting the experiments we used direct touch, which is the most common kind of interaction on tablets. Moreover, since we wanted to see which visualization is more effective and comfortable for users, we needed the interaction itself to be precise.

However, in the kitchen, direct touch is not always the best way to interact with the system (oily hands, flour everywhere). We therefore implemented a simple interface called Virtual Mouse to let users interact through gestures.

Virtual Mouse

The purpose of the Virtual Mouse is to use computers and other devices with gestures rather than pointing and clicking a mouse or touching a display directly.

We use a webcam to capture the image, which we convert from the RGB to the HSV color space. Then, we threshold the image based on the color values of the tracked objects and find the contour of the resulting color region. The centroid (x, y coordinates) of the contour gives the position of the mouse.

Software used for Virtual Mouse implementation: OpenCV, Python, PyMouse, Pyautogui.

The user can interact with the system by moving a hand (the mouse cursor moves accordingly). For now, two colored objects are needed for full interaction: the first, pink, is used for positioning and moving the cursor, and the second, green, is used for clicking.
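The sketch below outlines this tracking loop with OpenCV and PyAutoGUI; the HSV ranges for the pink and green markers and the camera-to-screen mapping are assumptions that would need calibration on the real setup.

```python
import cv2
import pyautogui

# Rough HSV ranges for the two tracked objects (assumed values, to be calibrated).
PINK = ((145, 80, 80), (175, 255, 255))   # moves the cursor
GREEN = ((40, 80, 80), (80, 255, 255))    # triggers a click

def centroid(mask):
    """Return the (x, y) centroid of the largest contour in mask, or None."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

screen_w, screen_h = pyautogui.size()
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)         # webcam frames come in BGR
    h, w = hsv.shape[:2]

    pos = centroid(cv2.inRange(hsv, *PINK))
    if pos is not None:                                   # pink object -> move cursor
        pyautogui.moveTo(pos[0] * screen_w // w, pos[1] * screen_h // h)
    if centroid(cv2.inRange(hsv, *GREEN)) is not None:    # green object -> click
        pyautogui.click()

    cv2.imshow("Virtual Mouse", frame)
    if cv2.waitKey(30) & 0xFF == 27:                      # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```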

VM1.png

Experiments

Experiment Protocol

  1. 12 participants were asked to test the KISS system.
  2. Each participant is informed about the goal of the system and how it works.
  3. To mitigate the learning effect, participants start alternately with version A (the more classical interface) or version B (the less usual interface) of the system.
  4. The participant can play with the application for 2 minutes to get used to the interface.
  5. A list of scenarios [see Scenarios] is given to the participant (3 about recipes and 4 about items, to be executed on both versions of the system).
  6. The time is measured from the moment the user clicks on a button (Recipes or Items) to the moment a result is selected.
  7. Steps 4 to 6 are repeated for the other version.
  8. After the 7 scenarios, a brief interview is conducted with the participant to collect some verbatim comments (which version they prefer and why, and which kind of interaction: direct touch / Virtual Mouse / voice control).
  9. The participant is asked to fill in a Likert-scale questionnaire (SUS) to provide additional feedback on the application.


Scenarios

  • Scenarios about recipes:
    • Scenario 1 : "Your cooking skills are not very good. You want to cook a dinner. (You may go shopping.)"
    • Scenario 2 : "All shops are closed. You badly want to have a dessert now."
    • Scenario 3 : "You invited your vegetarian friends for a dinner. They are coming in an hour. What can you cook for them?"
  • Scenarios about items:
    • Scenario 4 : "Check if you have Yellow Onion"
    • Scenario 5 : "You want to eat a Red Apple dish; find it in the application."
    • Scenario 6 : "Check if you have Yellow Cheese"
    • Scenario 7 : "Your daughter loves Raw Duck recipes, find them in the application"

Interpretation and evaluation of the data

Quantitative Evaluation

In the following charts, we plot the average and the standard deviation of the measured time for each task of the experiment. We use the standard deviation as one of the quantitative measures because it indicates how far the individual measurements vary, or "deviate", from the mean: are they concentrated around the mean, or scattered far and wide? An analysis of these graphs is provided below.
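For reference, this is the computation applied per task and per version; the timings in the snippet are hypothetical placeholders, not the measured data.

```python
import statistics

# Hypothetical timings (seconds) for one task, one value per participant.
times_version_a = [34.2, 41.0, 28.5, 52.3, 38.7]

mean = statistics.mean(times_version_a)
std = statistics.stdev(times_version_a)   # sample standard deviation
print(f"task 1, version A: mean = {mean:.1f}s, std = {std:.1f}s")
```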

Charts1-3.png
Charts4-6.png
Search for a recipe: Questions 1 to 3

In tasks 1 and 2, the graphs show a more compact interval of time in version B than in version A. Task 3 instead seems to reverse this finding, since a similarly compact interval of time also appears in version A. We interpret this discrepancy as a learning effect that reduces the search time in version A once the participant understands how to apply the right filters to the search. It is also interesting to point out that in version A, since all the options are clearly visible at first sight, users spend less time refining the search. If we attribute this normalization to a learning effect, we can say that the two visualizations produce the same results for participants. This last conclusion should be explored further by conducting experiments with a larger set of participants.

Find an ingredient: Questions 4 to 7

Analyzing the data for tasks 4, 5 and 6, we can see that the time spent to find an item in version B is higher than in version A. This is due to the fact that in version A the items are visible at first sight, so the participant, depending on reading ability, can find the item even without looking at the category. In version B, instead, more time is needed to "navigate" through the categories before any readable ingredients appear. The last question, i.e. 7, probably benefits from a learning effect: the participant knows where the categories are placed in version B, so the search is quicker; on the other hand, the timing in version A does not really improve, since the learning effect there is not as large as in version B. It is also interesting to see that many people found it difficult to identify the category "dairy" for cheese, which interferes with the final result of question 6.

Following our protocol, we measured the amount of time spent by the user on each task. After plotting the results, we do not have clear evidence of a predominant success of one of the two versions over the other. Nevertheless, some interesting considerations can be drawn. In both experiments (recipes and items), the extrema of the average time and of its standard deviation are comparable regardless of the version (A or B). For this reason, it is not possible to separate the times into two clusters, one per version, and distinguish a "winning strategy". However, the graph for version B is much more compact and shows that the tasks can at least be achieved in less time.


Qualitative (System Usability Scale) evaluation

Questionnaire.png

Scoring the System Usability Scale (SUS); a sketch of the computation is given after the steps:

  1. For odd-numbered questions: subtract one from the user response.
  2. For even-numbered questions: subtract the user response from 5.
  3. This scales all values from 0 to 4 (with 4 being the most positive response).
  4. Add up the converted responses for each user and multiply the total by 2.5. This converts the range of possible values from 0 to 40 into 0 to 100.
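A minimal sketch of this scoring, using hypothetical example answers rather than the collected responses:

```python
# SUS scoring as described above; each participant gives ten answers on a 1-5 scale.
def sus_score(responses):
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses):
        if i % 2 == 0:           # odd-numbered question (1st, 3rd, ...)
            total += r - 1
        else:                    # even-numbered question
            total += 5 - r
    return total * 2.5           # scales the result to 0-100

participants = [
    [4, 2, 4, 1, 5, 2, 4, 2, 4, 2],   # hypothetical answers, participant 1
    [5, 1, 4, 2, 4, 1, 5, 2, 4, 1],   # hypothetical answers, participant 2
]
average = sum(sus_score(p) for p in participants) / len(participants)
print(f"average SUS score: {average:.1f}")
```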

Good SUS score: a SUS score above 68 is considered above average; anything below 68 is below average.

The SUS Score for our applications is as follows:

  • Version A : 68.64
  • Version B : 75.91

We can see that the SUS score for version B is higher than for version A, which indicates that users found version B more usable.

Verbatim

Just after the experiment, we asked the users testing our application which version they prefer and why, and which kind of interaction they would choose (direct touch / Virtual Mouse / voice control). The following are comments from different users:

"I prefer the version A for item and B for recipes, because I feel the usage is more appropriate but I can't choose between the both."

  • version A - why?

"It feels faster since I am used to this kind of visualization"

"I prefer version A because in B/recipes it's difficult to select intervals and in B/items I can't read all the items at first sight."

"I think classical visualization for items is better, when the amount of data is not big. It's quicker to search, because you see all of the items at once."

  • version B - why?

"I know the category, so I just choose what I want"

"Searching with the graphs is simpler. You see what you're looking for."

"Searching for recipes in a graph gives you more freedom. You can see what you're doing in a real time."

"It feels faster."

"It's easier to search for the wanted item in visual representation. It's faster because you don't have to search it the long list."

"Circled visualization of the items is better for a huge amount of data. The choice at every step is smaller, but on the other hand you have to click through everything."

"It's nice that you can select the parameters visually. It's more adequate when it comes to time."

"Easy to learn."

  • voice - would you use it?

"Nice, but it would be bizarre to use it... I prefer clicking."

"I don't know yet. I would like to try it."

"I don't trust systems controlled by voice. It often happens that you have to repeat a lot."

"I don't trust voice controlled systems, I'm not used to it. It would feel weird talking to a device."

Conclusion and Future Work

Both versions of the system have their advantages and flaws, and preferences differ from user to user. There is no significant difference in task completion time. Yet, participants feel that they perform tasks faster in version B of the KISS system. This may be a result of the real-time feedback given to the user at each step of his/her actions.

The KISS system is part of a broader idea. However, two months were not enough to implement and test all of our initial ideas. Hence, we focused on a smaller part of the designed system, which was necessary before developing the system further. We wanted to apply the knowledge gathered during the course in terms of visualization, gesture interaction and a multi-user approach.

Future work includes conducting more experiments to determine which version gives better results, or whether they are in fact similar.

We would also like to add more user interfaces, such as voice commands and more accurate gesture recognition for the Virtual Mouse.

Finally, we want to combine the two versions of the system into one, to eliminate the drawbacks and enhance the advantages of the two approaches we took. A system mixing versions A and B would be more effective and simpler to use.

References

[1] http://bl.ocks.org/mbostock/7607535

[2] http://exposedata.com/parallel/