Virtual chemistry lab


Abstract

The Virtual Chemistry Lab project provides the user with the experience of working in a chemistry lab by using virtual reality technology with haptic feedback, so that the user gets the benefits of learning through experimentation without the disadvantages of a physical chemistry lab.


Introduction

Effective teaching and learning of science involves seeing, handling, and manipulating real objects and materials. The knowledge that students attain in classrooms remains ineffectual unless they actually observe the processes and understand the relationship between action and reaction, which makes laboratories a fundamental part of science studies. Students take more interest in learning by performing and observing experiments in chemistry laboratories, which improves and consolidates their learning. However, physical chemistry laboratories can be very expensive and difficult to maintain.


Objective

Chemistry labs carry certain hazards if the wrong substances are mixed together, equipment is subject to wear and tear, schools have a limited stock of chemicals so a student can repeat an experiment only a very limited number of times, and some experiments are time consuming. To overcome the issues associated with physical chemistry labs, while still giving the user the experience gained through performing experiments, we decided to create a virtual chemistry lab with haptic feedback.

The laboratory is completely virtual, but objects such as test tubes, beakers and flasks are represented by actual physical objects, so the user gets haptic feedback while manipulating them.

Such a virtual laboratory can turn the disadvantages of a physical chemistry lab into advantages. Since the chemical substances are virtual, there is no risk of real hazards; the hazards can nevertheless be displayed virtually, so students learn that mixing these substances in reality would be dangerous. The user can repeat an experiment any number of times. Experiments that take days to give results can be sped up and observed quickly. And since the equipment, which is generally glassware in real labs, is represented here by physical props, those props can be made of sturdier materials that are less prone to wear and tear.


State of the Art

Chemist

Chemist is an app that provides the user with a pre-designed chemistry laboratory where experiments are performed by selecting chemicals and equipment using touch. Chemicals are poured into containers using a slider to measure the quantity: as soon as the desired quantity is reached and the slider is released, the chemical pours into the container. After the reaction is done, the corresponding chemical equation is displayed.

Labster

Labster uses the Oculus Rift to provide the user with the experience of a virtual chemistry lab, giving a feeling of immersion in a non-physical world and thus allowing students to engage with science experiments. The interactions used are basic mouse interactions.

3D VR lab by Sritrusta Sukaridhoto

This lab uses a virtual reality design and hand gestures for performing experiments. An Oculus Rift provides the 3D virtual reality effect and a Leap Motion sensor recognizes hand gestures. However, since it relies on virtual reality alone, it lacks haptic feedback.

These systems give learners the ability to navigate, select, control and manipulate objects in a virtual environment using hand gestures, but they provide no haptic feedback to the user.


Haptics comprises both force and tactile feedback. Force feedback simulates object hardness, weight, and inertia. Tactile feedback is used to give the user a feel of the virtual object surface's contact geometry, smoothness, slippage, and temperature. Therefore, it is important for the user to actually feel the objects that he/she is manipulating.


Our Approach

Since the technologies mentioned above offer either only touch-screen interaction or an interaction that lacks haptic feedback, we came up with an interaction that is as close as possible to natural work in a chemistry lab and provides proper haptic feedback.

The interaction we have designed is a pouring technique for putting chemicals into a container. In physical chemistry labs the most common gesture one can see is pouring chemicals into flasks, test tubes, beakers and so on. Pouring one substance into another to create a chemical reaction and produce a new substance is the most obvious and common action performed in a physical chemistry lab.

So our motivation for the new technique is derived from the actual pouring of a substance into a container, and this gesture is simulated in the virtual environment. The chemistry lab has a completely virtual design created using Unity, but the manipulation of containers to mix the chemicals is performed using actual physical objects. This provides the much-needed haptic feedback and keeps the interaction as close to natural as possible.
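As an illustration, the sketch below shows one way such a pour gesture could be recognised in Unity from the orientation of a tracked container. The PourGestureDetector class name and the 120-degree threshold are illustrative assumptions, not the project's actual code.

 using UnityEngine;
 
 // Illustrative sketch (not the project's actual code): decide whether a
 // tracked container is "pouring" from how far its up-axis is tilted away
 // from the world up-axis. The threshold value is an assumption.
 public class PourGestureDetector : MonoBehaviour
 {
     [SerializeField] private float pourAngleThreshold = 120f; // degrees, illustrative
 
     // True when the container (e.g. the tracked test tube) is tilted enough
     // for liquid to flow out of its opening.
     public bool IsPouring()
     {
         float tilt = Vector3.Angle(transform.up, Vector3.up);
         return tilt > pourAngleThreshold;
     }
 }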


Implementation

The user has to interact with two objects:

  • a test tube
  • a china dish

These objects can be seen in the figure below, where the left object represents a china dish and the right one a test tube. The chemicals needed for the reaction are already present in the two containers.

The tracked objects: a china dish and a test tube.

To get the necessary haptic feedback, these solid objects carry markers so they can be tracked with the OptiTrack motion capture system. We installed one camera in the left corner, one in the right corner and one under the ceiling, so that the objects are detected accurately and seen from every angle.

Optitrack system setup

To set up and calibrate the OptiTrack system we used Motive, a software platform designed to control motion capture systems. Each object in Motive is created with a unique user data ID; the uniqueness of this ID is ensured by the orientation and positioning of the object's markers. Tracking requires detecting at least three markers of a rigid body in order to recover its six degrees of freedom (6DOF).

OptiTrack Motive

All live data recorded by the OptiTrack cameras is sent to the Unity game engine by broadcasting frame data, and the objects' movements are then rendered in the scene. The design of the virtual lab is done in Unity. The user can manipulate the two objects by holding and rotating them.
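As a rough illustration of this data flow, the sketch below applies a streamed pose to a virtual object every frame. The IPoseSource interface and the TrackedContainer class are hypothetical stand-ins for whatever the actual streaming client (here, the OptiTrack Unity integration) provides; they are not the project's real API.

 using UnityEngine;
 
 // Hypothetical stand-in for the motion-capture streaming client.
 public interface IPoseSource
 {
     // Returns false if no fresh pose is available for this rigid body ID.
     bool TryGetPose(int rigidBodyId, out Vector3 position, out Quaternion rotation);
 }
 
 // Illustrative sketch: mirror the physical test tube / china dish in the scene.
 public class TrackedContainer : MonoBehaviour
 {
     [SerializeField] private int rigidBodyId = 1;  // ID assigned in Motive (assumed)
     public IPoseSource poseSource;                 // injected elsewhere
 
     void Update()
     {
         if (poseSource != null &&
             poseSource.TryGetPose(rigidBodyId, out var pos, out var rot))
         {
             // Move the virtual container to match the tracked physical one.
             transform.SetPositionAndRotation(pos, rot);
         }
     }
 }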

Unity design of the scene

Evaluation

Procedure

The process of evaluation is performed in three steps:

1. The first step involves measuring the time taken to perform a chemical reaction in the app and in our system.

We ask the user to perform a chemical reaction in the app and in our system.

The user is simply presented with two containers that already hold chemicals and is asked to perform the reaction; the user has to figure out the required action on their own. The image below shows the setup in the app:

Setup in the app for performing the reaction

The gesture required to perform the reaction in the app is moving the slider, shown in the image below. As soon as the user manages to perform the chemical reaction, the chemical equation corresponding to the reaction appears.

Setup in the app for performing the reaction

We will consider the pouring interaction in our system in two cases:

Case 1: When only one object (the test tube) is tracked.

This case applies when the user is working with only two containers of chemicals or only two pieces of equipment. The image below shows that only the test tube moves while the china dish remains fixed in place. This covers simple reactions where only two items need to be handled at a time and one of them does not need to move.

Setup in the system for performing the reaction when one object is moving


Case 2: When two objects are tracked.

This case applies when the user needs to perform a reaction with several containers of chemicals or several pieces of equipment. Tracking and positioning multiple objects can be complicated and may confuse the user, so this case checks whether the system remains usable when the user has to manipulate multiple pieces of equipment to perform complex reactions.

Setup in the system for performing the reaction when two objects are moving


2. The second step involves calculating the error rate when the user is asked to pour particular quantities of chemicals.

During actual reactions in a physical lab the user needs to be precise about the amounts of chemicals that are mixed together. This step is therefore needed to check the precision of our system, since we are trying to make it as close to natural as possible while eliminating the shortcomings of the physical setup.

This step involves calculating the error. The user is asked to pour four different amounts of liquid using the app and the two cases of the system: 2 ml, 6 ml, 4.5 ml and 12 ml. Each amount has to be poured in one go, in the app as well as in the system. The error rate is calculated using the formula: \text{error rate} = \frac{|\text{amount to be poured} - \text{actual amount poured by the user}|}{\text{amount to be poured}}
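For concreteness, the small helper below computes this error rate for a single trial. The class and method names are illustrative, and the numbers in the example comment are made up to show the arithmetic, not measured data.

 using System;
 
 // Illustrative sketch: error rate for one pouring trial, matching the formula above.
 static class PouringError
 {
     public static double ErrorRate(double amountToPourMl, double amountPouredMl)
     {
         return Math.Abs(amountToPourMl - amountPouredMl) / amountToPourMl;
     }
 }
 
 // Example: asked to pour 2 ml, the user pours 2.6 ml -> |2 - 2.6| / 2 = 0.3, i.e. 30 %.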

In the app the liquid starts flowing as soon as the user moves the slider; in our system it starts flowing when the user tilts the container, as can be seen in the image below:
Pouring of the chemical

The user is asked to pour the same four quantities in the two cases of our system, i.e. when one object moves and when two objects move. The times measured in our case are divided into two parts: positioning time and pouring time. Since the poured quantity is tied to the system time by a formula, and would change whenever that formula changes, the time needed to position the objects before pouring starts is measured and analysed separately. The formula used to calculate the poured quantity is:

 amount += (double) Time.deltaTime * 50 / 60;
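Read in context, this means the poured amount grows by Time.deltaTime * 50/60 millilitres every frame while pouring is active. The sketch below shows how such an accumulation could look in a Unity script; the PouredAmount class and its link to the earlier PourGestureDetector sketch are illustrative assumptions, not the project's actual implementation.

 using UnityEngine;
 
 // Illustrative sketch: accumulate the poured amount while the pour gesture is active.
 public class PouredAmount : MonoBehaviour
 {
     public double amount;                 // millilitres poured so far
     public PourGestureDetector detector;  // from the earlier sketch (assumed wiring)
 
     void Update()
     {
         if (detector != null && detector.IsPouring())
         {
             // Same rate as the formula above: 50/60 ml per second of pouring.
             amount += (double)Time.deltaTime * 50.0 / 60.0;
         }
     }
 }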

The amount being poured is shown in the top right corner, as can be seen in the image below:

Pouring amount seen on the top right corner

3. The last step of the evaluation involves finding out how intuitive our proposed interaction is. This is done using a questionnaire taken from reference 4 in the reference section. The questionnaire can be seen in the image below:

Questionnaire for evaluating intuitiveness


Result

In this section we will present the analysis of the evaluation performed. The main results obtained after the analysis are shown below:

1. The first result, concerning the time taken for the reaction in the app and in our system, can be observed below:

Time for the reaction in the app and our system

As the graph shows, for most users the time needed to perform the chemical reaction is considerably higher in the app than in either case of our system. This is because the slider technique was difficult for users to discover, whereas in our system none of the users had to think twice to find the required interaction, namely pouring. This supports the intuitiveness of our interaction.

No clear relationship between the times of the two system cases can be established yet, since the time depends both on how quickly the user found the correct position of the objects to start pouring and on the amount poured to perform the reaction, which the users chose freely by deciding when to stop.

2. The second result we got is from the questionnaire:

The average value for each question is calculated to analyse the users' answers and preferences. The table below shows the questions with their average values:

Average value for each question

According to the analysis of the questionnaire, we observed the following overall:

- The user was able to reach the goal effortlessly.

- The user easily knew what to do.

- The user found the interaction intuitive and it came naturally to them.

- The steps involved are easy to remember and can be easily recalled.

3. The comparison of the error rate in the app and in the two cases of our system is shown below:

Error rate for the reaction in the app and our system

The error rate in the app is clearly much higher than in the two cases of our system. This is because the slider is not very precise: in the first scenario, where the user had to pour 2 ml, most users could not control the slider motion and ended up pouring far more than the required quantity. Later the users got used to how much finger movement is needed to pour a certain quantity, so the error in the app decreases over the trials. The same reasoning explains why the error in our system is much lower: the user only had to perform the pouring action, watch the displayed amount and stop at the right point. The error rate for the second case (two moving objects) is lower than for the first case (one moving object) because by then the user was comfortable with the system and could easily position the containers and start pouring.

Other analyses of the different times involved are discussed below:

4. The graph below compares the positioning time in the two cases of our system:

Positioning time in two cases of the system

As expected, when the user has to interact with just one object the positioning time is lower than when the user has to manipulate and position two moving objects. The overall time nevertheless remains lower than in the app, as shown in the comparison above.

5. The time and error rate comparison for the app and the two cases of the system:

The image below shows the time and error rate in case of the app.

Time and error rate analysis in the app

It can be seen that both the error and the time are higher for the first quantities and decrease later as the user gets used to handling the slider.

The images below show the time and error rate for the two cases of the system: the image on the left is for one moving object and the one on the right is for two moving objects.

Time and error rate analysis in our system

The comparison shows that the time decreases in case 1, while in case 2 it fluctuates. This is because the positioning of the objects is somewhat random and depends on how the system is designed; with a more efficiently designed system the positioning would be very easy and the time might decrease in both cases.

The error rate decreases slightly but is mostly flat, since it depends on how accustomed the user is to the system.

The main point to observe from the two images above is that the time and error rate in the app are higher than in both cases of our system.

Conclusion

In a nutshell, the overall performance of the system is satisfactory. The system is easy to use, safe, and does not take much time to learn. We were able to support our claim that the proposed interaction is intuitive and requires no significant learning. Since the proposed pouring interaction is as close as possible to the actual gestures performed in a physical chemistry lab, the system is easy to use and keeps the benefits of a real chemistry lab while avoiding its disadvantages.

For the further development of the system:

- The user interface can be improved.

- Different chemicals can be added.

- More equipment can be added.

- The system can be made capable of performing more complex reactions.


References

1. Chemist (THIX): http://thix.co/chemist/

2. Labster: https://www.labster.com

3. Sukaridhoto et al., "A Design of 3D Virtual Reality Chemistry Lab with Hand Gesture Interaction for Education": https://www.scribd.com/document/269816751/A-Design-of-3D-Virtual-Reality-Chemistry-Lab-with-Hand-Gesture-Interaction-for-Education

4. Intuitive interaction questionnaire: http://intuitiveinteraction.net