Andrew D. Wilson is a senior researcher at Microsoft Research who also co-authored another paper that was read for this class. Hrvoje Benko is also a researcher at Microsoft Research, focusing on Human-Computer Interaction.
This paper was presented at UIST 2010.
Summary
Hypothesis
The hypothesis in this paper is that multiple depth cameras and projectors can be combined to make an entire physical room space interactive.
Methods
To show that their hypothesis is feasible, the researchers implemented several components using multiple depth cameras and projectors.
The first component they discussed was simulated interactive surfaces. They wanted any surface (like a table or a wall) to become interactive by projecting data and objects onto it. The surface could then be interacted with through movements captured by the depth cameras.
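To make the sensing side concrete, here is a minimal Python sketch of depth-based touch detection, assuming an overhead depth camera and a pre-captured depth map of the empty surface. The thresholds and function names are my own illustration; the paper does not publish its implementation.

```python
import numpy as np

TOUCH_MIN_MM = 5    # ignore sensor noise right at the surface
TOUCH_MAX_MM = 30   # anything higher counts as hovering, not touching

def detect_touches(depth_frame: np.ndarray, surface_depth: np.ndarray) -> np.ndarray:
    """Return (row, col) pixels whose depth sits in a thin band above
    the pre-captured empty surface, i.e. likely fingertips touching it.

    Both arrays hold per-pixel distances from the camera in millimeters,
    so points above the surface read *closer* (smaller) than the surface.
    """
    height_above = surface_depth - depth_frame
    touch_mask = (height_above > TOUCH_MIN_MM) & (height_above < TOUCH_MAX_MM)
    return np.argwhere(touch_mask)
```

Clustering those pixels into blobs would then give individual touch points, much like a conventional touch screen driver.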
The next component was interactions between the user's body and the surfaces. One of these interactions consisted of the user touching an object on one surface and then touching a location on another surface; completing this action moves the selected object between the two locations. Another interaction they added was letting users pick up objects from a surface. When an object is picked up, an orange "ball" is projected onto the user's hand, allowing them to carry the object to another surface.
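As I understand the paper, the transfer only happens when the two touch points are physically connected through a single user's body, which the system can check from the depth data. A rough stand-in for that test, assuming a binary foreground mask extracted from the depth cameras, is a connected-component check; the SciPy labeling below is my illustration, not the authors' actual method.

```python
import numpy as np
from scipy import ndimage

def same_body(foreground_mask: np.ndarray, p1: tuple, p2: tuple) -> bool:
    """True if pixels p1 and p2 fall inside the same connected foreground
    blob (e.g. one user's arm-torso-arm chain spanning two surfaces)."""
    labels, _ = ndimage.label(foreground_mask)   # label each foreground blob
    return labels[p1] != 0 and labels[p1] == labels[p2]
```

If the two touches belong to different blobs (two different people), the object stays put, which prevents accidental transfers.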
The final component they added was spatial menus. A user can hold their hand over the menu's location for a few seconds to activate it. The menu is then projected onto their hand.
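The activation behaves like a dwell timer: the hand must stay in the menu's activation zone for the hold time before anything opens. Here is a small illustrative sketch; the two-second default and the class interface are assumptions on my part.

```python
import time

class DwellMenu:
    """Opens after the hand has stayed in the activation zone long enough."""

    def __init__(self, hold_seconds: float = 2.0):
        self.hold_seconds = hold_seconds
        self.entered_at = None   # when the hand first entered the zone
        self.active = False

    def update(self, hand_in_zone: bool, now: float = None) -> bool:
        """Feed one tracking frame; returns whether the menu is open."""
        now = time.monotonic() if now is None else now
        if not hand_in_zone:
            self.entered_at = None    # hand left: reset the dwell timer
            self.active = False
        elif self.entered_at is None:
            self.entered_at = now     # hand just entered the zone
        elif now - self.entered_at >= self.hold_seconds:
            self.active = True        # dwelled long enough: open the menu
        return self.active
```

Each frame, the tracker reports whether a hand is inside the zone; once update() returns True, the projector can start rendering the menu onto the hand.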
While there have been systems similar to LightSpace, none have combined all the features the researchers discussed, including the use of multiple depth cameras.
Their full test consisted of allowing users to interact with the LightSpace prototype at a demo event.
Results
They found that there were multiple situations in which LightSpace might fail. One was too many users in the space, causing slowdowns or interactions that were not handled well. Some interactions were also found to fail because a user accidentally occluded their hand from the cameras with their head or body.
Some users even developed new ways to interact with the LightSpace system.
Discussion
Systems like LightSpace, in my opinion, are part of the future. Being able to interact with objects in a full 3D space is something like what you see in a science fiction movie. However, I feel there are many advances the researchers could make to improve the system even further.
One is a better tracking system. They mentioned in the paper that they left out 3D hand tracking. There are new algorithms being released which allow for quick, easy 3D hand tracking. These could be added to make tracking easier and more efficient, for even better results.
I think for a system like this to catch on, camera and projection techniques will also have to improve. When a user gets in the way of the camera or projector, the interaction between the user and the system is disrupted.