Thursday, November 3, 2011

Paper Reading #27: Sensing Cognitive Multitasking for a Brain-Based Adaptive User Interface

Erin Treacy Solovey, Francine Lalooses, Krysta Chauncey, Douglas Weaver, Margarita Parasi, Matthias Scheutz and Robert J.K. Jacob are students and faculty in the Computer Science department at Tufts University.
Angelo Sassaroli and Sergio Fantini represent the Biomedical Engineering department at Tufts University.
Paul Schermerhorn represents the Cognitive Science program at Indiana University.
Audrey Girouard represents the School of Computing at Queen's University.

This paper was presented at CHI 2011.

Summary


Hypothesis
In this paper, the researchers set out to show that it is possible to detect which multitasking state a user is in and to have an interface react to changes in that state.

Methods
The researchers first defined three scenarios that describe how the brain handles multitasking. The first is branching: the primary goal still requires attention, but a secondary goal must be attended to as well, so the user has to keep information about the primary goal active in mind while handling the secondary one. The second is dual-task, where the user must switch quickly and regularly between the primary task and the secondary task. The third is delay, where the secondary task is simply ignored while the user works on the primary task.

The researchers' main goal was to determine whether they could actually detect which of these states a user's mind was in.

To test this, they created an experimental task that required a human and a robot to work together. The robot simulated a Mars rover on an exploration mission: it would give status updates about a newly found rock or a change in location, and the participant would then have to respond to the robot with a command. There were three separate test runs, each designed to induce a different multitasking mode.

They also conducted a proof-of-concept user interface test in which the participant was given a task and, depending on the participant's detected multitasking mode, the robot would behave and interact with them differently.
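To make that idea concrete, here is a minimal sketch (not the paper's actual code) of how an adaptive interface might switch the robot's behavior based on the detected multitasking state. All of the names and behaviors below are hypothetical.

```python
# Hypothetical sketch: pick a robot interaction style from the detected state.
from enum import Enum

class MultitaskingState(Enum):
    BRANCHING = "branching"   # primary goal held in mind while attending to a secondary goal
    DUAL_TASK = "dual_task"   # rapid switching between primary and secondary tasks
    DELAY     = "delay"       # secondary task ignored until the primary task is done

def choose_robot_behavior(state: MultitaskingState) -> str:
    """Decide how the rover should deliver updates, given the operator's state."""
    if state is MultitaskingState.DELAY:
        # Operator is focused on the primary task: queue updates instead of interrupting.
        return "queue_status_updates"
    if state is MultitaskingState.DUAL_TASK:
        # Operator is already switching rapidly: keep interruptions short and simple.
        return "send_brief_updates"
    # Branching: operator can hold the primary goal while handling the update.
    return "send_full_updates"

print(choose_robot_behavior(MultitaskingState.DELAY))  # queue_status_updates
```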

Results


By analyzing the first experiment, the researchers found that measured hemoglobin levels in the brain differed across the three multitasking states. (The original post included three graphs here, one showing the signal for each multitasking state.)

Since these levels differ, multitasking modes can indeed be distinguished, and the mode currently active in the user's mind can be predicted.
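As a toy illustration of why differing signal patterns make the mode predictable (this is not the paper's actual analysis, and the feature values are made up): if each state has a distinct average signal, even a simple nearest-centroid rule can label a new trial.

```python
# Toy nearest-centroid classifier over hypothetical hemoglobin features.
import numpy as np

# Hypothetical mean signal features per state, as if estimated from training trials.
centroids = {
    "branching": np.array([0.8, 0.3]),
    "dual_task": np.array([0.4, 0.6]),
    "delay":     np.array([0.1, 0.1]),
}

def predict_state(trial_features: np.ndarray) -> str:
    """Label a trial with the state whose centroid is closest in feature space."""
    return min(centroids, key=lambda s: np.linalg.norm(trial_features - centroids[s]))

print(predict_state(np.array([0.75, 0.35])))  # -> "branching"
```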

Discussion

Again, a very fascinating paper on brain-computer interfaces. There are many potential uses for a system like this. One that I'd find very useful is an application that helps keep a user focused during tasks that require concentration, such as writing a paper or coding. For example, if I were writing code and needed to focus, the BCI would detect that I'm in the delay multitasking mode and would therefore block out certain notifications my computer currently sends me (new emails, new Facebook notifications, etc.).
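A rough sketch of that notification-blocking idea follows. This is purely hypothetical (none of it comes from the paper): it just holds back notifications while the detected mode is "delay" and delivers them once focus ends.

```python
# Hypothetical notification filter driven by a detected multitasking mode.
from dataclasses import dataclass, field
from typing import List

@dataclass
class NotificationFilter:
    focused: bool = False          # True while the BCI reports the "delay" state
    held: List[str] = field(default_factory=list)

    def update_state(self, detected_mode: str) -> None:
        """Update focus status from the latest detected multitasking mode."""
        self.focused = (detected_mode == "delay")

    def notify(self, message: str) -> None:
        if self.focused:
            self.held.append(message)          # suppress while the user is concentrating
        else:
            print(f"Notification: {message}")  # deliver immediately otherwise

    def flush(self) -> None:
        """Deliver everything that was held once focus ends."""
        for message in self.held:
            print(f"Notification: {message}")
        self.held.clear()

f = NotificationFilter()
f.update_state("delay")
f.notify("New email")        # held back while focused
f.update_state("dual_task")
f.flush()                    # delivered now
```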
