Thursday, September 8, 2011

Paper Reading #5: A Framework for Robust and Flexible Handling of Inputs with Uncertainty

A Framework for Robust and Flexible Handling of Inputs with Uncertainty


Julia Schwarz, Scott E. Hudson, Jennifer Mankoff, Andrew D. Wilson

Julia Schwarz is a computer science PhD student at Carnegie Mellon University focusing on Human-Computer Interaction.
Scott E. Hudson is a Human-Computer Interaction professor at Carnegie Mellon University.
Jennifer Mankoff is a Human-Computer Interaction professor at Carnegie Mellon University.
Andrew D. Wilson is a researcher at Microsoft who also co-authored "Pen + Touch = New Tools."

Summary


In this paper, the researchers detailed a new framework for handling inputs that carry some degree of uncertainty. As touch screens, gestures, and other "uncertain" inputs become more common, there is a need for a system that accurately infers what the user means to do. In their framework, a dispatcher sends an event notification to the interactors that have a high selection probability. Based on its selection score, each interactor decides whether to carry out its action. Sometimes multiple interactors respond at once; when that happens, the possible actions are all sent to a mediator, which decides either to run all of them or to choose one to run.
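
To make the idea concrete, here is a rough sketch in Python (my own illustration, not the authors' code) of how a dispatcher, a set of interactors with selection scores, and a simple mediator might fit together. The button layout, scoring function, and threshold are all invented for the example.

from dataclasses import dataclass

@dataclass
class TouchEvent:
    x: float
    y: float
    sigma: float  # spatial uncertainty of the touch point

class Button:
    def __init__(self, name, x, y, w, h, enabled=True):
        self.name, self.enabled = name, enabled
        self.x, self.y, self.w, self.h = x, y, w, h

    def selection_score(self, ev: TouchEvent) -> float:
        """Rough probability that this touch was meant for this button."""
        if not self.enabled:
            return 0.0  # disabled interactors never fire
        # Toy score: inverse distance from the button's center (a stand-in
        # for integrating the touch distribution over the button's area).
        cx, cy = self.x + self.w / 2, self.y + self.h / 2
        dist = ((ev.x - cx) ** 2 + (ev.y - cy) ** 2) ** 0.5
        return 1.0 / (1.0 + dist / ev.sigma)

def dispatch(event, interactors, threshold=0.3):
    """Send the event to every interactor with a meaningful score,
    then let a simple mediator pick the winner."""
    scored = [(b.selection_score(event), b) for b in interactors]
    candidates = [(s, b) for s, b in scored if s > threshold]
    if not candidates:
        return None
    # Mediator policy here: take the single most probable interactor.
    return max(candidates, key=lambda sb: sb[0])[1]

buttons = [Button("A", 0, 0, 10, 10),
           Button("B", 12, 0, 10, 10, enabled=False),
           Button("C", 24, 0, 10, 10)]
winner = dispatch(TouchEvent(x=11.0, y=5.0, sigma=4.0), buttons)
print(winner.name if winner else "no interactor fired")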

An example they used was three tiny buttons and a user's touch input. The touch fell mostly over two of the buttons, but one of those buttons was disabled, so its selection score was 0 and its interactor would not fire; the middle button was therefore correctly activated. They gave many other, similar examples.

One way they tested their system was a case study involving users with motor impairments. With a conventional click-based system, these users missed their targets approximately 14% of the time; with the probabilistic system, the researchers found that the users missed their targets only twice in total.



Discussion


The creation of a system that uses probabilistic input is without a doubt needed, and in some cases such systems are already used in consumer products. For example, the Apple iOS on-screen keyboard uses a probabilistic model of which key the user meant to hit, based not only on touch location but also on which letters are likely given the words being typed (letters that follow the previously typed letters more frequently might receive a higher selection score).
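
As a toy illustration of that kind of model (my own guess at the general approach, not Apple's actual implementation), the sketch below combines a Gaussian touch likelihood with a stand-in language-model prior. The key layout and probability values are invented.

import math

KEY_CENTERS = {"q": (0, 0), "w": (1, 0), "e": (2, 0)}  # toy 3-key layout

def touch_likelihood(tap, key, sigma=0.5):
    """Gaussian likelihood of the tap location given the intended key."""
    kx, ky = KEY_CENTERS[key]
    d2 = (tap[0] - kx) ** 2 + (tap[1] - ky) ** 2
    return math.exp(-d2 / (2 * sigma ** 2))

def letter_prior(prefix, key):
    """Stand-in language model: P(next letter | typed prefix)."""
    guesses = {"th": {"e": 0.8, "w": 0.05, "q": 0.01}}
    return guesses.get(prefix, {}).get(key, 0.1)

def most_likely_key(tap, prefix):
    scores = {k: touch_likelihood(tap, k) * letter_prior(prefix, k)
              for k in KEY_CENTERS}
    return max(scores, key=scores.get)

# A tap landing between "w" and "e" after typing "th" resolves to "e".
print(most_likely_key((1.5, 0.0), "th"))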

I definitely believe the researchers proved their hypothesis, not only by providing convincing examples but also by conducting an interesting case study. The wrong input was received only twice with the motor-impaired participants. Not only does this have great ramifications for those who aren't motor impaired, but it also has the potential to bring a better, more reliable system to those who have some motor impairment.

Paper Reading #4: Gestalt: Integrated Support for Implementation and Analysis in Machine Learning

Gestalt: Integrated Support for Implementation and Analysis in Machine Learning


Kayur Patel, Naomi Bancroft, Steven M. Drucker, James Fogarty, Andrew J. Ko, James A. Landay.

Kayur Patel is currently a computer science PhD student at the University of Washington.
Naomi Bancroft is currently a computer science undergraduate student at the University of Washington.
Steven M. Drucker is a researcher at Microsoft who focuses on Human-Computer Interaction.
James Fogarty is currently an assistant professor of computer science and engineering at the University of Washington.
Andrew J. Ko is an assistant professor in the Information School at the University of Washington.
James A. Landay is also a professor of computer science and engineering at the University of Washington.

This paper was presented at UIST 2010.

Summary


This paper details the system known as Gestalt, a novel system that helps developers build and work with machine learning. Developers often shy away from machine learning (although it can be extremely helpful) because implementing and testing it can be difficult.
The researchers claimed that Gestalt "allows developers to implement a classification pipeline, analyze data as it moves through that pipeline, and easily transition between implementation and analysis." Gestalt is meant to be used as a tool to aid developers in creating applications that use machine learning.

One of the main points the researchers discussed was Gestalt's ability to work with many different types of data sets. They used two running examples throughout the paper: the analysis of movie reviews and the recognition of gesture marks. Through a set of simple APIs, a developer can quickly plug in code specific to the data set and problem at hand.
Gestalt also lets developers quickly analyze input data. This can be used, for example, to determine why a certain gesture is failing to be recognized.
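
To give a feel for what such an API might look like, here is a hypothetical sketch of a pluggable classification pipeline that captures intermediate data for analysis. The class name, step names, and toy classifier are my own, not Gestalt's actual API.

class Pipeline:
    def __init__(self, steps):
        self.steps = steps          # list of (name, callable) pairs
        self.snapshots = {}         # data captured after each step

    def run(self, data):
        for name, step in self.steps:
            data = step(data)
            self.snapshots[name] = data   # keep intermediate results
        return data

    def inspect(self, name):
        """Look at the data as it existed after a given step, e.g. to see
        why a particular example was misclassified."""
        return self.snapshots[name]

# Example: a toy movie-review classifier with pluggable steps.
def tokenize(reviews):
    return [r.lower().split() for r in reviews]

def featurize(token_lists):
    return [{"n_words": len(t), "has_great": "great" in t} for t in token_lists]

def classify(features):
    return ["positive" if f["has_great"] else "negative" for f in features]

pipe = Pipeline([("tokenize", tokenize),
                 ("featurize", featurize),
                 ("classify", classify)])
print(pipe.run(["A great movie", "Dull and slow"]))
print(pipe.inspect("featurize"))   # examine intermediate data for debugging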

The researchers believed that, through Gestalt, developers would be able to find and fix bugs more quickly. To test Gestalt, they had users attempt to locate bugs in a machine learning pipeline using two different systems: a baseline system (a simple system that executed scripts) and the Gestalt system.
The study found that users located and fixed errors in the machine learning pipeline more easily and more efficiently with Gestalt than with the baseline.


Discussion


While this was certainly one of the more complicated papers so far, Gestalt was definitely an interesting system.
Machine learning is very interesting and useful for many different types of systems. A system like Gestalt would allow developers to become far more creative with some of their software implementations (implementing machine learning instead of, for example, some rigid set of error-checking code).

Testing a software development platform is always going to be difficult, but I wish they had included more information about, or a study of, developing a full, complete system with Gestalt (as opposed to tests limited to bug hunting).

Monday, September 5, 2011

Paper Reading #3: Pen + Touch = New Tools

Pen + Touch = New Tools


Ken Hinckley, Koji Yatani, Michel Pahud, Nicole Coddington, Jenny Rodenhouse, Andy Wilson, Hrvoje Benko, Bill Buxton.

Ken Hinckley is a researcher currently employed at Microsoft.
Koji Yatani is a graduate student working toward his PhD at the University of Toronto.
Michel Pahud is currently a senior researcher employed at Microsoft.
Nicole Coddington is a senior interaction designer at HTC who was previously employed at Microsoft.
Jenny Rodenhouse is currently employed at Microsoft as an experience designer for the Xbox system; she previously worked as an experience designer in the mobile division.
Andy Wilson is a senior researcher at Microsoft who focuses on Human Computer Interaction.
Hrvoje Benko is a researcher at Microsoft focusing on Adaptive Systems and Interaction.
Bill Buxton is a Principal Researcher at Microsoft in Toronto.

This paper was presented at UIST 2010.

Summary


In this paper, Microsoft researchers investigated how people use pen and paper and how that behavior could be applied to a digital device with the addition of touch. In the study, they observed people interacting with physical pages of paper and notebooks in various ways and developed an interface that allows for similar interaction. Their main idea was to let the user write with the pen while still interacting by touch. The defining characteristic, however, is that using pen and touch together enables new tools and features.

The first part of the paper discussed their design study with a physical paper notebook. They had people write in a notebook, cut out clippings, paste the clippings, and complete various other tasks. They noticed several common trends in this physical interaction, including the specific roles people assigned to the pen and to objects, the tendency to hold clippings temporarily, and the habit of holding pages while flipping, among others. With this in mind, they designed their interface to let users interact in a similar manner.

They used a Microsoft Surface device for the physical interface. Their guiding idea was that "the pen writes and touch manipulates." Touch could move objects around or zoom in on them, while using the pen together with touch enabled new tools. For example, the stapler allowed items to be grouped into stacks by selecting items and then touching them all. They added other tools such as the X-acto knife, Carbon Copy, and more.
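
A minimal sketch of that division of labor might look like the following (my own illustration, not the paper's implementation); the event fields and tool names are assumptions.

def handle_input(event, canvas):
    kind = event["kind"]
    if kind == "pen":
        canvas.append({"ink": event["stroke"]})           # the pen writes
    elif kind == "touch" and event["fingers"] == 1:
        canvas.append({"move": event["delta"]})           # touch manipulates
    elif kind == "touch" and event["fingers"] == 2:
        canvas.append({"zoom": event["scale"]})
    elif kind == "pen+touch":
        # Holding items with the hand while using the pen could trigger a
        # combined tool, e.g. grouping the held items into a stack.
        canvas.append({"tool": event["tool"], "items": event["held"]})

canvas = []
handle_input({"kind": "pen", "stroke": [(0, 0), (1, 1)]}, canvas)
handle_input({"kind": "touch", "fingers": 1, "delta": (5, 0)}, canvas)
handle_input({"kind": "pen+touch", "tool": "stapler", "held": ["a", "b"]}, canvas)
print(canvas)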

Overall, participants enjoyed the pen-plus-touch experience. Many of the users found some of the interactions so natural that they would use them without even thinking. The problem area came from specially "designed" gestures that users weren't able to figure out intuitively. The researchers concluded that the interface combining pen with touch was a success.

Discussion


The concept of using pen and touch to create a system that mimics natural physical interaction with paper is very interesting. One of the main barriers to taking notes on a tablet or laptop is the difficulty of translating thoughts or drawings into either typed text on the computer or scribbles on a tablet. A scrapbook or notebook program like the one they created would be very helpful and would probably do well in the market. Functional note-taking applications are already extremely popular; adding new natural gestures would increase their popularity even more.

The main fault of this paper is probably that the details of the study weren't very evident. It would have been helpful to understand in greater detail how users interacted with the device. The researchers claimed their device fulfilled its goal, but they never discussed the participants' responses in detail.

Thursday, September 1, 2011

Paper Reading #2: Hands-On Math


Hands-On Math: A page-based multi-touch and pen desktop for technical work and problem solving. 

Robert Zeleznik, Andrew Bragdon, Ferdi Adeputra, Hsu-Sheng Ko

Robert Zeleznik is currently the director of research in the Computer Graphics Group at Brown University.
Andrew Bragdon is a PhD student studying Computer Science at Brown University.
Ferdi Adeputra is a student at Brown University studying Computer Science.
Hsu-Sheng Ko is a student at Brown University studying Computer Science.

This paper was presented at UIST 2010.

Summary

In this paper, the researchers attempted to create a new way for users to interact with math problems without using paper, a whiteboard, or some kind of unintuitive computer program. Their hypothesis was that it is possible to create a device that combines the intuitive, free-flowing usage of paper with the computer-aided assistance of a CAS (Computer Algebra System). To build an interface like this, the researchers implemented several different components.

The first component they discussed was Page Management, which allows users to quickly and intuitively create and delete pages to draw on. The next was the Panning Bar: through a gesture, it allowed users to quickly sift through created pages. Folding was another useful feature; it allowed users to quickly hide or re-open large blocks of text and/or mathematical equations.

Gestures also played a big part in the interface. Under-the-rock menus were not initially visible until a user wanted to perform an operation on a figure. Touch-Activated Pen gestures (TAP gestures) allowed the user to use the light pen in combination with their hand to signal more complex, yet intuitive, gestures. PalmPrints let a user's idle hand control various operations with the tap of a finger. The FingerPose component allowed the system to differentiate between the tip and the pad of a finger.
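
I'd guess a FingerPose-style distinction could be made from the shape of the contact region; the toy heuristic below is entirely my own assumption, not the paper's method, using contact area and elongation.

def classify_contact(width_mm, height_mm, tip_area_max=80.0):
    # Assumption: a fingertip makes a small, roughly round contact, while
    # the pad of the finger makes a larger, more elongated one.
    area = width_mm * height_mm
    elongation = max(width_mm, height_mm) / max(1e-6, min(width_mm, height_mm))
    if area <= tip_area_max and elongation < 1.5:
        return "tip"
    return "pad"

print(classify_contact(8, 8))    # small round contact -> "tip"
print(classify_contact(10, 18))  # larger elongated contact -> "pad"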

The math components were also very interesting. Users can select individual numbers or even full terms and then, using gestures, quickly perform mathematical operations such as factoring by pulling numbers apart.
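
As a tiny illustration of the kind of CAS-backed operation such a gesture could trigger (my own example, not from the paper), factoring a selected term maps naturally onto a symbolic factor() call, here using the sympy library.

import sympy as sp

x = sp.symbols("x")
expr = 6 * x + 9          # the "selected" term on the page
print(sp.factor(expr))    # 3*(2*x + 3), the result of pulling it apart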

They tested their hypothesis and the viability of their system by having students from the university come in and try it. The students performed various tasks, from creating pages to graphing an equation.

They found that most people were able to pick up the system very quickly. For some of the gestures, participants had to be prompted either to complete an action a different way or to perform the gesture differently. Once they got the hang of it, though, the interface seemed easy to use. The researchers confirmed their hypothesis and concluded that a more robust version of Hands-On Math would be a useful tool.

Discussion


In my opinion, the researchers were really on to something with their idea of creating a math system that combines the benefits of paper and computer assistance. They certainly had some very creative and intuitive ideas that they attempted to implement.

My main concern is that some aspects seemed to be missing from the demo. First, it appeared there were several glitches they had to resolve during the tests. Second, a small on-screen tutorial or demo might have helped: many people attempted a gesture or movement in an awkward manner, which could have been avoided by letting them watch some sort of demo video before they started.

I could definitely see a piece of technology like this become popular in the future. I'd love to be a user of such a system. 

Wednesday, August 31, 2011

Paper Reading #1: Imaginary Interfaces: Spatial Interaction with Empty Hands and without Visual Feedback


Imaginary Interfaces: Spatial Interaction with Empty Hands and without Visual Feedback

Sean Gustafson, Daniel Bierwirth and Patrick Baudisch

Sean Gustafson is a PhD student studying at the Hasso Plattner Institute in Germany.
Daniel Bierwirth is a cofounder of a mobile software company. He holds a master's degree in IT-Systems Engineering from the Hasso Plattner Institute.
Patrick Baudisch is a professor at the Hasso Plattner Institute focusing on Human Computer Interaction.

This paper was presented at UIST 2010.

Summary

In the paper, the researchers pushed the bounds of conventional interfaces by creating an interface that relies on the user's short-term memory. This imaginary interface forces the user to remember, or figure out, the interactive space. Unlike other gesture or spatial interaction work, this approach gives the user no visual output. Their main hypothesis was that human visual memory would be enough to make such an interface feasible.
They conducted three user studies:

1) The user was asked to complete three different drawing tasks: Graffiti characters, repeated drawings, and multi-stroke drawings. After showing the user what to draw, the testers had the participant copy the drawings. Through this test, participants were able to build up their "visuospatial" memory because they were watching their hands. The results showed that in all three tasks, participants were able to recreate the drawings.

2) The participant was asked to draw a shape, rotate, and then attempt to point to a vertex of the shape they had just drawn. The researchers found that users could find the vertex again, even after rotating, when they used their other hand, held in an L-shape, as a reference point.

3) The third test had the participant attempt to locate a given coordinate using their left hand, held in an L-shape, as a reference point (a rough sketch of this reference-frame idea follows the list). The test showed that people had a harder time locating the point the farther it was from their reference point.
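
Here is a small sketch of the geometry I imagine behind the third study (my own assumptions about the setup, not the authors' analysis): a coordinate given relative to the L-shaped reference hand is mapped into world space, and any error in the imagined axes grows with distance from the reference corner.

import math

def hand_frame_to_world(origin, index_dir_deg, u, v):
    """Map a coordinate (u, v) given relative to the L-shaped hand
    (u along the index finger, v along the thumb) into world space."""
    a = math.radians(index_dir_deg)
    ix, iy = math.cos(a), math.sin(a)          # index-finger axis
    tx, ty = -iy, ix                           # thumb axis, 90 degrees away
    return (origin[0] + u * ix + v * tx,
            origin[1] + u * iy + v * ty)

# The farther the requested point is from the reference corner, the more any
# small angular error in the imagined axes translates into positional error.
print(hand_frame_to_world((0.0, 0.0), 0.0, u=10.0, v=5.0))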

From these results, they concluded that using a device like this would require users to make gestures while the mental image was still in short-term memory. They also concluded that using a mobile reference point greatly improved the accuracy of selecting or making annotations.
They also hoped to pursue imaginary interfaces even further by having the device work with user speech as well.

Discussion

I believe the researchers effectively demonstrated that an "imaginary interface" can provide viable input. Most of the users' input was very close to correct (choosing the right coordinate or drawing the correct shape). One important point the article failed to address is the possible need for negative feedback. For example, if a user tries a certain gesture, the device might either a) interpret the gesture as something different or b) throw an error.

In the first case, because there is no interface except through the "imagination," the user could end up doing something they don't mean to do. In the second case, the device wouldn't really have a way to alert the user that something went wrong.

This paper is interesting, though, because it opens up an entire realm of human interaction that I've never thought about. Using purely human short-term memory, the interface exists only in the mind. Something I'd find fascinating is the potential use of an interface like this combined with another widely used interface. The article discussed perhaps linking the imaginary interface with a cellphone, and I think that would be a perfect application. Small mobile interfaces severely limit what one can do, but with a gesture interface like the one described in the paper, many more interaction methods become possible.

On Computers

Aristotle's writing On Plants detailed not only why Aristotle believed plants to have a soul but also enumerated the purposes and various characteristics that make up plants. Through this writing, in many respects, it's easy to draw comparisons between plants and computers. For one, like plants, there are many, many different types of computers. Secondly, Aristotle declares that life is not immediately evident in plants. Similarly, life in computers isn't necessarily immediately evident but computers are extremely complex and "smart" machines. The origin of life and a soul is a glaring difference between the two, however.



Aristotle was known not only for being a philosopher but also for his application of science. In this writing, he steps through, in a very scientific manner, the various traits of plants and what causes motion and life within them. He theorizes about everything from why fruit develops the way it does to why certain soil is helpful to certain plants. He enumerates (almost) every type of plant that fits into the various niches. A similar practice can actually be done with computers as well. We have computers specifically designed for home users. We have computers designed for graphic designers. We have computers for students. Cell phones are computers. My coffee maker has a computer. These computers, like plants, are all very different from one another, yet they're all lumped together into one category. They all have different purposes and vastly different designs, but they are all connected in an artificial family tree.

The discussion of life is another way in which plants and computers are similar. Aristotle readily classifies plants as alive, and yet the modern human race is extremely hesitant to call computers alive. Computers perform extremely complex operations, perhaps even more complex than the operations plants perform. While computers may not necessarily grow physically, they are indeed capable of generating more data. Perhaps the most glaring difference between plant life and computer "life," however, is the computer's inability to reproduce. Computers can copy and duplicate data, but they cannot reproduce their physical makeup the way plants can.

The discussion of the soul is perhaps one of the most interesting arguments Aristotle presents. Whether computers (or plants) have a soul is largely a question of definitions. If something has a soul because it has life, then a deeper investigation into the computer's life must be held, since by that definition a computer might indeed have a soul.

Tuesday, August 30, 2011

About me (Blog #-1)

About moi:
  • poffdeluxe@tamu.edu
  • 3rd Year Junior
  • I'm taking this class to begin to form an understanding regarding efficient and interesting ways for humans to interact with computers.
  • I expect to either be running my own software related start-up or working for a successful company in a software engineering position.
  • The next big technological advancement will hopefully be an elegant replacement for the standard keyboard and mouse. I envision some sort of holographic touch technology sometime in the future.
  • If I could travel back in time, I'd probably like to meet Socrates or Plato, just to understand their line of reasoning and to better understand the social climate they came from.
  • My favorite shoes are probably my Chaco sandals. They're heavy duty and sturdy for long walks or hikes yet they're sandals so they let my feet breathe. 
  • Someday, I hope to learn German. My ancestors were German and I find German culture fascinating.
  • While my first name is technically "Bryant", I greatly prefer being called "Poff."