Monday, May 9, 2011

CHI ’11: Enhancing the Human Condition



May 9, 2011 8:45 AM PT (courtesy: Microsoft Research)





The Association for Computing Machinery’s Conference on Human Factors in Computing Systems (CHI 2011), being held May 7-12 in Vancouver, British Columbia, provides a showcase of the latest advances in human-computer interaction (HCI).

“The ongoing challenge,” says Desney S. Tan, CHI 2011 general conference chair and senior researcher at Microsoft Research Redmond, “is to make computing more accessible by integrating technology seamlessly into our everyday tasks, to understand and enhance the human condition like never before.”

Microsoft Research has a consistent record of support for CHI through sponsorships and research contributions. This year, Microsoft researchers authored or co-authored 40 conference papers and notes, approximately 10 percent of the total accepted.

This comes as no surprise to Tan.

“Microsoft Research’s goal,” he says, “is to further the state of the art in computer science and technology. As the realms of human and technology become more and more intertwined, Microsoft Research has focused more and more of our effort at the intersection of human and computer, and this is evident from our researchers’ level of participation.”

One unusual contribution comes from Bill Buxton, Microsoft Research principal researcher. Items from Buxton’s impressive accumulation of interactive devices are on display in an exhibit titled “The Future Revealed in the Past: Selections from Bill Buxton’s Collection of Interactive Devices.”

Effects of Community Size and Contact Rate in Synchronous Social Q&A, by Ryen White and Matthew Richardson of Microsoft Research Redmond and Yandong Liu of Carnegie Mellon University, received one of 13 best-paper awards during the conference, as did Your Noise is My Command: Sensing Gestures Using the Body as an Antenna by former Microsoft Research intern Gabe Cohn and visiting faculty member Shwetak Patel, both from the University of Washington, along with Dan Morris and Tan of Microsoft Research Redmond. One of two best-notes awards went to Interactive Generator: A Self-Powered Haptic Feedback Device, co-authored by Akash Badshah, of the Phillips Exeter Academy, a residential high school in Exeter, N.H.; Sidhant Gupta, Cohn, and Patel of the University of Washington; and Nicolas Villar and Steve Hodges of Microsoft Research Cambridge.


The Touch-Sensitive Home


Imagine being freed of physical attachments to input devices because your body is the input device. One approach is to put sensors on the body. The challenge then is to separate actual “signal” from “noise,” such as ambient electromagnetic interference, which overwhelms sensors and makes signal processing difficult. In Your Noise is My Command: Sensing Gestures Using the Body as an Antenna, the researchers turned the problem on its head.

“Can we use that electrical noise as a source of information about where a user is and what that user is doing?” Morris recalls asking. “These are the first experiments to assess whether this is feasible.”
The human body behaves as an antenna in the presence of noise radiated by power lines and appliances. By analyzing this noise, the entire home becomes an interaction surface.

The human body is literally an antenna, picking up signals while moving through the noisy electrical environment of a typical home. The researchers tested whether it is possible to identify signals with enough precision to tell what the user is touching and from where. To measure those signals, the researchers placed a simple sensor on each study participant and recorded the electrical signals collected by those sensors. Laptop computers carried in each person’s backpack collected data as the participants performed a series of “gestures,” such as touching spots on walls and appliances or moving through different rooms.

Next came determining whether analysis of this data provided the ability to distinguish between gestures and locations. It was possible in many cases to recognize participants’ actions based solely on the ambient noise picked up by their bodies. For example, once a participant “taught” the algorithms about the noise environment around a particular light switch by demonstrating gestures around the switch, it was possible to determine which of five spots near that switch the user was touching, with an accuracy of better than 90 percent. Similarly, researchers could identify in which room a participant was present at any given time with an accuracy exceeding 99 percent, because the electrical noise environment of each room is distinct.
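As a rough illustration of the kind of pipeline such a system could use (the paper does not publish its code; the sampling rate, spectral features, and classifier below are assumptions made for this sketch, not details from the paper), each fixed-length window of the body-borne noise signal can be reduced to a feature vector and matched against the examples the user demonstrated:

# Illustrative sketch, not the paper's actual pipeline: classify which of
# several known touch locations a user is contacting, using the spectrum of
# the ambient electrical noise picked up by a body-worn sensor.
import numpy as np
from sklearn.svm import SVC

SAMPLE_RATE = 9600  # Hz; hypothetical sampling rate of the body-worn sensor

def noise_features(window):
    """Log-magnitude spectrum of one fixed-length window of raw sensor samples."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    return np.log1p(spectrum)

def train_classifier(train_windows, train_labels):
    """train_windows: equal-length 1-D sample arrays recorded while the user
    touched known spots (e.g., five positions around a light switch);
    train_labels: which spot was touched in each window."""
    X = np.array([noise_features(w) for w in train_windows])
    return SVC(kernel="rbf").fit(X, train_labels)

def predict_spot(model, window):
    """Return the most likely touch location for a new window of noise."""
    return model.predict(noise_features(window).reshape(1, -1))[0]

In this sketch, predictions for new windows come only from the examples the user demonstrated earlier, mirroring the "teach the algorithm about a light switch" workflow described above.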

“It was quite a gratifying series of results,” Morris says. “Now, we are considering how we can package this up into a real-time, interactive system and what innovative scenarios we can enable when we turn your entire home into a touch-sensitive surface.”


The Patient as Medical Display Surface


Reports from the World Health Organization and the American Medical Association confirm that patient noncompliance is a major obstacle to successful medical outcomes in treatment of chronic conditions. Doctor-patient communication has been identified as one of the most important factors for improving compliance. The paper AnatOnMe: Facilitating Doctor-Patient Communication Using a Projection-Based Handheld Device focuses on understanding how lightweight, handheld projection technologies can be used to enhance doctor-patient communication during face-to-face exchanges in clinical settings.
Three presentation surfaces: a) body, b) model, and c) wall.

Focusing on physical therapy, co-authors Tao Ni of Virginia Tech—a former Microsoft Research Redmond intern—Amy K. Karlson of Microsoft Research Redmond, and Daniel Wigdor, formerly of Microsoft Research Redmond and now at the University of Toronto, spoke with doctors to understand general communication challenges and design requirements, then built and studied a handheld projection system that flexibly supports the key aspects of information exchange. Doctors can direct handheld projectors at walls or curtains to create an “anywhere” display, or at a patient to overlay useful medical information directly atop the appropriate portion of the anatomy for an augmented-reality view, or “virtual X-ray.”

Reviews and formal lab studies with physical therapists and patients established that handheld projections delivered high value and a more engaging, informative experience than what is traditionally available.

“This is an interesting new space,” Karlson says, “because, despite the prevalence of technology in many medical settings, technology has been relatively absent from face-to-face communication and education opportunities between doctors and patients.

“The coolest part was hearing the positive reactions from study participants when we projected medical imagery directly onto their arms and legs. We got, ‘Wow!’ ‘Cool!’ and ‘I feel like I am looking directly through my skin!’ There seems to be something quite compelling and unique about viewing medical imagery on one’s own body.”


Touch-Free Interactions in the Operating Room


The growth of image-guided procedures in surgical settings has led to an increased need to interact with digital images. In a collaboration with Lancaster University funded by Microsoft Research Connections, Rose Johnson of the Open University in Milton Keynes, U.K.; Kenton O’Hara, Abigail Sellen, and Antonio Criminisi of Microsoft Research Cambridge; and Claire Cousins of Addenbrooke’s Hospital in Cambridge, U.K., address the problem of enabling rich, flexible, but touch-free interaction with patient data in surgical settings. The resulting paper, Exploring the Potential for Touchless Interaction in Image-Guided Interventional Radiology, received a CHI 2011 Honorable Mention paper award.

During treatments such as interventional radiology, images are critical in guiding surgeons’ work; yet because of sterility issues, surgeons must avoid touching input devices such as mice or keyboards. They must navigate digital images “by proxy,” using other members of the surgical team to find the right image, pan, or zoom. This can be onerous and time-consuming.
This view toward an X-ray table from a computer area shows a surgical team and the complex collaborative environment that touch-free interactions must address.

The research team began fieldwork with the goal of understanding surgeons' working practices. The researchers are collaborating with surgical teams to develop and evaluate a system. Touchless-interaction solutions such as Kinect for Xbox 360 offer opportunities for surgeons to regain control of navigating through data. There are many challenges, though, in terms of enabling collaborative control of the interface, as well as achieving fluid engagement and disengagement with the system, because the system needs to know which gestures are “for the system” and which are not.
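One common way to handle the engagement problem the team describes is an explicit "clutch": the system interprets movement as commands only while a deliberate, unambiguous posture is held. The sketch below is an illustrative example of that idea, not the study's system; the joint inputs, thresholds, and chosen posture are assumptions, and the per-frame positions could come from any skeletal tracker, such as the one Kinect provides.

# Illustrative sketch, not the study's system: a simple engagement clutch for
# touchless control. Only while one hand is held well above the elbow for a
# short dwell time are subsequent movements treated as commands; everything
# else is ignored as ordinary activity around the table.
from dataclasses import dataclass

ENGAGE_DWELL_FRAMES = 15   # roughly 0.5 s at 30 fps; illustrative threshold
RAISE_MARGIN_M = 0.25      # hand must be this far above the elbow (metres)

@dataclass
class EngagementClutch:
    raised_frames: int = 0
    engaged: bool = False

    def update(self, hand_y: float, elbow_y: float) -> bool:
        """Feed per-frame joint heights from any skeletal tracker."""
        if hand_y > elbow_y + RAISE_MARGIN_M:
            self.raised_frames += 1
        else:
            self.raised_frames = 0
            self.engaged = False
        if self.raised_frames >= ENGAGE_DWELL_FRAMES:
            self.engaged = True
        return self.engaged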

“The most intriguing aspect of this project,” Sellen says, “is the potential to make a real impact on patient care and clinical outcome by reducing the time it takes to do complicated procedures and giving surgeons more control of the data they depend on. From a technical side, it is exciting to see where technologies like Kinect can realize their value outside of the gaming domain.”


Use Both Hands


Touch interfaces are great for impromptu casual interactions, but it is not easy to select a point precisely with your finger or to move an image without rotating it unless there are on-screen menus or handles. In the world of touch, though, such options are not desirable, because they introduce clutter. Rock & Rails: Extending Multi-touch Interactions with Shape Gestures to Enable Precise Spatial Manipulations, by Wigdor, Hrvoje Benko of Microsoft Research Redmond, and John Pella, Jarrod Lombardo, and Sarah Williams of Microsoft, proposes a solution by using recognized hand poses on the surface in combination with touch.

“Rock and Rails” is an extension of the touch-interaction vocabulary. It maintains the direct-touch input paradigm but enables users to make fluid, high degree-of-freedom manipulations while simultaneously providing easy mechanisms to increase precision, specify manipulation constraints, and avoid occlusions. The tool set provides mechanisms for positioning, isolating orientation, and scaling operations using system-recognized hand postures, while enabling traditional, simple, direct-touch manipulations.
Rock & Rails augments a) traditional direct-manipulation gestures with independently recognized hand postures used to restrict manipulations conducted with the other hand: b) rotate, c) resize, and d) 1-D scale. This enables fluid selection of degrees of freedom and, thus, rapid, high-precision manipulation of on-screen content.
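The core interaction idea, filtering the degrees of freedom of a direct-touch manipulation according to the posture recognized under the resting hand, can be sketched in a few lines. The posture names and the mapping below are illustrative placeholders, not the actual Rock & Rails recognizer or its vocabulary.

# Illustrative sketch, not the Rock & Rails implementation: the posture
# recognized under the non-dominant hand selects which degrees of freedom the
# other hand's direct-touch manipulation is allowed to change.
from dataclasses import dataclass

@dataclass
class Manipulation:
    dx: float = 0.0        # translation in surface units
    dy: float = 0.0
    rotation: float = 0.0  # radians
    scale: float = 1.0     # uniform resize factor
    scale_1d: float = 1.0  # stretch along one axis

def constrain(raw: Manipulation, posture: str) -> Manipulation:
    """Filter a raw two-finger manipulation according to the resting-hand posture."""
    if posture == "rotate_posture":   # illustrative: restrict to rotation only
        return Manipulation(rotation=raw.rotation)
    if posture == "resize_posture":   # illustrative: uniform scaling only
        return Manipulation(scale=raw.scale)
    if posture == "rail_posture":     # illustrative: 1-D scale along the rail only
        return Manipulation(scale_1d=raw.scale_1d)
    return raw                        # no posture: unconstrained direct manipulation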

The project was a collaborative effort between Microsoft Research and the Microsoft Surface team, so the researchers were able to test their work on real-world designers—the intended audience.

“One of the best moments of the project,” Benko recalls, “was when we realized our gestures could be made ‘persistent’ on the screen. We had transitioned from the model where you had to keep the pose of the hand in order to signal a particular option, to a more relaxed mode where the user could ‘create’ or ‘pin’ a proxy representation of a gesture. This allows users to perform all sorts of wacky combinations of operations without needing to hold the gesture for a long period of time.”

These are just a few of Microsoft Research’s current investigations into enhancing the ways people interact with computing devices.

“HCI is all about discovering and inventing technologies that deeply transform people’s lives,” Tan concludes. “Microsoft Research is committed to advancing the state of the art in human-computer interaction.”

Thanks to http://research.microsoft.com/en-us/news/features/chi2011-050911.aspx for this content.

