Designing Telepresence Robots for Use by People with Disabilities
People with disabilities as telepresence robot operators
A person’s quality of life is diminished when he or she can no longer participate in everyday activities with family and friends, which is often the case for people with special needs (e.g., seniors and people with disabilities) who are full-time residents at medical and healthcare facilities. We posit that people with special needs may benefit from using telepresence robots to engage in social activities. Telepresence robots provide interactive two-way audio and video communication and can be controlled independently, allowing the operator to use the robot to look around and explore a remote environment as he or she desires. However, to date, telepresence robots, their user interfaces, and their navigation behaviors have not been designed with people with special needs as the robot operators.
Over the course of three years, we designed and architected a social telepresence robot research platform based on VGo Communications’ VGo robot. Our work included designing a new processing and sensor system with three cameras to create a wide field of view, and a laser range finder to support autonomous navigation. The images from the three cameras were combined into a vertical panoramic video stream, which formed the foundation of our interface. Since the premise of a telepresence robot is that it is an embodiment of its user, we designed and implemented autonomous navigation behaviors that approximated a human’s as closely as possible, given the robot’s inability to translate laterally.
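The idea of combining the three camera feeds into one vertical panorama can be sketched as follows. This is a minimal illustration, not the platform's actual video pipeline: it assumes the frames arrive as equal-width NumPy arrays that have already been rectified and aligned upstream, and simply stacks them top to bottom.

```python
import numpy as np

def vertical_panorama(frames):
    """Stack per-camera frames (top, middle, bottom) into one
    vertical panoramic image. Assumes all frames share the same
    width and were rectified/aligned upstream."""
    widths = {f.shape[1] for f in frames}
    if len(widths) != 1:
        raise ValueError("all frames must have the same width")
    return np.vstack(frames)

# Three stand-in 480x640 RGB frames, one per camera.
frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(3)]
panorama = vertical_panorama(frames)
print(panorama.shape)  # (1440, 640, 3)
```

A real implementation would also blend the seams and compensate for lens distortion, but the interface only needs the stacked stream as its drawing surface.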
This research used an iterative, bottom-up, user-centered approach, drawing upon our assistive robotics experience. We conducted a series of user studies to inform the design of an augmented reality style user interface. Two formative evaluations (a focus group (n=5) and a follow-on “Wizard of Oz” experiment (n=12)) investigated how members of our target population would want to direct a telepresence robot in a remote environment. Based on these studies, we developed an augmented reality user interface that focuses primarily on human-human interaction and communication through video, providing appropriate support for semi-autonomous navigation behaviors. We present a case study (n=4) that demonstrates this research as a first critical step towards having our target population take the active role of the telepresence robot operator.
Katherine M. Tsui. PhD Thesis: The Development of Telepresence Robots for People with Disabilities. University of Massachusetts Lowell, Lowell, MA. April 2014. [High res 84MB, low res 14MB]
2014-07-06, Two Places at Once: A Robot's Eye View. Hosted at The Discovery Museum, Acton, MA. Training video (long version):
2011-09-12, Building Hugo
This research has been funded in part by the National Science Foundation (IIS-0546309, IIS-0905228). We would like to thank Tom Ryden of VGo Communications.
Katherine M. Tsui, Eric McCann, Amelia McHugh, Mikhail Medvedev, Holly A. Yanco, David Kontak, and Jill L. Drury. Towards Designing Telepresence Robot Navigation for People with Disabilities. To appear in the International Journal of Intelligent Computing and Cybernetics, Volume 7, Issue 3. Special Issue on Robotic Rehabilitation and Assistive Technologies.
"Hugo (an augmented VGo Communications' VGo telepresence robot) is being remotely driven and used to walk alongside a colleague, actively participating in a mobile conversation. The driver can be seen on Hugo's screen." Kate Tsui is the robot driver; next to her is Adam Norton, an educator and designer working in the UMass Lowell Robotics Lab. Photo credit: John Fertitta.
"The top half of Hugo (an augmented VGo Communications' VGo telepresence robot) features a light-up LED tie, used to indicate the robot's status. The driver can be seen on Hugo's screen." Photo credit: Adam Norton.
Office Video-Conferencing Robots
Telepresence robots at Google in Mountain View, CA (2010)
"For the last time dude, everyone thinks your robot is cool, but you don't need to talk to us through it when you’re in the next room."
Commercial telepresence robots can be thought of as embodied video conferencing on wheels. Several companies now produce and sell telepresence robots that provide interactive two-way audio and video. Additionally, these telepresence robots’ mobility provides the operator the means to explore as he or she desires. The companies have envisioned telepresence robots being used for a variety of applications, such as having ad-hoc office conversations, conducting patient rounds in hospitals, and touring manufacturing facilities.
During Summer 2010, we conducted a series of user studies of two telepresence robots (Anybots' QB and VGo Communications' VGo) in an office environment at Google in Mountain View, CA. One study focused on virtual teams in which a remote teammate (n=6) used a telepresence robot to attend his/her regularly scheduled team meetings. We found that people who used to be in the same building as their teammates and then moved to a different location had the best experiences recreating this closeness with their teams using telepresence robots.
This research has been funded in part by the National Science Foundation (IIS-0546309, IIS-0905228). We would like to thank Dr. Chris Uhlik of Google, and Anybots and VGo Communications for loaning us prototype robots.
Munjal Desai, Katherine M. Tsui, Holly A. Yanco, and Chris Uhlik. Essential Features of Telepresence Robots. Proceedings of the IEEE International Conference on Technologies for Practical Robot Applications, Woburn, MA, April 2011.
Visual control interface of a wheelchair-mounted robot arm for cognitively impaired wheelchair users (2006-2009)
Activities of daily life, such as picking up a telephone or drinking a cup of coffee, are taken for granted by most people. Typically, people with severe physical disabilities have a dedicated caregiver to help them, but they may desire more independence in personal activities. Fixed-base personal robot arms can assist with daily activities, but most of these devices are engineered for specific environments; while they succeed at their predefined tasks, they fail in the real world.
The Exact Dynamics' Manus ARM, a 6+2 degree-of-freedom wheelchair-mounted robot arm, is able to function in unstructured environments. However, it is awkwardly controlled through a menu hierarchy using a keypad, joystick, or single switch. These controls require a high level of cognitive capability and may not correlate well to the user's physical and cognitive abilities.
Our research investigates control of a wheelchair-mounted robot arm using native encoder values supplemented with two stereo vision camera systems (one on the "shoulder" and the other on the "wrist"). Our vision-based system draws inspiration from people's innate abilities to see and touch, like a pointing gesture to indicate "I want that."
Because the user is collocated with the robot arm, his/her view is the same as the shoulder camera's. The user selects the desired object from the shoulder camera's view using a touchscreen, mouse-emulating joystick, or single switch. The system then retrieves the object and returns it to the user.
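The core geometric step, mapping a touchscreen selection in the shoulder camera's image to a 3D grasp target, can be sketched as below. This is an illustrative pinhole-camera calculation, not the system's actual code; the intrinsic parameters (fx, fy, cx, cy) and the stereo depth value are hypothetical stand-ins.

```python
import numpy as np

def click_to_ray(u, v, fx, fy, cx, cy):
    """Convert a touchscreen click (u, v) in the shoulder camera's
    image into a unit viewing ray in the camera frame, using a
    pinhole model with intrinsics fx, fy (focal lengths in pixels)
    and cx, cy (principal point)."""
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return ray / np.linalg.norm(ray)

def target_point(ray, depth_z):
    """Scale the viewing ray so its z-component equals the stereo
    depth estimate, yielding a 3D target for the arm planner."""
    return ray * (depth_z / ray[2])

# Hypothetical click and calibration values for illustration.
ray = click_to_ray(400, 300, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
point = target_point(ray, depth_z=0.8)
print(np.round(point, 3))  # 3D grasp target in meters, camera frame
```

In the deployed system, the wrist camera would then refine this estimate as the gripper approaches the object.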
This research was conducted in conjunction with Dr. Aman Behal of the University of Central Florida, and David Kontak OTR/L of Crotched Mountain Rehabilitation Center. Funding provided in part by the National Science Foundation (IIS-0534364, IIS-0546309, IIS-0649736).
Katherine M. Tsui, Holly Yanco, David Feil-Seifer, and Maja Mataric. Methods for Evaluating Assistive Robotic Technology. Performance Evaluation and Benchmarking of Intelligent Systems. Edited by Raj Madhavan, Edward Tunstel, and Elena Messina. Springer, 2009.
"If you can't measure it, you can't claim it." ~Terry Fong, August 2010
In addition to my research projects, I have spent considerable time on how their success can be measured. This means (1) formulating hypotheses, (2) finding appropriate performance measures to support or disprove those hypotheses, (3) developing and running human-subjects experiments, and (4) performing statistical analysis of the data to determine whether the hypotheses are upheld.
Katherine M. Tsui, Munjal Desai, and Holly A. Yanco. Towards Measuring the Quality of Interaction: Communication through Telepresence Robots. Proceedings of the Performance Metrics for Intelligent Systems Workshop (PerMIS), College Park, Maryland, March 20-22, 2012. [PRESENTATION]
Katherine M. Tsui, Munjal Desai, Holly A. Yanco, Henriette Cramer, and Nicander Kemper. Measuring Attitudes Towards Telepresence Robots. International Journal of Intelligent Control and Systems, Special Issue on Quantifying the Performance of Intelligent Systems. Volume 16, Number 2, June 2011.
Katherine M. Tsui, Kareem Abu-Zahra, Renato Casipe, Jason M'Sadoques, and Jill L. Drury. Developing Heuristics for Assistive Robotics. Late-breaking paper at the 5th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI), Osaka, Japan, March 3-5, 2010.
Katherine M. Tsui, Holly Yanco, David Feil-Seifer, and Maja Mataric. Methods for Evaluating Assistive Robotic Technology. Chapter 3 of Performance Evaluation and Benchmarking of Intelligent Systems, pp 41-56. Edited by Raj Madhavan, Edward Tunstel, and Elena Messina. Springer, 2009.
Kate Tsui is an Assistive Robotics Researcher and a Postdoctoral Associate at Yale University under Dr. Brian Scassellati. From 2001 through 2006, Kate worked for Sun Microsystems in several software engineering roles, including development and quality assurance. In 2004, she graduated from the University of Massachusetts Lowell with her BS in computer science. In Fall 2006 and Spring 2007, Kate served as the teaching assistant for 91.301 Organization of Programming Languages; she was a guest lecturer for this class. Kate interned at Yale University in 2008 and Google in 2010. She received her MS in computer science in 2008, HCI certification in 2010, and PhD in computer science in 2014, all from UMass Lowell under Dr. Holly Yanco.
Kate specializes in robotics and human-robot interaction as an assistive technology researcher. Her research interests stem from the intersection of computer science, robotics, assistive technology, human-robot interaction, and human-computer interaction. Kate is passionate about using assistive robotic devices to increase the quality of life of people outside the general population. Over the last seven years, she has worked with clinicians and end users from several special populations, including children with autism spectrum disorder, teenagers and young adults with cerebral palsy, and adults and seniors with brain injury.
In her role at UMass Lowell's Robotics Lab, she and her collaborators at the University of Central Florida developed vision-based control of a wheelchair-mounted robotic arm, enabling users with cognitive impairments to pick up an object (2006 through 2009). Kate’s dissertation research focused on developing telepresence robots for people who have disabilities and are socially removed from their families and friends for medical reasons. From 2010 through 2014, she worked towards the vision of this target population serving as the telepresence robot operators: the robot would be located in a remote environment (such as an art gallery), and people with disabilities would be able to visit at a time of their choosing, as if they were there.
Kate has been involved with increasing participation in STEM (science, technology, engineering, and math) fields through educational outreach to K-16 students. Additionally, she founded the UMass Lowell Women in Computer Science group in September 2007 and served as its president through June 2009.
Computer Software | Greater New York City Area, US
2014 - Present
Postdoctoral Associate / Yale University
Research Assistant / University of Massachusetts Lowell
Research focus on visual control of a robotic arm for people who use power wheelchairs and have cognitive impairments, adaptive user prompting, and increasing awareness among medical and healthcare professionals of assistive and rehabilitation robotic technologies.
Project directed by Dr. Chris Uhlik of Engineering Research.
Visiting Research Assistant / Yale University
Social Robotics Laboratory, Dr. Brian Scassellati
Evaluate Pleo, Ugobe's dinosaur robot, as a platform for therapy for children with autism spectrum disorders.
MTS-1 / Sun Microsystems
Development. Implement change requests for Enterprise Storage Manager Base Applications 4.0 installer. Incorporate feature upgrades to existing lab management system for division-wide deployment.
Intern / Sun Microsystems
Quality Assurance. Automate regression testing for Availability Suite using home-grown Object-Oriented Perl harness.
Intern / Sun Microsystems
Lab Staff in Network Storage division. Maintain servers and storage. Network administration of seven class C subnets. Coordinate site transfer for lab consolidation. Interview and train additional lab staff. Back up code repository.
Quality Assurance. Manual tester for Availability Suite 3.2.
University of Massachusetts at Lowell
Computer Science, Robotics
Activities: Women in Computer Science, Founder (2007) and President (2008)