From 2017 to 2019, I conducted an exploratory study of interaction design for a socially assistive robot that supports people with developmental disabilities. While exploring the impact of visual elements in the robot’s visual interface and different aspects of the robot’s social dimension, I developed a series of prototypes and tested them through several user studies with three residents of varying cognitive capabilities at a local group home for people with developmental disabilities.
Presented here is a condensed brief of this project, showing my design strategies and research methods. It is also representative of my general research workflow.
* Due to confidentiality, I have altered and omitted certain content, the majority of which are screenshots.
Social robots have been developed to improve the quality of human life in recent years. In this work, I assess the effect of socially assistive robots’ roles, functions, and communication approaches in the context of a social agent providing service or companionship to users with developmental disabilities.
Our research team is developing a new type of sociable robot, the socially assistive robot (SAR), whose characteristics differ from those of traditional robots. SARs are expected to have enough social intelligence to go beyond companionship and cognitive assistance and become a functioning part of the environment and community. As the HCI researcher on this team, I aimed to design effective interaction that also enables the robot to provide affective support. This project is a joint effort of DDA (Developmental Disabilities Association) and JDQ Systems Inc.
I was the only UX researcher and designer in a team of four. I was responsible for the GUI prototypes, the experience strategy, and the social interaction models, and I collaborated with post-doc researchers in charge of engineering design and software development. I led the UX design and research under the supervision of the CEO of JDQ Systems Inc and my academic supervisor, produced UX deliverables (including GUI demos, games, and videos), and conducted on-site interviews and usability tests. I reported directly to the CEO and iterated on my designs after weekly scrum meetings.
We do not know how to design technical interactions for people with DD.
We are looking at a very specific user group who exhibit symptoms identical or similar to those of conditions such as dementia and Alzheimer’s disease, and whose perception and cognition are distinct from the general population’s. This user group was unfamiliar to us: we knew little about their needs, behaviour patterns, function levels, and daily lives.
We do not know how to optimize the social interaction.
For social robotics, there are multiple aspects we could start with, including verbal communication, tangible interaction, and visualization of information. Clearly, we cannot cover all of them in depth. Moreover, our participant demographics may limit the factors around which we can design user studies.
We do not know how to leverage robots’ unique features to increase users’ engagement.
Besides the physical form of SARs, mobility is another critical feature that distinguishes robots from devices like computers and tablets. We want to build more trust and acceptance into residents’ social interaction with the robot, but we were not sure whether the use of motion would create a more positive experience.
Design GUI prototypes to identify residents’ challenges and needs.
I designed GUI prototypes based on caregivers’ advice and feedback, and observed how residents behaved and reacted to our design. I selected and tested several fundamental design principles through prototypes, and then examined the effects and desired design of each aspect selected. After studying the effect of each visual or interactive factor, I drafted preliminary propositions for designing robust HRI for users with DD.
Investigate how residents experience the technical interactions that we designed.
HRI is far more than a display on a mobile humanoid agent. The essence of HRI, in the context of service or companionship, is to imitate a relation or interaction process similar to interpersonal interaction. Hence, I compared the effects of different types of interaction, explored the social dimension of HRI, and identified lessons we can learn from how residents and caregivers interact.
Explore how motion and social distance affect trust and acceptance in HRI.
We were curious about how space and the physical representation of the robot impact users’ experience. We learned that trust is a critical factor in HRI. However, the development of acceptance and trust is a long process beyond the scope of our study, so we aimed to explore the impetus and deterrents to engagement in HRI in the context of proxemics.
Developmental disabilities (DD) are a set of severe, permanent conditions that arise during the developmental period, impact day-to-day functioning (especially in physical, learning, language, or behaviour areas), and can persist throughout an individual’s lifetime.
Social robots are a type of intelligent agent with a certain degree of autonomy. They are developed to imitate social behaviours in order to provide users with service, assistance, or companionship. Socially assistive robots are a more specific type of social robot whose primary aim is to assist users.
One of our research goals requires us to understand the communication patterns of people with different degrees of developmental disabilities. To achieve these goals, we needed to collect data through interviews or literature reviews before starting the design process.
We then drafted several designs of the GUI and either tested them or asked about them in further interviews with caregivers. We tested not only the visual design but also aspects including sound, proximity, and overall user experience (UX). We conducted three user studies in one year, exploring the foregoing facets of interaction design.
For participatory design, we needed to learn from users’ experiences and participants’ feedback. Through this iterative process, we modified or recreated designs based on the results of the user studies. The concepts and principles I obtained along the way are critical to the interaction design of socially assistive robots.
In the following section, I illustrate, through user studies, the process by which I applied these findings and ideas to my design.
Focus Groups & Interviews
Our study started with an investigation of how caregivers communicate with their residents. Focus groups were one of the primary research methods in this study. Through this process, we analyzed the interaction patterns of both sides. The results of this preliminary study informed our research and design: we gained insight into our users’ needs and cooperated with caregivers to find practical solutions to the challenges that users and caregivers face every day.
We also conducted semi-structured interviews with five interviewees in total: three caregivers, one office manager, and an external supervisor and counsellor of this project. A general outline of interview questions was used, but other questions were generated spontaneously during the interviews and recorded as well. Each interview lasted about 50 minutes. During the interviews, I took notes and highlighted the vital points.
Creating the User Persona
We drafted a user persona based on a literature review and the on-site interviews. Developmental disabilities appear among all racial and ethnic groups and encompass intellectual disability, vision impairment, hearing loss, learning disability, cerebral palsy, autism spectrum disorder (ASD), and Attention-Deficit/Hyperactivity Disorder (ADHD).
It was estimated that in the United States there were 7 to 8 million people, or 3 percent of the population, having a developmental disability in 2005 (Ward, Nichols, & Freedman, 2010). In Canada, 1.1% of the national population aged 15 years and over have developmental disabilities as of 2017 (S. Morris et al., 2018).
For robots to be competent to function and engage users, there are four indispensable prerequisites: self-awareness, self-reliance, communication, and adaptability. A robot needs to recognize different scenarios and have fallbacks when it reaches certain limitations. Each phase of the entire operation process needs to be well thought out to assure safety and stability. We determined the following scenarios to consider in the user studies, starting with two simple but very common ones: reminding and notifying users of upcoming events and activities, and giving them prompts.
Design Ideation & Pilot Studies
We drafted a few simple GUI prototypes for A/B tests to know users' preferences on key design considerations. We conducted pilot studies and assessed their memorization of certain tasks displayed on the GUI. This approach was inspired by two established tests: Nonverbal Medical Symptom Validity Test (NV-MSVT) and Story Recall Test (Green, 2008). NV-MSVT, without any textual information on the screen, contains only images and takes 5 minutes to run.
In the interviews, caregivers pointed out that residents with higher cognitive capabilities were more likely to be attracted by animated graphic elements and could still understand the visual information. Caregivers were concerned that people with limited cognitive abilities, by contrast, might find the movement or transition of elements disturbing and confusing; for them, reading even a static image is already a challenge. For this reason, we needed to categorize users into two groups based on their function levels, as we did for symbols.
The layout of the GUI needs to be simplified further so that it would not cause any confusion or distraction for residents. Even a basic layout with two simple elements, the character (face) and the activity/event symbol, can create extra information for residents to perceive and process. Therefore, ideally, there should be only one element displayed at the same time. For residents with higher cognitive function levels, having two elements at the same time can also be acceptable as the transition between the character and other symbols can be smoother.
To further assess the impacts of design elements on HRI, we developed an additional GUI prototype that provides board game instructions. We controlled the robot to approach the resident and stay about 10 inches away. A Melissa & Doug shape sequence sorting set was jumbled and prepared on the table in front of the user. This simple educational toy was originally designed for children ages 3 to 7 to improve their reasoning and math skills. Because the GUI demo showed step-by-step instructions for the game, we could evaluate the effectiveness of the HRI with objective metrics such as completion results, and investigate which parts of the design have flaws based on the “roadblocks” the user encounters.
We implemented the GUI demo on the robot and used an Xbox Adaptive Controller to slowly drive the robot toward residents. We adjusted the rotation of Aether to make sure it faced participants directly. The demo was designed to start shortly after we finished these adjustments.
First, we conducted a user study (User Study A) to explore visual aspects of HRI through GUI. Aside from visual communication, I also investigated other facets such as verbal and physical interactions in User Study B, in which I further investigated the social dimensions of HRI in depth.
These user studies incorporated several qualitative and observational methods common in interaction design, including staff interviews, resident observations, and prototype tests that quickly simulated different types of robot interactions with the residents. We designed and executed User Studies A and B to explore visual and gaze-based communication with residents, with the objective of determining design requirements.
We observed the communication between caregivers and residents, then conducted two Wizard of Oz (WOZ) tests by remotely controlling a computer and a robot to play the GUI demo. We observed and recorded these two interaction procedures to assess participants’ experiences.
I video-recorded the study and audio-recorded the interviews with caregivers. I highlighted the significant frames of the study video (e.g. when the resident seemed alerted) in NVivo and noted my observations. I then open-coded these two sources of data.
Data Coding & Analysis
All video and audio recordings were compiled into a single video file showing each stage of the study from two different perspectives on one screen. This compiled file was then imported into NVivo to process and code the textual and multimedia data. Each key frame of the video was annotated with a concise description.
Then, as residents’ responses were classified, each highlighted frame was categorized by the form of communication, the type of information, and the kind of interaction. I transcribed the post-study interviews with caregivers and included them as a second source of data. I used open and axial coding to examine the data reflectively, identify relevant themes and topics, and then refine and relate them. This combination of inductive and deductive analysis helped me build patterns and find support in the data.
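Although the actual coding was done in NVivo, the categorization scheme itself can be sketched in code. The following Python snippet is a hypothetical illustration (the field names, code labels, and example frames are mine, not NVivo’s or the study’s); it tags each highlighted frame along the three dimensions and tallies code co-occurrences to surface candidate themes:

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class CodedFrame:
    """One highlighted video frame and the open codes assigned to it."""
    timestamp_s: float
    communication_form: str   # e.g. "visual", "verbal", "physical"
    information_type: str     # e.g. "reminder", "prompt", "instruction"
    interaction_kind: str     # e.g. "gaze", "touch", "approach"
    resident_response: str    # e.g. "engaged", "alerted", "confused"


# Illustrative frames, not real study data.
frames = [
    CodedFrame(12.4, "visual", "reminder", "gaze", "engaged"),
    CodedFrame(47.9, "verbal", "prompt", "approach", "alerted"),
    CodedFrame(63.1, "visual", "instruction", "gaze", "confused"),
]

# Tally responses per communication form to spot candidate themes.
theme_counts = Counter(
    (f.communication_form, f.resident_response) for f in frames
)
print(theme_counts[("visual", "engaged")])  # 1
```

In practice, tallies like these only point at where to look; the axial-coding step of relating and refining themes remains a manual, reflective process.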
Maintaining residents’ mental functions is a core mission for facilities such as DDA, and it requires regular and frequent training in cognition and memory. At the group home, caregivers use images of various degrees of abstraction to improve or maintain residents’ comprehension and memory of repeated events and objects in their daily lives. The goal of this training is to enable residents to make connections and reference proactively.
Findings on Visual Hierarchy
Visual hierarchy is the sequence in which users perceive information from a user interface. It starts from real objects and steps up to more abstract line drawings and text. This figure shows the order of different types of visual representation by difficulty of understanding: real objects are the simplest form of visual communication for people to understand, whereas abstract writing is the most complex. This hierarchy, however, does not hold for every individual; some people may find photos easier to understand than miniatures.
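This ordering can be expressed as a simple scale with a per-user cap on abstraction. The sketch below is purely illustrative (the level names and the `representations_for` helper are hypothetical, not part of our study instruments):

```python
# Visual representations ordered from easiest (concrete) to hardest (abstract).
VISUAL_HIERARCHY = [
    "real object",
    "miniature",
    "photo",
    "coloured drawing",
    "line drawing",
    "text",
]


def representations_for(max_level: str) -> list[str]:
    """Return every form a user can understand, up to their hardest level."""
    cutoff = VISUAL_HIERARCHY.index(max_level)
    return VISUAL_HIERARCHY[: cutoff + 1]


print(representations_for("photo"))  # ['real object', 'miniature', 'photo']
```

As noted above, the ordering is not universal (some individuals find photos easier than miniatures), so any such scale would need per-user adjustment rather than a fixed global ranking.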
When your screen transitions, the back colour changed. Keep it white or keep it one colour. As a visual thing, not only that’s distracting but also it affects their ability to focus on it. Because they’re already struggling to focus on something in the first place, once you start moving those things, they are really subtle to us. People with developmental disabilities tend to have a lot of sensory issues.
P1, Caregiver, Female
Findings on Interaction Design
When the social robot constructs a context for users, in addition to the visual aspects, the tasks presented by the robot need to be divided into baby steps: each step should refer to only a single action. In GUI design, this means there should be only one primary action on each screen. The real challenge in applying this principle is the short-term memory of our target users, who get lost immediately if there is no strong coherence and continuity in the graphic information. Hence, we need to leave enough time for users to process the information between each small step.
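The baby-steps principle behaves like a simple sequential presenter: one prompt on screen at a time, with a dwell period between steps. A minimal sketch, assuming hypothetical step contents and timing (neither is from the actual demo):

```python
import time

# Each step carries exactly one primary action, per the baby-steps principle.
STEPS = [
    {"symbol": "green_circle.png", "prompt": "Pick up the green circle"},
    {"symbol": "red_square.png", "prompt": "Place it on the board"},
]


def run_steps(steps, dwell_s=8.0, show=print, pause=time.sleep):
    """Show one prompt at a time, leaving dwell time to process each step."""
    for step in steps:
        show(step["prompt"])   # single primary action on screen
        pause(dwell_s)         # time for the resident to process and act


run_steps(STEPS, dwell_s=0.0)  # dwell shortened here for demonstration
```

The `show` and `pause` hooks are injected so the dwell time can be tuned per user (or skipped in tests) without changing the step logic, mirroring how residents with different function levels need different processing time.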
Identified residents’ challenges and needs through designing and testing GUI prototypes.
I gained several findings about designing a visual language for our user group. For instance, people with DD are very sensitive to colours. In the user studies, caregivers suggested that high-contrast visuals and consistent use of colour in the GUI help residents perceive and understand the information. It is advisable to keep the background of the GUI a single colour, simply white or a pale colour (e.g. orange), throughout the demo. A smooth, gradual colour transition is not a problem for the general population, but for our user group it causes confusion.
Explored how motion and social distance affect trust and acceptance in HRI.
My exploration of proxemics showed that the motion of social robots enriches their social behaviours and enhances users’ engagement by modulating users’ spatial perception. This suggests that motion design should be taken into consideration, as many existing social robots are static. Additionally, we found that trust and acceptance are critically significant in social interaction. I observed how residents reacted when their spatial perception was stimulated, assessed their experience when their personal space was being “violated”, and compared their responses (e.g. gaze) when the robot kept moving versus remaining stationary.
Evaluated how residents experience the technical interactions that we designed.
The introduction of technology creates possibilities for enhancing residents’ social lives, but it also brought a problem: disengagement due to a lack of trust. Many factors contributed to this shortcoming, including the physical embodiment and the “loose” communication. Residents knew it was simply a computer playing the demo, and the information in the demo was quite general; caregivers, with whom residents were very familiar, could communicate with residents in a personalized way. On the other hand, the robot can increase trust through its human-like features, such as motion.
I presented our iterative design process across the user studies, and this approach helped me achieve a user-centric design. Through three user studies, I gained insight into this very special population, who share many similarities with other groups such as people with Alzheimer’s disease, people with autism, and older adults. At the same time, we found what makes our user population special: their challenges, needs, and expectations. I also obtained preliminary verification of some of our assumptions about interaction design in the context of human-robot interaction.
As designers, we need to keep the overall objective of the workflow in mind while tweaking the details of each part. Moreover, a single user group may contain many individual disparities: even in our small group of three, we could notice evident divergence in cognitive abilities, skills, characteristics, and preferences. We should consider all levels and categories of users in the same group and create responsive designs for them. For example, some people with higher skills can handle a richer visual language with animation, so using animation would be a good way to communicate with them; for the rest, keeping the visual language simple is the better choice.
By conducting user studies, I have sensed the potential of socially assistive robots to improve our users’ quality of life. I have also learned much more about our users’ special needs and behaviour patterns from their caregivers. Caregivers’ insights are invaluable to our future designs as we iterate on the prototypes.
In this study, we made many propositions for HRI design for people with developmental disabilities. For example, the visual language needs to be concise and grounded in users’ real-life experience. For this user group, referring to anything outside the immediate context is ineffective because of their limited ability to cross-reference. Their memory functions are generally low, as are their cognitive skills. Therefore, any piece of information from the robot needs to be clear, simple, and repeated to reinforce users’ perception and acceptance. The visual information should be based on simple photos and graphics that residents have already learned in their regular training; introducing new graphical elements is a risky move given their low cognition.