TY - JOUR
T1 - Experiences of a Speech-enabled Conversational Agent for the Self-report of Well-being among People Living with Affective Disorders
T2 - An In-the-Wild Study
AU - Maharjan, Raju
AU - Doherty, Kevin
AU - Rohani, Darius Adam
AU - Bækgaard, Per
AU - Bardram, Jakob E.
N1 - Publisher Copyright:
© 2022 Association for Computing Machinery.
PY - 2022/6
Y1 - 2022/6
N2 - The growing commercial success of smart speaker devices following recent advancements in speech recognition technology has surfaced new opportunities for collecting self-reported health and well-being data. Speech-enabled conversational agents (CAs) in particular, deployed in home environments using just such systems, may offer increasingly intuitive and engaging means of self-report. To date, however, few real-world studies have examined users' experiences of engaging in the self-report of mental health using such devices or the challenges of deploying these systems in the home context. With these aims in mind, this article recounts findings from a 4-week "in-the-wild" study during which 20 individuals with depression or bipolar disorder used a speech-enabled CA named "Sofia" to maintain a daily diary log, responding also to the World Health Organization-Five Well-Being Index (WHO-5) scale every 2 weeks. Thematic analysis of post-study interviews highlights actions taken by participants to overcome CAs' limitations, diverse personifications of a speech-enabled agent, and unique ways of valuing this system within users' personal and social circles. These findings serve as initial evidence for the potential of CAs to support the self-report of mental health and well-being, while highlighting the need to address outstanding technical limitations in addition to design challenges of conversational pattern matching, filling unmet interpersonal gaps, and the use of self-report CAs in the at-home social context. Based on these insights, we discuss implications for the future design of CAs to support the self-report of mental health and well-being.
AB - The growing commercial success of smart speaker devices following recent advancements in speech recognition technology has surfaced new opportunities for collecting self-reported health and well-being data. Speech-enabled conversational agents (CAs) in particular, deployed in home environments using just such systems, may offer increasingly intuitive and engaging means of self-report. To date, however, few real-world studies have examined users' experiences of engaging in the self-report of mental health using such devices or the challenges of deploying these systems in the home context. With these aims in mind, this article recounts findings from a 4-week "in-the-wild" study during which 20 individuals with depression or bipolar disorder used a speech-enabled CA named "Sofia" to maintain a daily diary log, responding also to the World Health Organization-Five Well-Being Index (WHO-5) scale every 2 weeks. Thematic analysis of post-study interviews highlights actions taken by participants to overcome CAs' limitations, diverse personifications of a speech-enabled agent, and unique ways of valuing this system within users' personal and social circles. These findings serve as initial evidence for the potential of CAs to support the self-report of mental health and well-being, while highlighting the need to address outstanding technical limitations in addition to design challenges of conversational pattern matching, filling unmet interpersonal gaps, and the use of self-report CAs in the at-home social context. Based on these insights, we discuss implications for the future design of CAs to support the self-report of mental health and well-being.
KW - Conversational agent
KW - Conversational user interface
KW - Mental health
KW - Self-reports
KW - Virtual assistant (VA)
KW - Virtual health assistant
KW - Voice user interface
KW - WHO-5
U2 - 10.1145/3484508
DO - 10.1145/3484508
M3 - Journal article
AN - SCOPUS:85135017535
SN - 2160-6455
VL - 12
JO - ACM Transactions on Interactive Intelligent Systems
JF - ACM Transactions on Interactive Intelligent Systems
IS - 2
M1 - 10
ER -