Metrics of Success: Evaluating User Satisfaction in AI Chatbots

Cecilie Grace Møller, Ke En Ang, Maria de Lourdes Bongiovanni, Md Saifuddin Khalid, Jiayan Wu

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Abstract

The rapid advancement of Artificial Intelligence (AI), particularly through Large Language Models (LLMs), has catalysed a technological revolution, leading to the widespread adoption of AI-driven chatbots across industries. OpenAI’s customisable generative pre-trained transformer (GPT) offerings have popularised generative AI, enabling organisations of all sizes to implement chatbots for customer support. This development presents an opportunity for businesses to offer 24/7, cost-efficient customer service that can overcome the historical limitations of chatbots that lack a "human element." However, despite the proliferation of AI chatbots, there remains a crucial need to evaluate their effectiveness in meeting user needs and preferences for human-like interaction. Current service quality assessment tools, such as SERVQUAL and E-SERVQUAL, are unable to evaluate AI-specific capabilities like language intelligence and recognition. Existing research also lacks information on the factors that affect user satisfaction and the continued use of AI chatbots. Based on a mixed-methods study, this paper proposes a new instrument for measuring user satisfaction with AI chatbots, specifically for customer support roles. Using the Stanford five-step Design Thinking Process, this study devised a customer support AI chatbot evaluation instrument through a literature review, Cheatstorming, and SCAMPER techniques, followed by testing in a Danish company. The research employs Prentice and Nguyen’s three-stage scale development process to ensure content validity, reliability, and construct validity, addressing gaps in current scholarship and advancing understanding of AI chatbot user satisfaction.
Original language: English
Title of host publication: Proceedings of the 8th International Conference on Advances in Artificial Intelligence (ICAAI 2024)
Publisher: Association for Computing Machinery
Publication date: 2025
Pages: 168-173
ISBN (Electronic): 979-8-4007-1801-4
DOIs
Publication status: Published - 2025
Event: 8th International Conference on Advances in Artificial Intelligence - London, United Kingdom
Duration: 17 Oct 2024 - 19 Oct 2024

Conference

Conference: 8th International Conference on Advances in Artificial Intelligence
Country/Territory: United Kingdom
City: London
Period: 17/10/2024 - 19/10/2024

Keywords

  • Artificial intelligence
  • Chatbots
  • User satisfaction
  • Scale development
  • AI chatbot evaluation
