Abstract
The conversational ethical reasoning robot Immanuel is presented. Immanuel can defend multiple ethical views on morally delicate situations. A study was conducted to evaluate the robot's acceptance. Participants conversed with the robot about whether lying is permissible in a given situation. The robot first signaled uncertainty about whether lying is right or wrong in the situation, then disagreed with the participant's view, and finally asked for justification. The results indicate that participants with a higher tendency toward utilitarian judgments are initially more certain of their view than participants with a higher tendency toward deontological judgments. These differences vanish by the end of the dialogue. Lying is both defended and argued against by utilitarian as well as deontologically oriented participants. The diversity of the reported arguments gives an idea of the variety of human moral judgment. Implications for the design and application of morally competent robots are discussed.
Original language | English |
---|---|
Title of host publication | 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) |
Publisher | IEEE |
Publication date | 2017 |
Pages | 1445-1450 |
DOIs | |
Publication status | Published - 2017 |
Event | 2017 26th IEEE International Symposium on Robot and Human Interactive Communication, Pestana Palace Hotel, Lisboa, Portugal. Duration: 28 Aug 2017 → 1 Sept 2017. Conference number: 26. https://ieeexplore.ieee.org/xpl/conhome/8116593/proceeding |
Conference
Conference | 2017 26th IEEE International Symposium on Robot and Human Interactive Communication |
---|---|
Number | 26 |
Location | Pestana Palace Hotel |
Country/Territory | Portugal |
City | Lisboa |
Period | 28/08/2017 → 01/09/2017 |
Internet address | https://ieeexplore.ieee.org/xpl/conhome/8116593/proceeding |