The conversational ethical reasoning robot Immanuel is presented. Immanuel is capable of defending multiple ethical views on morally delicate situations. A study was conducted to evaluate the acceptance of Immanuel. The participants had a conversation with the robot on whether lying is permissible in a given situation. The robot first signaled uncertainty about whether lying is right or wrong in the situation, then disagreed with the participant’s view, and finally asked for justification. The results indicate that participants with a higher tendency toward utilitarian judgments are initially more certain about their view than participants with a higher tendency toward deontological judgments. These differences vanish by the end of the dialogue. Lying is defended and argued against by both utilitarian and deontologically oriented participants. The diversity of the reported arguments gives an idea of the variety of human moral judgment. Implications for the design and application of morally competent robots are discussed.
|Title of host publication||2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)|
|Publication status||Published - 2017|
|Event||26th IEEE International Symposium on Robot and Human Interactive Communication - Pestana Palace Hotel, Lisboa, Portugal|
Duration: 28 Aug 2017 → 1 Sep 2017
|Conference||26th IEEE International Symposium on Robot and Human Interactive Communication|
|Location||Pestana Palace Hotel|
|Period||28/08/2017 → 01/09/2017|