Social robots can reason and act while taking into account social and cultural structures, for instance by complying with social or ethical norms or values. As social robots are likely to become more common and advanced, and thus likely to interact with human beings in increasingly complex situations, ensuring safety in such situations will become very important. In this chapter, I investigate the safety of social robots, focusing on the idea that robots should be logically guaranteed to act in a certain way, here called logic-based inherent safety. A meta-logical limitation of a particular program for logic-based safety for ethical robots is shown. Afterwards, an empirical study is used to show that there is a clash between deontic reasoning and most formal deontic logics. I give an example of how this clash can cause problems in human-robot interaction. I conclude that deontic logics closer to natural-language reasoning are needed and that logic should play only a limited part in the overall safety architecture of a social robot, which should also be based on other principles of safe design.
Title of host publication: Philosophy and Engineering: Exploring Boundaries, Expanding Connections
Editors: Diane P. Michelfelder, Byron Newberry, Qin Zhu
Publication status: Published - 2017
Series: Philosophy of Engineering and Technology
- Social robots
- Deontic reasoning
- Human-robot interaction