The Limits of Logic-Based Inherent Safety of Social Robots

Martin Mose Bentzen

    Research output: Chapter in Book/Report/Conference proceeding › Book chapter › Research › Peer-reviewed


    Social robots can reason and act while taking into account social and cultural structures, for instance by complying with social or ethical norms or values. As social robots are likely to become more common and advanced, and thus likely to interact with human beings in increasingly complex situations, ensuring safety in such situations will become very important. In this chapter, I investigate the safety of social robots, focusing on the idea that robots should be logically guaranteed to act in a certain way, here called logic-based inherent safety. A meta-logical limitation of a particular program for logic-based safety for ethical robots is shown. Afterwards, an empirical study is used to show that there is a clash between deontic reasoning and most formal deontic logics. I give an example of how this clash can cause problems in human-robot interaction. I conclude that deontic logics closer to natural-language reasoning are needed and that logic should play only a limited part in the overall safety architecture of a social robot, which should also be based on other principles of safe design.
    Original language: English
    Title of host publication: Philosophy and Engineering: Exploring Boundaries, Expanding Connections
    Editors: Diane P. Michelfelder, Byron Newberry, Qin Zhu
    Publication date: 2017
    ISBN (Print): 978-3-319-45191-6
    ISBN (Electronic): 978-3-319-45193-0
    Publication status: Published - 2017
    Series: Philosophy of Engineering and Technology


    • Social robots
    • Safety
    • Logic
    • Deontic reasoning
    • Human-robot interaction


