Verifiable Machine Ethics in Changing Contexts

Louise A. Dennis, Martin Mose Bentzen, Felix Lindner, Michael Fisher

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Abstract

Many systems proposed for the implementation of ethical reasoning involve an encoding of user values as a set of rules or a model. We consider the question of how changes of context affect these encodings. We propose the use of a reasoning cycle, in which information about the ethical reasoner’s context is imported in a logical form, and we propose that context-specific aspects of an ethical encoding be prefaced by a guard formula. This guard formula should evaluate to true when the reasoner is in the appropriate context, and the relevant parts of the reasoner’s rule set or model should be updated accordingly. This architecture allows techniques for the model checking of agent-based autonomous systems to be used to verify that all contexts respect key stakeholder values. We implement this framework using the Hybrid Ethical Reasoning Agents (HERA) system and the Model Checking Agent Programming Languages (MCAPL) framework.
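The abstract's core mechanism can be illustrated with a minimal sketch. This is not the paper's HERA/MCAPL implementation; the names (`GuardedRules`, `active_rules`) and the care-robot contexts are hypothetical, and guards are modelled simply as Boolean predicates over a set of context propositions, re-evaluated on each reasoning cycle.

```python
# Minimal sketch of guard formulas gating context-specific ethical rules.
# All names here are illustrative, not taken from HERA or MCAPL.
from dataclasses import dataclass
from typing import Callable, FrozenSet, List

Context = FrozenSet[str]  # propositions currently true of the environment


@dataclass
class GuardedRules:
    guard: Callable[[Context], bool]  # guard formula over the context
    rules: List[str]                  # placeholder for an ethical rule set


def active_rules(encoding: List[GuardedRules], context: Context) -> List[str]:
    """Collect the rules whose guard evaluates to true in the current context."""
    selected: List[str] = []
    for block in encoding:
        if block.guard(context):
            selected.extend(block.rules)
    return selected


# Hypothetical example: a care robot whose obligations change between contexts.
encoding = [
    GuardedRules(guard=lambda c: "at_home" in c,
                 rules=["may_remind_medication"]),
    GuardedRules(guard=lambda c: "in_hospital" in c,
                 rules=["must_defer_to_staff"]),
]

print(active_rules(encoding, frozenset({"in_hospital"})))
# → ['must_defer_to_staff']
```

Because the guards are explicit, finite formulas, a model checker can in principle enumerate the reachable contexts and verify that every resulting rule set respects the stakeholder values, which is the verification angle the paper pursues.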
Original language: English
Title of host publication: Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI 2021)
Number of pages: 9
Publication date: 2021
Publication status: Published - 2021
Event: 35th AAAI Conference on Artificial Intelligence - Virtual Conference
Duration: 2 Feb 2021 → 9 Feb 2021
Conference number: 35

Conference

Conference: 35th AAAI Conference on Artificial Intelligence
Number: 35
Location: Virtual Conference
Period: 02/02/2021 → 09/02/2021
