In the future, autonomous robotic systems are expected to be common, not only in factories and on our roads but also in domestic and health-care settings. This new generation of intelligent machines will be required to act autonomously, yet function as part of our society. Societally integrated machines will encounter not just safety issues but ethical issues as well. Philosophy has produced a large body of work on a range of ethical theories, and there is an enthusiastic ongoing media debate on the relevance of ethical machines and on building autonomous systems ethically. This special issue focuses on the challenges of building ethical behaviour into autonomous systems. Key aspects of addressing these challenges are the explainability and verifiability of the implemented approach, the precise and unambiguous formalisation of requirements for ethical behaviour, and the special challenges arising from implementing ethical behaviour in systems with adaptive components, especially learning.
This Special Issue aims to collect high-quality research in this area, combining robot/machine ethics, verification/logic, ethical challenges in machine learning, and AI and law.
Topics of interest include, but are not limited to:
- new formalisms (logics, algebras, argumentation, case-based reasoning, etc.) capturing individual and collective ethics, accountability, etc.
- formal modelling techniques for ethical/moral principles
- engineering autonomous systems to incorporate ethical principles
- verification and validation of ethical behaviour
- mechanisms for ethical choice
- explainable ethical behaviour solutions
- human-computer interaction solutions related to machine ethics
- normative multi-agent systems, including organisations, norms, institutions, and socio-cognitive technical systems
- engineering ethics and explainability in machine and deep learning systems
- AI and law