Ethical Design and Acceptability of Artificial Social Agents
Artificial Social Agents (ASAs), AI-driven software entities programmed with rules and preferences to interact autonomously with humans, are increasingly playing human-like roles in society. As their sophistication grows, humans will share greater amounts of personal information, thoughts, and feelings with ASAs, which has significant ethical implications. The aim of this thesis is to investigate which ethical principles matter most to people when they engage with ASAs, and whether there is a relationship between people's values and the ethical principles they prioritise. The study draws on the five AI4People ethical principles (Beneficence, Non-maleficence, Autonomy, Justice, and Explicability) and Schwartz's theory of human values. Scenarios with embedded ethical principles, each involving an ASA taking on a role traditionally played by a human, were created to identify which ASA attributes people find acceptable or unacceptable. We found that participants were most sensitive to ASA attributes relating to Autonomy, Justice, Explicability, and the privacy of their personal data, and that ASAs were more acceptable when used generally in society than in personal contexts. Models were created using Schwartz's Refined Values as a possible indicator of how stakeholders discern and prioritise the different AI4People ethical principles when interacting with ASAs.