Can we literally talk about artificial moral agents?
Presentation for the 6th Panhellenic Conference in Philosophy of Science | Department of History and Philosophy of Science, NKUA, Athens, Greece, 03-05 December 2020 | DOI: 10.13140/RG.2.2.13671.47520
National and Kapodistrian University of Athens, Department of Philosophy
Abstract: The current debate regarding the moral status of intelligent machines covers a wide range of arguments. In one direction, we tend to attribute anthropomorphic properties to machines (Nyholm, 2020), owing to our interaction with them and possibly to limitations or to the metaphorical use of language. Many argue that what should actually concern us is how to make machines safer, not "ethical" (Yampolskiy, 2012). In the opposite direction, it is held that evaluating the results and consequences of the machines' actions is sufficient, regardless of whether and how they think or operate (Dennett, 1997).
However, the main body of arguments ties the moral status of artificial agents to their degree of autonomy (Tzafestas, 2016) and, consequently, to the behavior that machines display towards humans and other machines (Anderson, 2007). In this presentation, adopting the assumption that in the near future fully autonomous artificial intelligence systems will coexist and interact with humans, animals and other systems in almost all aspects of personal and social life (Tegmark, 2017), I ask whether (a) full autonomy in decision-making and (b) behavior consistent with a framework of principles and rules are indeed enough for us to speak literally of artificial moral agents. Drawing the distinction between morality and ethics, I will argue that a fully autonomous machine displaying, for example, sincere, polite, honest, consistent, tolerant, or protective behavior, or obeying the laws and respecting the prevailing social and cultural conditions or rules of coexistence, is not sufficient grounds for calling it, literally, a moral machine.
Distinguishing between a quasi-moral agent and a literally moral agent, I will attempt to describe the conditions, beyond autonomy and behavior, that must be met in order to attribute the traits of a moral agent to an artificial intelligence system. Such a system, in addition to duties, could potentially have rights, obligations and responsibilities, coexisting with other intelligent beings or systems in a possibly revised form of social fabric. An investigation in this direction can aim at highlighting information about the type, characteristics and "personality" of the moral agent, and at laying the theoretical foundations that could lead to a description of the technical specifications required for its realization.
Gounaris, A. (2020). Can we literally talk about artificial moral agents? Presentation for the 6th Panhellenic Conference in Philosophy of Science. Department of History and Philosophy of Science – NKUA, Athens, Greece. DOI: 10.13140/RG.2.2.13671.47520. Retrieved [25/12/2020] from https://alkisgounaris.gr/en/archives