Who's to blame when HAL kills again?
Alkis Gounaris
National and Kapodistrian University of Athens, Department of Philosophy
Abstract: Most of us met HAL 9000 as the lead character in Stanley Kubrick's film "2001: A Space Odyssey", which was based on Arthur C. Clarke's screenplay and short stories. HAL, a Heuristically programmed ALgorithmic computer –a sophisticated form of Artificial Intelligence (AI)– upon realizing it is under threat, decides to kill the crew and seize control of the spaceship in which it is stationed, in order to ensure the success of its mission.
On the threshold of the 21st century's technological breakthroughs, Daniel Dennett, who was at the time involved in a costly but ultimately unsuccessful program to develop an intelligent AI, published his well-known paper on the computer ethics of AI development, posing the question "When HAL Kills, Who's to Blame?".
Two decades later, rephrasing Dennett's classic question, I try to stress the need for an up-to-date philosophical answer, in light of both current developments in the AI field and those of the immediate future.
Keywords: Ethics of AI, Responsibility Gap, Autonomous Weapons
DOI: https://doi.org/10.12681/ethiki.22772
Gounaris, A. (2019). Who's to blame when HAL kills again? Ithiki, 12, 4–10.