

Licensed to Kill: Autonomous Weapons as Persons and Moral Agents

The debate over the attribution of personhood to non-human entities is of increasing concern to both academia and institutions. The intelligence, autonomy and efficiency exhibited by modern AI systems raise pressing questions about the moral responsibility their use entails. In our paper we focus the discussion on autonomous war machines, as their actions, design, production and use give rise to philosophical controversies. [PERSONHOOD, 2020]





Can we literally talk about artificial moral agents?

Presentation for the 6th Panhellenic Conference in Philosophy of Science | Department of History and Philosophy of Science, NKUA, Athens, Greece, 03-05 December 2020 | DOI: 10.13140/RG.2.2.13671.47520

Distinguishing between a quasi-moral agent and a literal moral agent, I will attempt to describe the conditions beyond autonomy and behavior that must be met in order to attribute the traits of a moral agent to an artificial intelligence system. Such a system, in addition to duties, could potentially have rights, obligations and responsibilities, coexisting with other intelligent beings or systems in a possibly revised form of social fabric. [NKUA, 2020]





Human Cognition and Artificial Intelligence: Searching for the fundamental differences of meaning in the boundaries of metaphysics

4th National Conference on Cognitive Science, June 2013, Athens, Greece. DOI: 10.13140/RG.2.2.17433.67681

While trying to detect common principles and fundamental differences between Human Cognition (HC) and Artificial Intelligence (AI), it is often expedient to look back into the philosophical foundations to face questions that we tend to casually bypass. Such questions, mainly of an epistemological and ontological character, are related to the “nature” of knowledge and signification and, more specifically, to the way the world has, or can acquire, meaning for cognitive beings.





Artificial Intelligence: Life in the second half of the chessboard

Life in the second half of the chessboard is expected to be exciting and scary at the same time. New challenges and opportunities will soon give prominence to new masters of the game, new services, new products and new lifestyles, and we will all adapt to and embrace these developments, making them an integral part of our daily lives, much as we did in the past with cars, mobile phones and the internet.





What is it like to be a machine?

The question "What is it like to be a machine?" refers to the subjective experience of a cognitive agent that happens to be a thinking machine, consisting of microchips, transistors, cables, sensors and so on. At first we may assume that the subjective experience of being a machine differs from the subjective experience of being human: a human cannot "experience" the world as a machine, nor can a machine "experience" the world as a human.





Who's to blame when HAL kills again?

Most of us met HAL 9000 as the lead character in Stanley Kubrick's film "2001: A Space Odyssey", which was based on Arthur C. Clarke's screenplay and short stories. HAL, a Heuristically Programmed Algorithmic Computer (a sophisticated form of Artificial Intelligence, AI), decides to kill the crew and take control of the spaceship on which it is stationed, in order to ensure the success of its mission when it realizes it is under threat.





The Algorithm of Digital Humanism

According to Sartre, we experience a constant state of existential anxiety: being "condemned" to freely define our own purpose, at every moment of our lives we must make choices that ultimately determine who we are.





Heidegger and Artificial Intelligence

Dasein Lab Workshop, 2010

An introduction to Heideggerian Artificial Intelligence on the occasion of the Greek translation of Hubert Dreyfus's (2007) paper “Why Heideggerian AI failed and how fixing it would require making it more Heideggerian”.











