Human Cognition and Artificial Intelligence: Searching for the fundamental differences of meaning in the boundaries of metaphysics
4th National Conference on Cognitive Science, Humans and Robots Symposium: Common principles and fundamental differences, Hellenic Cognitive Science Society, Athens, 2013. DOI: 10.13140/RG.2.2.17433.67681
Alkis Gounaris
National and Kapodistrian University of Athens
Extended Synopsis: In trying to detect common principles and fundamental differences between Human Cognition (HC) and Artificial Intelligence (AI), it is often useful to look back to the philosophical foundations and face questions that we tend to casually bypass. Such questions, mainly of an epistemological and ontological character, concern the “nature” of knowledge and signification, and more specifically the way the world has, or can acquire, meaning for cognitive beings. The initial stages of such an investigation demonstrate the basic ontological differences between HC and AI, and contribute both to a review of the computational-theory paradigm and to the credibility of alternative approaches to embodied cognition.
In the present paper, it will be claimed that certain explanatory models allow us a realistic view of human cognition, whereas artificial intelligence is confined (at least in contemporary computational models) to an idealistic perspective.
According to these two ontologically distinct positions, either the world has given characteristics in which “meanings” are included, irrespective of how they are imprinted, represented or signified by each cognitive being, or the cognitive being projects a world of its own, so that the reality of the world, and consequently meaning as well, is more or less a reflection of the cognitive being’s internal mental processes or internal representations. Between these two positions, which Varela, Thompson and Rosch[1] call the “chicken” position and the “egg” position, lies an array of interesting, some even groundbreaking, theories that aim to solve the problem of how meaning and knowledge are formed.
This ontological distinction between realism and idealism is not merely theoretical: as Shapiro[2] notes, it defines the limits of the possibilities and action of cognitive beings, since between perception, meaning and action (that is, interaction with the world) a feedback loop is formed, whereby perception defines action and action defines perception, and so on. The action limits of the “egg”, in this sense, will therefore differ from those of the “chicken”, since for the “chicken” the world and meaning do not reside inside some universe of mental representations.
By mental representations we mean here anything that carries, as “content” in the brain, information from the external world[3], or anything that can have a symbolic and semantic character within the cognitive being[4].
In investigating the metaphysical foundations of HC and AI, one can easily ascertain that the representation-based “nature” of AI is confined to an idealistic view of the world, since in every computational process the world can only be recognized according to certain “internalized” rules[5], whereas the “nature” of HC requires further investigation before it can be defined.
The existence of representations limited early on the extent of the world available to computing machines, as “it was proven that it is very difficult to reproduce in a computer’s internal representation the necessary richness of the environment that would cause the desirable behavior of a robot of high adaptability”[6].
This is exactly the problem that Brooks tried to confront toward the end of the 1980s when, in trying to overcome the limits of internal representations, he developed a “different approach during which a moving robot uses the world itself as its own representation - by always referring to its sensors instead of to an internal model of the world”[7]. Although this endeavor did not in fact free AI from representations (in a broad sense), it opened the road for embodied cognition theories to cross over from metaphysics into robotics labs. At the same time, Hubert Dreyfus, ruling out the convergence of HC and AI, considered it impossible for AI to be essentially freed from representations. As he characteristically emphasized, the things and events of the world among which we live are charged with “meaning” and “do not constitute some model of the world stored in our mind or brain... On the contrary, they are the world itself.”[8] A few years later, van Gelder claimed that “our fundamental way of interaction with the environment is neither by representing it nor even by exchanging inputs and outputs”[9], and suggested solutions through dynamical systems that could respond to events of the world without necessarily representing it. This turn toward a more realistic version of meaning partly changed the way we perceive both human cognition and artificial intelligence, albeit without providing a realistic hypothesis of meaning that could be ontologically grounded with convincing arguments.
The embodied cognition theories adopted by AI are basically “compatible” with computational theories, which restricts them to “internal” worlds (Brooks, Agre[10], Wheeler[11], Ramsey[12] et al.). Besides these “compatible” theories, however, certain “incompatible” ones have been suggested (Varela, Thompson[13], Chemero[14], Gounaris[15]) that could showcase a convincing version of a “robust” realism regarding meaning, as it is formed in the presence of cognitive beings such as humans.
In this presentation, comprehensive reference will be made to two such “incompatible” positions of embodied cognition, which essentially eliminate internal mental phenomena by transferring meaning into the real world. The first, Chemero’s “phenomenological realism”[16], derives from J.J. Gibson’s[17] affordances and gives meaning an ontic character. The second, my own (Gounaris[15],[18]), originates in a systemic view of the intelligent being as an inseparable part of the world and gives meaning the character of a systemic property.
Such a realistic version of meaning liberates the cognitive sciences from pursuing meaning in symbolic, formal or other “internalized” structures, and at the same time separates the world of human cognition from that of artificial intelligence by drawing clear metaphysical boundaries between them.
Keywords: realism, human cognition, artificial intelligence, non-representational cognitive science, embodied cognition
[1] Varela, F., Thompson, E., and Rosch, E. (1991) The Embodied Mind: Cognitive Science and Human Experience, Cambridge: MIT Press
[2] Shapiro, L. A. (2011) Embodied Cognition, N.Y.: Routledge
[3] Pitt, D. (2012) Mental Representation, The Stanford Encyclopedia of Philosophy. Zalta, E.N. (ed.). http://plato.stanford.edu/archives/win2012/entries/mental-representation/
[4] Fodor, J. (1981) Representations, Cambridge, Mass.: MIT Press
[5] Dreyfus, H. (1992) What Computers Still Can’t Do, A Critique of Artificial Reason, MIT Press
[6] Feigenbaum, E. (1968) Artificial Intelligence: Themes in the Second Decade, IFIP Congress ’68, Final Supplement, p. J-13
[7] Brooks, R. (1988) Intelligence without Representation, in Haugeland, J. (Ed.) Mind Design, MIT Press
[8] Dreyfus, H. (1992) What Computers Still Can’t Do, A Critique of Artificial Reason, MIT Press, p.265-266
[9] van Gelder, T. (1997) Dynamics and Cognition, in Haugeland, J. (Ed.) Mind Design II, A Bradford Book, Cambridge, MA: MIT Press, pp. 439, 448
[10] Agre, P. (1988) The Dynamic Structure of Everyday Life, MIT AI Technical Report, Is.no.1085, October 1988, chapter 1, Section A1a, 9
[11] Wheeler, M. (2002) Change in the Rules: Computers, Dynamical Systems, and Searle, in Preston, J. and Bishop, M. (Eds.) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, Oxford: Clarendon Press p.345
[12] Ramsey, W. (2007) Representation Reconsidered, Cambridge: Cambridge University Press
[13] Thompson, E., and Varela, F. (2001) Radical embodiment: Neural dynamics and consciousness, Trends in Cognitive Sciences 5, 418–425
[14] Chemero, A., and Turvey, M. (2007) Gibsonian affordances for roboticists, Adaptive Behavior 15, pp. 473–480
[15] Gounaris, A. (2012). A Naturalistic Explanation of Meaning within Embodied Cognition. 2nd National Conference on the Philosophy of Science, National and Kapodistrian University of Athens
[16] Chemero, A. (2009) Radical embodied cognitive science, MIT Press
[17] Gibson, J. J. (1977) The theory of affordances, in Shaw, R. and Bransford, J. (Eds.) Perceiving, Acting, and Knowing: Toward an Ecological Psychology, Hillsdale, NJ: Erlbaum, pp. 67–82
[18] Gounaris, A. (2011) Heidegger, Neurosciences and the Exemption from Descartes’ Error. Dasein Lab Workshop: Philosophy and Neurosciences, Athens