AI and Consciousness: Theoretical Foundations and Current Approaches

 

In the last ten years there has been growing interest in the field of artificial consciousness. Several researchers, including some from traditional Artificial Intelligence, have addressed the possibility of designing and implementing models of artificial consciousness (sometimes referred to as machine consciousness or synthetic consciousness). On the one hand, there is the hope of being able to design a model of consciousness; on the other, actual implementations of such models could be helpful for understanding consciousness (Baars, 1988; Minsky, 1991; McCarthy, 1995; Edelman and Tononi, 2000; Jennings, 2000; Aleksander, 2001; Baars, 2002; Franklin, 2003; Kuipers, 2005; Adami, 2006; Minsky, 2006; Chella and Manzotti, 2007).

The traditional field of Artificial Intelligence is thus flanked by the emerging field of artificial consciousness, which aims to reproduce the relevant features of consciousness using non-biological components. According to Ricardo Sanz, there are three motivations for pursuing artificial consciousness (Sanz, 2005):

1) designing and implementing machines that resemble human beings (cognitive robotics);

2) understanding the nature of consciousness (cognitive science);

3) designing and implementing more efficient control systems.

The current generation of systems for man-machine interaction shows impressive performance in the mechanics and control of movement; see, for example, the anthropomorphic robots produced by Japanese companies and universities. However, even these state-of-the-art robots have only limited capabilities for perception, reasoning, and action in novel and unstructured environments. Moreover, their capabilities for user-robot interaction are standardized and narrowly defined.

A new generation of robots and softbots aimed at interacting with humans in unconstrained environments will need a better awareness of their surroundings and of the relevant events, objects, and agents in them. In short, this new generation of robots and softbots will need some form of “artificial consciousness”.

Epigenetic robotics and synthetic approaches to robotics based on psychological and biological models have brought out many of the differences between artificial and mental studies of consciousness, and have pointed out the importance of the interaction among the brain, the body, and the surrounding environment (Chrisley, 2003; Rockwell, 2005; Chella and Manzotti, 2007; Manzotti, 2007).

In the field of artificial intelligence there has been considerable interest in consciousness. Marvin Minsky was one of the first to point out that “some machines are already potentially more conscious than are people, and that further enhancements would be relatively easy to make. However, this does not imply that those machines would thereby, automatically, become much more intelligent. This is because it is one thing to have access to data, but another thing to know how to make good use of it.” (Minsky, 1991)

Researchers involved in recent work on artificial consciousness pursue a twofold target: the nature of phenomenal consciousness (the so-called hard problem) and the active role of consciousness in controlling and planning an agent's behaviour. We do not yet know whether the two aspects can be addressed separately.

The goal of the workshop is to examine the theoretical foundations of artificial consciousness and to analyze current approaches to it.

According to Owen Holland (Holland, 2003) and following Searle's distinction between Weak and Strong AI, it is possible to distinguish between Weak Artificial Consciousness and Strong Artificial Consciousness:

  • Weak Artificial Consciousness: design and construction of machines that simulate consciousness, or cognitive processes usually correlated with consciousness.

  • Strong Artificial Consciousness: design and construction of conscious machines.

Most of the people currently working in the field of artificial consciousness would embrace the former definition. In any case, the boundary between the two is not always easy to draw. For instance, if a machine could exhibit all the behaviours normally associated with a conscious being, could we reasonably deny it the status of a conscious machine? And if it could exhibit all such behaviours, is it really possible that it might nonetheless not be subjectively conscious?

Most mammals, and human beings in particular, seem to show some kind of consciousness. It is therefore highly probable that the kind of cognitive architecture responsible for consciousness confers some evolutionary advantage. Although it is still difficult to single out a precise functional role for consciousness, many believe that consciousness supports more robust autonomy, higher resilience, more general problem-solving capability, reflexivity, and self-awareness (Atkinson, Thomas et al., 2000; McDermott, 2001; Franklin, 2003; Bongard, Zykov et al., 2006).

A few areas cooperate and compete in outlining the framework of this new field: 1) embodiment, 2) simulation and depiction, 3) environmentalism or externalism, and 4) extended control theory. None of them is completely independent of the others, and they strive toward a higher level of integration.

Embodiment tries to address the issues of symbol grounding, anchoring, and intentionality. Recent work emphasizing the role of embodiment in grounding conscious experience goes beyond the insights of Brooksian embodied AI and discussions of symbol grounding (Harnad, 1990; Harnad, 1995; Ziemke, 2001; Holland, 2003; Bongard, Zykov et al., 2006). On this view, a crucial role for the body in an artificial consciousness will be to provide the unified, meaning-giving locus required to support and justify attributions of coherent experience in the first place.
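To make the grounding problem concrete, consider a minimal sketch (in Python; the symbols, feature names, and prototype values are entirely hypothetical) of anchoring in its simplest form: a symbol is grounded by matching an incoming percept against stored sensor-level prototypes, and no symbol is assigned when nothing matches closely enough.

```python
import math
from typing import Optional

# Minimal sketch of symbol anchoring: each discrete symbol is tied to a
# sensor-level prototype, and a percept is "grounded" by matching it to
# the nearest prototype. All feature names and values are hypothetical.
PROTOTYPES = {
    "red_ball": {"hue": 0.95, "size": 0.10},
    "blue_cup": {"hue": 0.60, "size": 0.25},
}

def anchor(percept: dict, max_distance: float = 0.2) -> Optional[str]:
    """Return the symbol whose prototype best matches the percept,
    or None if no prototype is close enough to ground a symbol."""
    best_symbol, best_dist = None, float("inf")
    for symbol, proto in PROTOTYPES.items():
        dist = math.dist(
            [percept["hue"], percept["size"]],
            [proto["hue"], proto["size"]],
        )
        if dist < best_dist:
            best_symbol, best_dist = symbol, dist
    return best_symbol if best_dist <= max_distance else None

print(anchor({"hue": 0.93, "size": 0.12}))  # -> red_ball
```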

Simulation and depiction deal with synthetic phenomenology, developing models of mental imagery, attention, and working memory. Progress has been made in understanding how imagination- and simulation-guided action (Hesslow, 2003), along with the “virtual reality metaphor” (Revonsuo, 1995), are crucial components of a system that can usefully be characterized as conscious. Correspondingly, a significant part of the recent resurgence of interest in machine consciousness has focused on giving such capacities to robotic systems (e.g., Cotterill, 1995; Stein and Meredith, 1999; Chella, Gaglio et al., 2001; Ziemke, 2001; Hesslow, 2002; Taylor, 2002; Haikonen, 2003; Holland, 2003; Aleksander and Morton, 2005; Shanahan, 2005).
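As a rough illustration of the simulation idea, the following sketch selects actions by "imagining" their consequences on an internal forward model before acting in the world. The forward model, the scoring function, and the action set are hypothetical placeholders rather than any published architecture.

```python
# Minimal sketch of simulation-guided action selection in the spirit of
# Hesslow's simulation hypothesis: candidate actions are tried out on an
# internal forward model instead of in the world. The forward model and
# the scoring function below are hypothetical stand-ins.
def forward_model(state: float, action: float) -> float:
    """Predict the next state; a stand-in for a learned world model."""
    return state + action

def evaluate(state: float, goal: float) -> float:
    """Score a simulated state by its closeness to the goal."""
    return -abs(goal - state)

def choose_action(state, goal, candidates=(-1.0, 0.0, 1.0), depth=3):
    """Simulate every action sequence of the given depth and return the
    first action of the best-scoring imagined rollout."""
    def rollout(s: float, d: int) -> float:
        if d == 0:
            return evaluate(s, goal)
        return max(rollout(forward_model(s, a), d - 1) for a in candidates)
    return max(candidates, key=lambda a: rollout(forward_model(state, a), depth - 1))

print(choose_action(state=0.0, goal=3.0))  # -> 1.0 (step toward the goal)
```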

Environmentalism focuses on the integration between the agent and its environment. The problem of situatedness can be addressed by adopting the externalist view, according to which the vehicles enabling consciousness extend into parts of the environment (Dretske, 2000; O'Regan and Noë, 2001; Noë, 2004; Manzotti, 2006).

Finally, there is a strong overlap between current control theory for very complex systems and the role played by a conscious mind. A fruitful approach could be to study artificial consciousness as a kind of extended control loop (Chella, Gaglio et al., 2001; Sanz, 2005; Bongard, Zykov et al., 2006).
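A minimal sketch of what such an extended loop might look like is given below, assuming a toy plant and a deliberately simple self-monitoring rule (both hypothetical): an inner proportional controller acts on the world, while an outer loop observes the inner loop's own error history and retunes it.

```python
# Minimal sketch of an extended control loop: an inner proportional
# controller tracks a setpoint, while an outer, self-monitoring loop
# watches the inner loop's error history and retunes its gain. The toy
# plant and the tuning rule are hypothetical placeholders.
def plant(state: float, command: float) -> float:
    """Toy first-order plant: the state drifts toward the command."""
    return state + 0.5 * command

def run(setpoint: float = 1.0, steps: int = 20) -> float:
    state, gain = 0.0, 0.1
    errors = []
    for _ in range(steps):
        error = setpoint - state             # inner loop: sense
        state = plant(state, gain * error)   # inner loop: act
        errors.append(abs(error))
        # Outer loop: the controller monitors its own performance and
        # raises the gain whenever the error has stopped shrinking fast.
        if len(errors) >= 2 and errors[-1] >= 0.95 * errors[-2]:
            gain = min(gain * 1.5, 1.0)
    return state

print(round(run(), 3))  # approaches the setpoint 1.0
```

The point of the sketch is only architectural: the outer loop takes the inner controller itself, rather than the external world, as the object of its control.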

There have also been proposals that AI systems may be well-suited or even necessary for the specification of the contents of consciousness (synthetic phenomenology), which is notoriously difficult to do with natural language (Chrisley, 1995).

One line of thought (Dennett, 1991; McDermott, 2001; Sloman, 2003) sees the primary task in explaining consciousness as explaining consciousness talk, that is, representations of oneself and others as conscious. On such a view, the key to developing artificial consciousness is to develop an agent that, perhaps due to its own complexity combined with a need to self-monitor, finds a use for thinking of itself (or others) as having experiential states.

 
References

Adami, C. (2006). “What Do Robots Dream Of?” Science 314 (5802): 1093-1094.

Aleksander, I. (2000). How to Build a Mind. London, Weidenfeld & Nicolson.

Aleksander, I. (2001). “The Self 'out there'.” Nature 413: 23.

Aleksander, I. and H. Morton (2005). “Enacted Theories of Visual Awareness, A Neuromodelling Analysis”. in BVAI 2005, LNCS 3704.

Atkinson, A. P., M. S. C. Thomas, et al. (2000). “Consciousness: mapping the theoretical landscape.” Trends in Cognitive Sciences 4 (10): 372-382.

Baars, B. J. (1988). A Cognitive Theory of Consciousness. Cambridge, Cambridge University Press.

Baars, B. J. (2002). “The Conscious Access Hypothesis: origins and recent evidence.” Trends in Cognitive Sciences 6 (1): 47-52.

Bongard, J., V. Zykov, et al. (2006). “Resilient Machines Through Continuous Self-Modeling.” Science 314 (5802): 1118-1121.

Chella, A., S. Gaglio, et al. (2001). “Conceptual representations of actions for autonomous robots.” Robotics and Autonomous Systems 34 (4): 251-264.

Chella, A. and R. Manzotti (2007). Artificial Consciousness. Exeter (UK), Imprint Academic.

Chrisley, R. (1995). “Non-conceptual Content and Robotics: Taking Embodiment Seriously”. in Android Epistemology. K. Ford, C. Glymour and P. Hayes, Eds. Cambridge, AAAI/MIT Press: 141-166.

Chrisley, R. (2003). “Embodied artificial intelligence.” Artificial Intelligence 149: 131-150.

Cotterill, R. M. J. (1995). “On the unity of conscious experience.” Journal of Consciousness Studies 2: 290-311.

Dennett, D. C. (1991). Consciousness Explained. Boston, Little Brown and Co.

Dretske, F. (2000). Perception, Knowledge and Belief. Cambridge, Cambridge University Press.

Edelman, G. M. and G. Tononi (2000). A Universe of Consciousness. How Matter Becomes Imagination. London, Allen Lane.

Franklin, S. (2003). “IDA: A Conscious Artefact?” in Machine Consciousness. O. Holland. Exeter (UK), Imprint Academic.

Haikonen, P. O. (2003). The Cognitive Approach to Conscious Machines. London, Imprint Academic.

Harnad, S. (1990). “The Symbol Grounding Problem.” Physica D (42): 335-346.

Harnad, S. (1995). “Grounding symbolic capacity in robotic capacity”. in The Artificial Life Route to Artificial Intelligence: Building Embodied Situated Agents. L. Steels and R. A. Brooks. New York, Erlbaum.

Hesslow, G. (2002). “Conscious thought as simulation of behaviour and perception.” Trends in Cognitive Sciences 6 (6): 242-247.

Hesslow, G. (2003). Can the simulation theory explain the inner world? Lund (Sweden), Department of Physiological Sciences.

Holland, O., Ed. (2003). Machine Consciousness. New York, Imprint Academic.

Jennings, C. (2000). “In Search of Consciousness.” Nature Neuroscience 3 (8): 1.

Kuipers, B. (2005). “Consciousness: drinking from the firehose of experience”. in National Conference on Artificial Intelligence (AAAI-05).

Manzotti, R. (2005). “The What Problem: Can a Theory of Consciousness be Useful?” in Yearbook of the Artificial. Bern, Peter Lang.

Manzotti, R. (2006). “An alternative process view of conscious perception.” Journal of Consciousness Studies 13 (6): 45-79.

Manzotti, R. (2007). “From Artificial Intelligence to Artificial Consciousness”. in Artificial Consciousness. A. Chella and R. Manzotti. London, Imprint Academic.

McCarthy, J. (1995). “Making Robots Conscious of their Mental States”. in Machine Intelligence. S. Muggleton. Oxford, Oxford University Press.

McDermott, D. (2001). Mind and Mechanism. Cambridge (Mass), MIT Press.

Minsky, M. (1991). “Conscious Machines”. in Machinery of Consciousness, National Research Council of Canada.

Minsky, M. (2006). The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind. New York, Simon & Schuster.

Noë, A. (2004). Action in Perception. Cambridge (Mass), MIT Press.

O'Regan, J. K. and A. Noë (2001). “A sensorimotor account of vision and visual consciousness.” Behavioral and Brain Sciences 24 (5).

Revonsuo, A. (1995). “Consciousness, dreams, and virtual realities.” Philosophical Psychology 8: 35-58.

Rockwell, T. (2005). Neither Brain nor Ghost. Cambridge (Mass), MIT Press.

Sanz, R. (2005). “Design and Implementation of an Artificial Conscious Machine”. in IWAC2005, Agrigento.

Shanahan, M. P. (2005). “Global Access, Embodiment, and the Conscious Subject.” Journal of Consciousness Studies 12 (12): 46-66.

Sloman, A. (2003). “Virtual Machines and Consciousness.” Journal of Consciousness Studies 10 (4-5).

Stein, B. E. and M. A. Meredith (1999). The Merging of the Senses. Cambridge (Mass), MIT Press.

Taylor, J. G. (2002). “Paying attention to consciousness.” Trends in Cognitive Sciences 6 (5): 206-210.

Ziemke, T. (2001). “The Construction of 'Reality' in the Robot: Constructivist Perspectives on Situated Artificial Intelligence and Adaptive Robotics.” Foundations of Science 6 (1-3): 163-233.