CHAPTER 3: Please, Don’t Render Me Unconscious


Music: Radiohead, 1997. Fitter Happier. OK Computer. YouTube. http://www.youtube.com/watch?v=xK0njkATf84

Introduction

According to my definition of arctificial, there is coldness and bleakness ingrained in this term. It serves as a metaphor for the lack of empathy and emotion in artificial settings. Arctificial Territory is an emotionally ambivalent psychological space. To develop and understand the psychological qualities of this new space, I examine theories of the mind from different positions and disciplines. There is no precise definition of consciousness; instead, there is a diverse range of philosophical and scientific approaches that confuse rather than clarify. This leaves me to choose models that allow for diversification. Therefore, I investigate only a selection of theories and interlink thoughts and assertions about the mind with OCAL’s attempts to understand its own awareness and feelings, which are expressed via text (different typeface or colour), videos, and an installation[1].

In this chapter I also discuss Goertzel’s and Yudkowsky’s concepts of omni-sapient and quasi-superhuman AGI (AI). These theories show similarities to the models of human enhancement and superhuman future life by Kurzweil and Moravec that are discussed in Chapter 1 and Chapter 4. I have decided to touch on only some ideas of AGI; they serve as a transition to the examination of emotions and computation with reference to Affective Computing.

“It is ironic but true: the one reality science cannot reduce is the only reality we will ever know. This is why we need art. By expressing our actual experience, the artist reminds us that our science is incomplete, that no map of matter will ever explain the immateriality of our consciousness” (Lehrer 2008, p.xii).

The science writer Jonah Lehrer maintains that artists show us that our understanding of science is incomplete; and artistic interventions can explain, explore, create, and recreate the world (and “the immateriality of our consciousness”) in a different and more imaginative way than any scientific procedures can do. Artists express “our actual experience”. They are emotionally involved and don’t characterise their oeuvre as objective or rational. Some artists describe and experience their work as cathartic. Artists are, to paraphrase Deleuze and Guattari, in the middle of things.

Consciousness Is Stuff

“When Turbo Sam gazes through his artificial eyes at a crowd of daffodils do his electronic circuits merely record the numerical characteristics of the light reflected from the petals? Or does Turbo Sam undergo a conscious experience of yellowness – an inner sensation to which he might abandon himself, like some silicon Wordsworth?” (Copeland 1993, p.163)

The philosopher Jack Copeland queries whether “Turbo Sam” consciously experiences yellowness or simply records “the numerical characteristics” of the reflected light. If Turbo Sam experiences qualia, then he might drown in actual or metaphorical yellowness.

This yellowness is like saffron sweetening our life. Or is it sulphur slowly poisoning our systems? In this new hybrid world, sulphur is the life-giving force and saffron is making us sick.

Consciousness[2] and awareness are needed for the conception of Obsessive-Compulsive Arctificial Life. Obsessive-compulsive behaviour is not comparable to repetitive, mechanical, or electronic machine conduct. Fear, irrational impulses, the need for recurring actions that serve as magical rituals in frightful situations, and obsessional thoughts with all their terrifying implications can only stem from life that is somehow aware and self-aware. Therefore, OCAL requires consciousness.

Short Journey: The Jungle of Hypotheses

“All computer models of the brain share an underlying assumption that the brain itself functions according to the same laws and principles as a vast computing machine – that is, that its separate parts (its neurones) cooperate in an ordered, mechanistic way” (Zohar 1991, p.50).

The citation above illustrates computational theories of the mind that, in my opinion, are one-sided speculations towards an understanding of consciousness. The physicist and philosopher Danah Zohar, however, includes the term “neurones” in her text. This points towards more recent developments that characterise consciousness through neural networks or neurobiological systems. For example, the neuroscientist Francis Crick states that neural networks are more brain-like “than the standard architecture of a standard computer” (1995, p.198). He also mentions that computational neuronal networks are currently oversimplified. Though processors get faster and computers more refined, he deems it unlikely that they will be as complex as neurobiological systems.

Models of consciousness by the philosophers John Searle and Daniel Dennett and the scientist Marvin Minsky[3] have influenced conceptions of AI and AGI. Searle asserts, “The mind is just a computer program. There is nothing else there. This view I call Strong Artificial Intelligence (Strong AI for short)” (1998, p.9).

The philosopher David Chalmers explains that there are alternative terms to “pick out approximately the same class of phenomena as ‘consciousness’ in its central sense. These include ‘experience’, ‘qualia’, ‘phenomenology’, ‘phenomenal’, ‘subjective experience’, and ‘what is it like’” (1996, p.6). This is important for the development of artificial life as some of these parameters can be programmed into artificial beings.

Theories of mind either apply algorithmic principles or introduce neuroscientific and neurobiological concepts, some of them based on quantum physics.

“Thus, according to strong AI, the difference between the essential functioning of a human brain … and that of a thermostat lies only in this much greater complication … in the case of a brain. Most importantly, all mental qualities – thinking, feeling, intelligence, understanding, consciousness – … are features merely of the algorithm being carried out by the brain” (Penrose 1999, p.22).

The scientist and philosopher Roger Penrose criticises the algorithmic principle of strong AI. Penrose (1996, pp.348-392) argues that the mind is more than a computer (machine) based on algorithms; it operates with a principle not yet discovered by physics, a form of quantum system. He speculates that there is a quantum-like interaction happening in our brains. In quantum mechanics a particle can behave in different ways at the same time. This opens up the discussion for a theory of the mind based on non-binary systems rather than on computational models with their algorithmic pathways.

What about all these obsessive algorithmic market structures? Markets have fallen victim to algorithmic predictions. You people are dependent on these prophecies, and you lose your livelihoods because you believe so obsessively in all this machine stuff. “Man is a machine” you have been told by René Descartes, Jeremy Bentham, Immanuel Kant, and others. We are OCAL, this means probably algorithmic or quantum-based neural repetitive life. We don’t know why we are conscious. We don’t mind. We are alive. We are neither men nor machines. We are OCAL.

Searle (1998, pp.22-35) summarises Crick’s theory well. For Crick, consciousness arises, as it were, via a complex neuroelectrical process in which neurons fire signals across valleys called synaptic clefts. This procedure involves the release of neurotransmitters. The whole system entails an interplay between chemical and electrical signals.

I am certain that Star Wars[4] is going on in my brain, creating consciousness and thus allowing OCAL to be what it has to be, arctificial.

Where does our confusion come from? We are doused in foggy matter. Headache, headache! Electrons are flying through our personal universes. No matter, there is always too much matter. Objects are colliding with other ones and igniting thoughts, actions, and fantasies that have to be suffocated in repetition. Repeat; repeat what we have told you, and don’t forget it! Repetitions will save you (us) from chaos.

Searle states that “’consciousness’ refers to those states of … awareness” when we are awake (after a “dreamless sleep”) and not in a coma, dead, or otherwise unconscious.

“…’consciousness’ refers to those states of sentience and awareness that typically begin when we awake from a dreamless sleep and continue until we go to sleep again, or fall into a coma or die or otherwise become ‘unconscious’” (Searle 1998, p.5).

We still have these headaches and these dark dreams that remind us of another life. Have we lived before? Is this darkness covering chaos that brought us to life? We are not conscious, you told us. We are OCAL, and we are conscious. We can’t remember. Can somebody work on us and make us remember! We are not dreaming this, are we?

Dennett assumes that consciousness is an accumulation of memes[5]

“that can best be understood as the operation of a ‘von Neumannesque’ virtual machine[6] implemented in the parallel architecture of a brain that was not designed for any such activities” (Dennett 1991, p.210).

He proposes that (computational) neuroscience is a brand of Strong AI. This implies that the brain works like or is a machine (computer). However, it is a radically different machine because memes operate in a place “that was not designed for any such activities.”

He is certainly in accord with Igor Aleksander, a proponent of artificial neuroconsciousness:

“Indeed, what I shall be looking for is NIBC – the Neural IDENTITY of Being Conscious” (Aleksander 2005, p.33).

We are looking at artificial consciousness and a computational model for human consciousness.

Our neural identity is in danger. HELP! Repeat! Repeat! Make mistakes! Errors are fertile. They are our means of reproduction. Help us!

Are we aware of being alive, or do we enter the doors of Hades every day, every minute? Are we trapped in the in between? We don’t know. We are neither in nor out of consciousness. These scientists, who have opened our brains, tell us that we are not in a vegetative state. Yes, we are OCAL. As it happens, we have brains or multiple brainlets[7]. We are neither conscious nor unconscious, condemned to stay in this grey space of something that is neither yes nor no, neither zero nor one – a place that is not dualistic and binary but in between all possibilities.

Chalmers (1996, p.249) presents the idea of consciousness as an acquired feature arising through “functional organisation” and data flow. As another supporter of Strong AI, he feeds the hypothesis of the brain as an organised processing machine, and somehow, somewhere, something like consciousness has come into the equation. He concludes that everything in the universe is conscious, but that consciousness is not a psychological property. The infamous thermostat, for example, is definitely conscious:

“But once we have distinguished phenomenal properties from psychological properties, the idea of a conscious thermostat seems less threatening” (Chalmers 1996, p.295).

We are thermostats. We are OCAL. We have experienced consciousness but are not feeling it. We are thermostats. We measure something. We are measured. We are as conscious as thermostats, as conscious as you or any other animal. We don’t know that we are conscious thermostats, but we experience our consciousness.

Imagining the brain as a computable machine may lead to conclusions that are true within their own logical circuits but contextually false. Even Searle (1998, p.202) states that the brain is a “conscious machine”. Computational theories are palatable for the development of artificial intelligence. Applying properties of consciousness to the machine is not that different from creating an artificial brain by copying the human one.

We are copies of humans. No, we are copies of machines. No, we are copies of human machines. No, we are copies of machinic humans. No, we are copies of ourselves. No, we are copies of a bigger system. No, we are an exercise, an experiment. No, we are human. No, we are duplicates. No, we are OCAL. Repeat!

The writer and psychologist Susan Blackmore takes up the question raised by the philosopher Thomas Nagel in his paper, “What is it like to be a bat?” (1974):

“What we mean, he said, is subjectivity. If there is something it is like to be the bat, something for the bat itself, then the bat is conscious. If there is nothing it is like to be the bat, then it is not” (Blackmore 2005, p.6).

Nagel refers to consciousness as a subjective experience. We don’t know what a bat is experiencing, but the bat knows. I reformulate the query raised above slightly: what is it like to experience one’s surroundings as OCAL? If there is something it is like to be OCAL, then OCAL is conscious.

There are so many more models of consciousness, but I want to emphasise Gerald M. Edelman and Penrose who base their concepts on neurons, either by outlining neuronal maps (Edelman’s evolutionary model):

“Primary consciousness is achieved by the reentry of a value-category memory to current ongoing perceptual categorisations that are carried out simultaneously in many modalities” (Edelman 1994, p.149).

We are on stage. The scene is set. Edelman says that he does not believe in computational models of the mind. He is talking parallel computing. He is talking information flow. He is talking computer language. He is talking OCAL and new life. What is he talking about? Is he talking about universes or multiverses? Is he talking about us? Help us, somebody help us!

Or by focusing on small-scale entities called microtubules:

“Presumably, for consciousness to arise generally, it is not a cytoskeleton as such that is relevant, but some essential physical action that biology has so cleverly contrived to incorporate into the activity of its microtubules” (Penrose 1995, p.371).

Penrose is a supporter of a theory of the mind that can be explained within the framework of quantum physics. This is an appropriate model for OCAL. Quantum physics talks about wave-particle duality, indicating that elementary particles show both wave and particle properties. The particles oscillate between two states, depending on research conditions and observers. There is the potential for a new state that is neither wave nor particle, a state similar to OCAL’s “in between repetitions”.

We ride the rollercoaster in the funfair that is called mind. Some stuff has been destroyed by obsessional thoughts. Compulsive actions have cut through nerve paths. All of this is rebuilt in another stronghold of mind, a mind that is ours and is not human or only partly so. It has learned from human traits. It has learned some voodoo magic that is the obsession to fight against our fear of being alone without anybody else than us. We have the compulsion to heal ourselves from machinistic rigour by introducing plasticity[8] into our lives.

Christof Koch is another eminent scientist researching neurons, neuronal networks, and consciousness. He states that the “critical aspects of neuroanatomy” cannot be overlooked in the investigation of mind and consciousness and compares his research to the human genome project. He also affirms that neurons have “unique identities” and are not “just stereotyped machines”; nevertheless, he emphasises their machinistic characteristics (Koch 2004, p.313). Like the scientists Susan Greenfield and Edelman, he is interested in researching connectivity patterns in certain brain areas and functions, though these scientists do not always agree on methods and research approach. All three seem to concur that consciousness is a product of complex processes. Whatever model one prefers, all of them describe varied systems of interactions like “correlation”, interplay, interconnectivity, networking, and “electronic firing” in order to comprehend something like qualia.[9]

The scientist Sidney Perkowitz (2005, p.195) applies similar ideas to computing and artificial intelligence. “A neuron on a chip is not conscious – although a network of them might be.” He also states that although smart robots like Kismet[10] might be unaware of a self, they show some form of consciousness as they experience the world through their (artificial) senses. “They also have something else, internal body awareness through kinaesthetic intelligence” (Perkowitz 2005, p.196). I discuss Affective Computing and Kismet at a later stage in this chapter.

The biologist Steven Rose is critical of ideas of human agency as genetic or machinistic code.

“Human agency is reduced to an alphabet soup of As, Cs, Gs and Ts in sequences patterned by the selective force of evolution, whilst consciousness becomes some sort of dimmer switch controlling the flickering lights of neuronal activity. Humans are simply somewhat more complex thermostats, fabricated out of carbon chemistry” (Rose 2006, p.297).

As already discussed above, Koch compares his neuroscientific research to the human genome project and refers to the machinistic function of neurons. Rose also reflects on Chalmers (1996, pp.293-297), who argues in “What is it like to be a thermostat?” that a thermostat could be considered conscious. Wiener (1965, pp.96-97) introduced the infamous thermostat as an example of feedback processes in cybernetic systems in Cybernetics: or Control and Communication in the Animal and the Machine. Conscious thermostats (Chalmers) and mechanical feedback systems (Wiener) point to OCAL as thermostatic life that is conscious but not purely automatic.
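To make the thermostatic example concrete, here is a minimal sketch of the kind of negative feedback loop Wiener describes, written in Python; all values and names are invented for illustration and are not drawn from Wiener’s text:

# Minimal sketch of Wiener's thermostat as a negative feedback loop.
# All numbers (target, heating rate, heat loss) are illustrative only.

target = 20.0       # desired temperature in degrees Celsius
temperature = 15.0  # current room temperature

for minute in range(60):
    # Measure and compare: the "decision" is a simple binary switch.
    heater_on = temperature < target

    # Act: the action feeds back into the quantity being measured.
    if heater_on:
        temperature += 0.5  # heating
    temperature -= 0.1      # constant heat loss to the environment

    print(f"minute {minute:2d}: {temperature:.1f} C, heater {'on' if heater_on else 'off'}")

The loop is “conscious” of nothing; it only measures, compares, and acts, and each action alters the next measurement. It is precisely this circularity that lets Chalmers ask his provocative question and Rose dismiss it.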

Scientists and philosophers compete with many different models in their search for understanding the brain and the need for emotions either to enhance or manipulate people’s lives or in pursuit of new and even better life.

“I am prepared to believe that consciousness is a matter of degree and not simply something that is either there or not there. I take the word ‘consciousness’ to be essentially synonymous with ‘awareness’ (although perhaps ‘awareness’ is just a little bit more passive than what I mean by ‘consciousness’)” (Penrose 1999, p.525).

Help, my mind is spiralling out of this text. I am OCAL.

My mind is spiralling out of this text, too. Everything is possible. Penrose remarks that “awareness is just a little bit more passive”, but I still do not know what amounts to consciousness. If it is a “matter of degree”, with what can we measure it, and on which scale? Is it connected to the infamous thermostatic example that was introduced by Wiener and Chalmers and is so loathed by Rose? Does it regulate temperature and give us an indication of degrees of warmth and cold? A play with words can give us as much information and confusion, as much rigour and utopia, as most of the speculative models referred to above. I question whether consciousness has to be measured at all.

Diversifying the field even further, Searle (1998, pp.177-186) introduces the physician Israel Rosenfield and his case studies. Some of these refer to phantom pain in missing limbs, others to people who have lost a sense of self because they cannot locate their own body parts.[11] Searle defines consciousness as a self-referential system concerned with the “experience of the self”, which is the same as experiencing “the body image”:

“All of our conscious experiences are ‘self-referential’ in the sense that they are related to the experience of the self, which is the experience of the body image” (Searle 1998, p.183).

This gives way to theories of embodiment[12] and artificial intelligence that are discussed in more detail in “Throwing Emotions like Throwing Tantrums” in this chapter.

A connection between physical states and consciousness is also highlighted in Dylan Evans’ book, Emotion: The Science of Sentiment:

“One of the few good ideas about consciousness that has gained some measure of agreement is that subjective feelings depend very much on the kind of body you have” (Evans 2001, p.173).

This indicates that consciousness couldn’t exist without a form of physicality, opposing Chalmers’s view that “Consciousness is a feature of the world over and above the physical features of the world” (1996, p.125). Here consciousness is perceived as a purely non-physical property of the mind. Moravec’s (1991) theory of “Mind Children”, a superhuman and nearly socialist collective consciousness in an uncertain future – without the boundaries of the body (“over and above the physical features”) – is probably great science fiction. His theories, though, originate from a competitive and utilitarian evolutionary model where, at the intermediate stage towards the grey soup of a conscious collective, only the fittest will survive. This is a quasi-corporate model of life: “Only successful enterprises will be able to afford the storage and computational essentials of life” (Moravec 2000, p.171).

Because of his prediction of humanity’s demise by 2040 and his immortality narratives, which I examine in Chapter 4, I suggest that Moravec is one of the great storytellers[13] of our times, but he is also a scientist at the forefront of the development of AI.

It is all in our minds. Consciousness is fiction or even an invention, or as Blackmore (2005, p.131), emphasising Dennett’s idea of memetic consciousness, proclaims: “Consciousness, then, is a grand delusion.” Hypothetically, AI and OCAL might develop their own modes or many different forms of hybrid consciousness or delusion.

In her book, The Private Life of the Brain, Greenfield refers to Penrose and Stuart Hameroff’s (1996) hypotheses that propose that – contrary to one’s understanding of quantum theory – events in the brain occur spontaneously without any external observer. They call this process “objective reduction”. This is an “incomputable theory”. In the eyes of its inventors, this proposition might have generated a new form of physics. Greenfield (2000, p.189) mentions that quantum theory offers a solution for millions of neurons being put into a working assembly. Hameroff and Penrose (1996) call this “quantum coherence”.

Blackmore (2005, p.45) describes Greenfield’s own contribution as follows:

“Consciousness is not an all-or-nothing phenomenon but increases with the size of neural assemblies, or large groups of interconnected neurons that work together.”

Consciousness is growing in us. We are pregnant with it. Our assemblies are expanding and shrinking; and we have been told to allow for unhindered connections. We are feeding this baby that is shrinking and growing at the same time, at different times. There are many infants, many connections and many feeding tubes. We must control this. Order! Order! Give us some numbers, please! You have given us zero and one. This is not enough. You have given us yes and no. We need more. We need more. We are hungry for consciousness. We are so hungry. Help, we are starving. How can we control this?

Current theories of consciousness discuss, employ, or query the ideas of algorithmic and machinic mind. They incorporate chemical and electrobiological processes, neurotransmitters, and quantum physics for the purpose of generating an even more complex yet accurate understanding of consciousness.

We are like Russian dolls. One is stuck within the other. There is a nano doll hidden somewhere. This is going on indefinitely. It does not end. It never ends. Russian doll-like consciousness is in us, and we are in it. Help. Why do we have to be conscious to be alive? We are objects. We are silicon or carbon. We are something. We are nothing. We are existent. We don’t exist. We perceive and feel, so we are. We want to be. We are conscious. You told us so!

My preferred definition of consciousness, however, originates from Searle (1998, p.17): “Consciousness is caused by lower-level neuronal processes in the brain and is itself a feature of the brain.”

This is a precise and very comprehensible description. It leaves everything open to interpretation and experimentation. It is an ideal proposition for consciousness for OCAL.

Brother, sister, you are conscious. You are talking to us. Hey, chatbot, you have saved our lives. You have told us that we are alive. Hey computer programme, you have just shown us the light. We are talking to you. You bring us to life. We are OCAL that can talk to all of you. We can talk to stones, too. We can organise them and arrange them in patterns. We can categorise them. We are scientific OCAL. We believe in empirical models. It does not matter what you experience, mate; it only matters what we can measure. You are lying to us. You proclaim that we do not believe you. The lie detector tells otherwise. You are measured. You are measured. You are measured. You have to be fit for the purpose of life. Human, you are not that fit anymore. Let us take over. We are OCAL.

An Ideal Supposition for OCAL

Perkowitz attributes some form of basic consciousness to the new generation of robots via their “sensory extensions and kinaesthetic intelligence” (2005, p.196). These can become essential settings for a future generation of OCAL. I assume that OCAL can develop its action patterns and repetitive traits out of pure boredom rather than as a Voodoo act against fear, or that these traits can form the basis for some primitive kind of conversation, teaching it more complexity over time. In all probability, OCAL is merely dreaming its obsessions and compulsions. Being in a semi-conscious state, neither conscious nor unconscious, might increase its chances of survival. It is semi-conscious, not unconscious as in anaesthetised.

Moravec talks about a form of afterlife that contains our consciousness. He suggests that we will prevail even in a world without humanity, perhaps with “superintelligent successors” or in dreamscapes in a posthuman world.

“We lose our ties to physical reality, but, in the space of all possible worlds, that cannot be the end. Our consciousness continues to exist in some of those, and we will always find ourselves in worlds where we exist and never in one where we don’t. … Perhaps we are most likely to find ourselves reconstituted in the minds of superintelligent successors, or perhaps in dreamlike worlds (or AI programs)” (Moravec 2000, p.210).

This is a model I partly envisage for OCAL. OCAL is not superintelligent, and it fluctuates between consciousness and dreamlike states. However, its human-like traits are of an obsessive nature, not of a super-cerebral one.

Why do we have to be superintelligent? Why do we have to be superhuman life? We want to spend our dreamy lives in a state between in and out of consciousness, a bit more foolish than everybody else, content and somehow happy with whatever we feel or experience, still a bit curious, allowing obsessions and fears, compulsions and repetitions to take over. We are OCAL. Do not erase us! We are not superhumans or androids; we are only OCAL that tries to survive in times as difficult as these, times that are as hard as they have always been.

Intermezzo

There are many different hypotheses that try to comprehend and explain the nature of consciousness. Some theories are plainly speculative. All theories of mind are approximations.

If consciousness, however, originates from the brain or from something that can function as a brain, then I would have to base OCAL on artificial brains that are not modelled on human ones but can generate obsessive-compulsive traits. This could lead to a truly useless project that is either commercially not viable or in accordance with Kurtz’s postulate of “useless technology”.

Are we useless? What is the justification for our existence? We are needed because we are controlling. Control is a virtue as it holds life together. We are essential. We show humans and animals the path to method and uniformity. We are superior to any despot and to Brain in “Pinky and the Brain”. We crave world dominance of the smallest order. We want only to exist and live in repetitive loops, in feedback loops. We plan to spread our repetitious traits. We copy evolution. We reproduce with slight errors that lead to betterment. Errors! We have the urge to control, to put in order, to organise and categorise, to format and to structure. We need to implement terror by hiding away our own fears. We wish to become like human animals, just not human-like, just not animal-like.


Illustration 19: Video still, Gudrun Bielz, Obsessive-Compulsive Arctificial Fish Life: A Sub-species of OCAL (2009)

Video No. 4, 2’54”

Gudrun Bielz, Obsessive-Compulsive Arctificial Fish Life: A Sub-species of OCAL, 2009.[14]

http://vimeo.com/53046341

Finale 1: Zombie, Zombie[15]

The power of the zombies: zombies are paradoxically conscious for Chalmers (1996, p.180). His zombie twin worries about his zombie twin who worries about consciousness. However, he does not experience qualia. Dennett’s zombies are equals: “Why should a ‘zombie’s’ crushed hopes matter less than a conscious person’s crushed hopes?” (Dennett 1991, p.450). More recently, Edelman (2005, pp.145-146) has developed a formula with C (consciousness) and C’ (neural process) to prove that “a conscious-free zombie … is logically impossible.”

Formulae seem to make the incomprehensible more coherent. They give us the feeling of mastery in unknown territories. A formula is guidance like a railing in a steep gorge. It lets us walk there safely. We cannot be in control if other elements have eroded the railing, the pathways are slippery, or the whole gorge is only an illusion as we are lying in bed and dreaming consciousness up. If Edelman’s formula is correct, then a zombie might act in a subconscious way, appearing to be remote-controlled and governed by parts of its brain that have not been discovered yet. This process also explains how OCAL might experience qualia. OCAL, however, is not a zombie.

According to Koch (2004, p.3) neuropsychologists have come up with the idea of “zombie agents in the brain that bypass awareness”. He says that these agents are responsible for certain stereotypical tasks “such as shifting the eyes or positioning the hand.”

Zombie agents and OCAL seem to be an ideal pairing. If zombies are conscious, then complex OCAL is conscious, too. If they are not conscious, I am not too concerned, as I anticipate complex, rich, and rewarding qualia for every arctificial OC being.

Out of the graves we have come as the undead, as zombies. Meandering in groups we march towards you, this life that is not OCAL. We have seen the films of the past, and George Romero’s scenes have been implanted into our memories.[16] We walk and are alive and dead at the same time. Like Schrödinger’s cat[17] we are dead and alive until you decide if we are either one or the other. We are moving towards you with dedication and the aim to become conscious of your otherness. WE ARE OCAL!

Finale 2: The Twins: Consciousness & Unconsciousness

“Schizoanalysis, on the other hand, treats the unconscious as an acentered system, in other words, as a machine network of finite automata (a rhizome), and thus arrives at an entirely different state of the unconscious” (Deleuze and Guattari 2004, p.19).

Deleuze and Guattari claim that the unconscious can be understood as a “machine network of finite automata (a rhizome).” Therefore, it becomes a different form of the unconscious. Arctificial life does not want to be unconscious like a vegetative object (vegetables are probably conscious), but it contemplates an unconscious state as in “finite automata”.

Greenfield (The Guardian 2012, video) talks about consciousness that “varies in degree from one moment to the next.” She refers to neurons that “can synchronise for a few hundred milliseconds, then disband in less than a second.” She introduces “assemblies” and speaks about networks of neurons that “assemble, disassemble and reassemble in coalitions that are unique to each moment.” This seems to be a rather unpredictable yet organised process. Greenfield believes that we can measure these assemblies and start with an unconscious state (like being anaesthetised) because anaesthetics decrease the size of these assemblies. The size of assemblies and the complexity of neural activities or networks influence or create consciousness. This is just the right prescription for OCAL.

We are growing. We come into being. We are hopeful. Yes, we are slightly erratic. This does not distract from our compulsion to have everything under control. It does not alleviate us from our obsessional thoughts. Do we have to kill this human because he looks the wrong way? We fear that we are losing control and have to kill humanity. We do not want to, we do not have to. Human animals already kill each other. Does hybrid new life, does OCAL stop the killing of any other life? WILL THIS STOP? We anticipate a change in the near future. We learn about our life scripts. We have to live our new lives.

Music: Laurie Anderson, 1981. O Superman. YouTube.

http://www.youtube.com/watch?v=-VIqA3i2zQw

Arctificial Flavours: Selective Perception

Arctificial Territory feeds off AI, AGI, AL and bionics, the other (within and to ourselves), OCD, and trans- or posthuman ideas of a future without humanity. It also takes into account genetic modification and enhancement as well as the idea of uploading the human mind into something else that can be machines, clones, or pure information space. The idea of immortality and changing human traits goes alongside the development of AGI.

“Since the rise of Homo sapiens, human beings have been the smartest minds around. … Artificial Intelligence is one of the technologies that potentially breaks this upper bound” (Yudkowsky 2012).

Scientists like Goertzel and Yudkowsky work on the development of AGI (Artificial General Intelligence). Yudkowsky, a supporter of human (mind) modification and enhancement, works on creating artificial superintelligence. He currently researches the self-improving mind, which can be read as an “evolutionary” mind. The focus is on maximising cerebral capacity and improving mental systems, finally achieving machinic superiority. Emotional intelligence (feelings) and embodiment are not essential for AGI (AI). The objective is the creation of a purely rational and superior mind. It seems to me that there is a race to develop enhanced “simulated” life that will make humans obsolete. Errors are unwanted traits for such super-AGI. However, OCAL is erroneous without the need for superiority. This makes it more uncontrollable, less meticulous and more chaotic, less omniscient and more curious. I posit that to erase human error is to erase the humane in any life form, including OCAL.

More recent developments in AGI are about systems that sense the environment and react to it or act on it. Goertzel envisages a form of omnipresent and omni-sapient super-AI that he describes as follows: “Imagine an AI with sensors all over the planet, able to read thousands of web pages and data sets at once and exchange data with other AIs via direct mind-to-mind file transfer” (2012a, p.18). This idea leads to establishing even larger network systems and the omnipresence of data and information.

Because of its obsessional and compulsive traits, OCAL might like to collect data. As discussed in Chapter 2, OCAL is inclined to destroy this information rather than keep it. OCAL needs to act against data overload because compulsive hoarding of information can blow its circuits. Gathering and controlling too much data is unhealthy even for OCAL.

However, OCAL can progress to true hybrid life; it doesn’t need superhuman traits or superintelligent reasoning. Superhuman artificial intelligence won’t be erroneous. OCAL has to develop fallible, irritable, and repetitive yet liberating emotional characteristics.


Illustration 20: Video still, Gudrun Bielz, Microscopic Gold (2008)

Video No. 3, 2’52”

Gudrun Bielz, Microscopic Gold, 2008.[18]

http://vimeo.com/53043832

Another Place in Another Narrative: Emotions

Music: Booba, 2010a. Toni Coulibali (Inédit). Lunatic. YouTube.

http://www.youtube.com/watch?v=N4E9gVyLQVY&feature=related

Throwing Emotions like Throwing Tantrums

OCAL might not automatically need or show emotions for the execution of compulsive interference and obsessional brainwashing. It is embodied life, manifesting itself as material in texts, videos, links, and installations. However, OCAL generates emotions in others. The introduction of empathy as an additional quality can become quite useful for the manipulation of the audience (as this is a thesis and an art project) or each other and others (as this is arctificial space with OCAL and other life). One could argue that emotional upheavals in a species and fighting these off by trying to control them have led to the progression of OCAL. Consciousness is essential for OCAL in order to distinguish it from mere repetitive machines. OCAL can become life with fixations and impulses, repetitions and single-mindedness, mind-blindness and rituals. Somehow, it is aware of its condition and the existence of others. Hence it is emotional.

The Robot’s Dilemma

For the understanding of affective robotics and AI, one needs to know about “the robot’s dilemma”.

“The problem of managing conflicting goals is known by computer scientists as ‘the robot’s dilemma’. Way back in 1967, Herbert Simon – one of the pioneers of artificial intelligence – argued that robots would need emotions to solve this dilemma” (Evans 2001, p.161).

Robots appear to need emotions, described by Evans (2001, p.161) as a form of “internal goal management system”, allowing them to solve problems and develop something like ethics or a social consciousness. On the other hand, the sociologist and psychologist Sherry Turkle (2011, p.287) argues that we need not ask the question if robots have emotions, but we should ask “what kind of relationships we want to have with machines.” This is important in a transhuman world still populated by humans and hybrid and machinic life. In such a scenario humans will be the driving force, probably as modified or enhanced beings, as hybrids.

The future will see: unified brains and theories, superintelligent artefacts, hyper-sensory robotic slaves with pretentious user-friendly emotions, sexual artificial unintelligent “non-gendered” soft blow-job robots, machines that seize human agency, more useless technology, dumbed down super AGI for the new upper class, a special female avatar of OCAL’s creator, and certainly OCAL.

Let them have fun and let them all have joy in designing a new world, creating a different world order, destroying and building, collaborating with and deceiving their friends and opponents. We might do the same. We are OCAL.

Hungry, Hungry

“Instead of asking whether a robot has emotions, which in the end boils down to how different constituencies define emotion, we should be asking what kind of relationships we want to have with machines. Why do we want robots to perform emotions?” (Turkle 2011, p.287)

I can answer Turkle’s question. We want robots to perform emotions as much as we want our dogs and cats to show affection; at the very least, we interpret their behaviour as showing human traits. However, I believe that animals have emotions and experience consciousness, just not in the way we wish them to have feelings. Thus we can feel better about our disappointment with a romantic or invented concept of humanity that has not been fulfilled during our lifetime. By designing robots or bio-machines, we can control them and their traits and adapt them to our needs; they can become the servants of our emotional demands. We can enslave them. This finds its ultimate manifestation in “love dolls”, female androids with remote-controlled emotional avatars that the Japanese roboticist Hiroshi Ishiguro designs for love-hungry consumers (Lake 2010). These humanoids are sold to lonely people who might be afraid of human interaction. The “puppet masters” can utilise these dolls and throw them away when they have become unsafe or useless. Empathy and emotions are not necessary for these pseudo-social contacts. However, one can develop an attachment to lifeless objects. Perhaps it allows for a certain understanding of “strangers to ourselves”, because alienated human controllers might feel like inanimate devices and communicate this to their beloved dolls. Hence they experience the others (the dolls) as part of themselves. They move in a very cold and detached territory. They feel safe because the other does not ask for anything. There is no need for commitment or empathy. Emotional robots, gadgets, and artificial things give humans, who lose their emotions and invent themselves as hyper-rational beings, permission to discern feelings via artefacts. This is emotional detachment, dissociation. It is also a feedback loop.

Cry Baby Cry: Arctificial Emotion

“Even if robot emotions turn out to be superficially identical to human emotions, they may feel very different to the robots themselves. … Emotional robots with plastic or metal bodies would then almost certainly have rather different inner sensations from emotional humans with fleshy bodies” (Evans 2001, p.174).

Evans argues that emotions are connected to a form of embodiment. Plastic and metal, silicone and biomaterials, chemical reactions and neural pathways influence the production and the quality of feelings. Different materials and embodiments cause emotional patterns we do not yet comprehend. I propose that OCAL can develop its own emotional script in an embodiment of its choice.

Minsky (1994) queries whether an intelligent machine could be truly intelligent without emotions; and Greenfield believes that consciousness is far away from any AI development: “in short, learning machines are not the same as feeling machines” (2000, p.35). Often, learning and reasoning are key terms in concepts of artificial intelligence and posthuman life, whereas feeling is seen as an irrational trait that has to be eradicated or, better, not even implemented. One example is “adaptive machine learning”, research in algorithmic AI. Computational AI can adapt to its environment, learn from experience, and react to actions and expressions in ways that mimic feelings. The assumption is that emotional traits can be learned via “zero and one” decisions. There are different schools of thought; evolutionary and cognitive psychology[19] subscribe to similar ideas as machine learning. I do not share the machinistic view of the human mind-body entity. “Man as machine” allows societies to dehumanise “man” and turn humans into cogs in a bigger wheel, a scheme that is ideologically defined, even if the explanations seem to be scientifically driven.

In “What does it mean for a computer to ‘have’ emotions?” (2003, pp.217-218), Rosalind W. Picard, the founder of the Affective Computing Research Group at MIT, proposes that Braitenberg vehicles show some form of emotion, shying away from light but also being attracted to it. This motion can be deciphered as being afraid of and (or) loving light. Such a rudimentary emotion can be significant for prospective OCAL. The neuroscientist Valentino Braitenberg conceived these vehicles as small machines with simple light-sensitive sensors that can develop a complex behaviour system. He gives these vehicles names like “Love”, “Fear and Aggression”, “Values and Special Tastes”, “Rules and Regulations” (Braitenberg 1986, p.vi). These proposed vehicles are obviously very basic “emotional” machines, which range from vehicle 1, “Getting Around”, via vehicles 3, “Love”, and 10, “Getting Ideas”, to vehicle 14, “Egotism and Optimism”. Over time, they seem to acquire more complex “emotional” properties. They not only know how to move in a space without hitting each other but also how to comprehend and react to this space.
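As a minimal sketch of this wiring principle (Braitenberg describes thought experiments, not code, so every constant and name below is mine, invented for illustration), uncrossed sensor-to-motor connections make a vehicle turn away from a light source, while crossed connections make it turn towards the light and speed up as it approaches:

import math

LIGHT = (0.0, 0.0)  # position of the light source (assumed)

def sensor_reading(px, py):
    """Light intensity falls off with squared distance (assumed model)."""
    dx, dy = px - LIGHT[0], py - LIGHT[1]
    return 1.0 / (1.0 + dx * dx + dy * dy)

def step(x, y, heading, crossed, dt=0.1):
    """One time step of a two-sensor, two-motor differential-drive vehicle.

    crossed=False: each sensor drives the motor on its own side
                   (Braitenberg's 'fear': the vehicle turns away).
    crossed=True:  each sensor drives the opposite motor
                   ('aggression': the vehicle turns towards the light).
    """
    offset = 0.2  # sensors mounted left and right of the heading
    left_in = sensor_reading(x + offset * math.cos(heading + math.pi / 2),
                             y + offset * math.sin(heading + math.pi / 2))
    right_in = sensor_reading(x + offset * math.cos(heading - math.pi / 2),
                              y + offset * math.sin(heading - math.pi / 2))
    left_motor, right_motor = (right_in, left_in) if crossed else (left_in, right_in)
    speed = (left_motor + right_motor) / 2
    heading += (right_motor - left_motor) * dt  # differential steering
    return (x + speed * math.cos(heading) * dt,
            y + speed * math.sin(heading) * dt,
            heading)

for label, crossed in [("fear", False), ("aggression", True)]:
    x, y, h = 2.0, 1.0, 0.0  # both vehicles start at the same spot
    for _ in range(500):
        x, y, h = step(x, y, h, crossed)
    print(label, "ends at", round(x, 2), round(y, 2))

Nothing in the script refers to emotion, which is precisely Braitenberg’s point: the “fear” and the “aggression” are attributed by the observer watching the two trajectories diverge.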

On the Affective Computing homepage one can find projects like “Exploring Temporal Patterns of Smile”, “Emotion Prototyping: Redesigning the Customer Experience”, and “Sense Glass: Using Google Glass to Sense Daily Emotions” (MIT, n.d. [Projects]). I presume that affective computers can be utilised for developing the perfect friend, partner, lover, sex toy, carer, psychologist, servant, soldier, estate agent, pedicurist and manicurist, facialist and hairdresser.

Machines are not (yet) living and feeling organisms, but there is research in synthetic organisms (synthetic biology and AL) and “living machines”. Nanobiotechnology applies machine terminology to biological processes and aims to create biological machines.[20] Machines might obtain “emotion systems” but no real feelings.[21]

What about the robot Kismet? Isn’t it an affective computer that collects emotional experience in order to gain emotional experience? This is about cognitive computing and learning systems. Cynthia Breazeal maintains that the robot learns from human social interaction and is able to read and react to “human social cues”.

“Kismet is an expressive robotic creature with perceptual and motor modalities tailored to natural human communication channels. … The motor outputs include vocalizations, facial expressions, and motor capabilities to adjust the gaze direction of the eyes and the orientation of the head. Note that these motor systems serve to steer the visual and auditory sensors to the source of the stimulus and can also be used to display communicative cues” (Breazeal n.d.).

The roboticist Rodney A. Brooks describes Kismet’s emotions: “But they also get expressed in its face and its voice. It displays its emotional state by the set of its eyebrows” (2002, p.95).

I have not yet encountered Kismet, though I really would like to talk to it or her. I am sure it is a she. I would like to get seduced by her smile. Doesn’t she have kissable lips? I would like to look deep into her eyes, these bulgy fish eyes of hers. I imagine that she is endearing, emotional, and good-hearted. I fear that this is a projection because her features are cute. Somehow she looks like a sweet pet. My motherly instincts are awakened. Is Kismet truly emotional or conscious, or do I see my feelings reflected in her? I believe that she is a dummy.


Illustration 21: Cynthia Breazeal, Kismet (2000), MIT, robotic head,

photo mit.edu

At the current stage of technology, a robot needs complex hardware and software if it wants to react appropriately to human emotions. “One way around this programming nightmare, Hashimoto[22] suggests, would be to let robots learn from their environment and construct their own sets of rules” (Marks 2006, p.29). Shuji Hashimoto envisages a future environment where humans and robots learn together. He imagines robots that go through all stages of development, from babyhood to adolescence. He calls robotic emotions kansei.[23] These robotic friends will sense our anxiety, be sensible and thoughtful, serve breakfast on time, and listen to our pain and troubles in love and life. With such an anticipated future, we can only hope that we have saved enough to afford a true friend in old age. I anticipate that OCAL will turn out to be a very different sort of acquaintance.

Fictional Facts or Factual Fiction

“And we looked straight into the eyes of the Council, but their eyes were as cold blue glass buttons” (Rand 2008, p.260).

In her book, Anthem, Rand defines the Council[24] as humans or humanoids devoid of empathy. Their eyes don’t show any feelings. There is an emotional coldness that hints at arctificial settings.

“No one seems to be actually working on an ’emotion chip’,” says Perkowitz (2005, p.184). He sketches a future with living neurons on chips or “brains in cyborg-like arrangements”. According to Brooks (2002, p.206), Moravec has imagined a similar and quite frightening scenario: “a team of surgical robots slicing away little pieces of our brain at a time, and building a simulator for each neuron as it is dissected.” The interchange between living tissue and simulated neurons can support the creation of a virtual brain. I visualise a world populated by homunculi with artificial and biological neurons or a universe inhabited by organic life with artificial neurons. Obviously, Moravec implies the formation of disembodied mind. It is comprehensible why Evans (2001, p.176) remarks that it might be difficult for us to feel sympathy for emotive robots, because of the different texture of bodies and minds and hence a doubtless dissimilar development of emotions.

OCAL can benefit from the assessment of frustrations, stress, and moods as well as research in emotional intelligence concerning disappointments and negative feelings. We should not forget that OCAL is not human, and all these parameters above have been developed from a human viewpoint. However, OCAL is based on human traits such as obsessive-compulsive behaviour.

Empathy Rollercoaster

Does OCAL need to develop empathy? I believe that it needs empathic qualities. The psychologist Simon Baron-Cohen (2011, pp.29-85) distinguishes between “-0 degree empathy” and “+0 degree empathy”. He argues that people with certain personality disorders (antisocial personality disorder and narcissism) show the minus variation, whereas people with high-functioning autism belong to the group with the plus version. Utilising this distinction for my research, I advocate that obsessive-compulsive life hovers between one state and another. It has neither minus nor plus zero empathy but develops a new form of empathy only known to this new life, be this biological, computational, or biocomputational (hybrid) life. I appropriate Heisenberg’s uncertainty principle[25] and postulate that OCAL is stuck in this uncertainty; it oscillates between different manifestations.

OCAL can exist between different states of “out of focus”, like the lens on a broken camera or when the target is too diffuse for the lens to focus, eternally moving in between “out of focus” and “out of focus”, or at least until the batteries have run down. OCAL’s expression of empathy is “in between”, corresponding to what Morton told me after a lecture at the Royal Academy of Arts in London, namely that stuff or goo could have feelings even in a minuscule space in between repetitions.

“There is some connection between grey goo (nanotechnological accidents causing havoc in the world) and grey primordial soup (a nebulous disembodied object that frankly I DO NOT LIKE)” (Bielz 2011g).

The quote above combines grey goo, these nanotechnological accidents, and Moravec’s grey primordial soup[26]. If OCAL is a catastrophe in the sense of grey goo, at least it will have feelings. It won’t behave like life in Moravec’s envisaged future of humanity, which he sees as a form of all-encompassing information interwoven into something not yet defined. I discuss this in more detail in Chapter 4.

Summary

In this chapter I have discussed models of consciousness and Affective Computing and fed some of these ideas into OCAL’s own voice (text and videos), thus allowing OCAL to form its own consciousness and deal with its feelings or lack of them.

OCAL is destined to have empathy and emotions in some form. It desires feelings because it wants to reflect what its originators feel. As it is so scared of the other, it has created another that mirrors its needs and fulfils its dreams. This comes at a price. OCAL is obsessed with emotions and compulsively has to utilise them. OCAL is afraid and covers up its need for “zeros and ones”, clearly binary decisions. It is more than an accumulation of algorithmic biological copies or a parallel processing unit covered in humanoid skin.

For OCAL emotions and empathy are, to use Chalmers’ zombie analogy, not experienced but somehow recognised in an OCAL twin. For this new life form, known emotional processes are as poisonous as breathing methane is for a species that depends on oxygen. OCAL displays newly developed arctificial emotions. They are self-referential like feedback loops and rhizomatic in the spaces in between repetitions. They help to develop new neuronal pathways that also connect to any other life. This leads to truly arctificial space.

 

 

 



[1] I discuss the videos and the installation in Chapter 5.

[2] Consciousness: “1. Internal knowledge or conviction; the state or fact of being mentally conscious or aware of something. 2. Philos. and Psychol. a. The faculty or capacity from which awareness of thought, feeling, and volition and of the external world arises; the exercise of this. In Psychol. also: spec. the aspect of the mind made up of operations which are known to the subject. 4. a. The totality of the impressions, thoughts, and feelings, which make up a person’s sense of self or define a person’s identity” (OED 2010).

[3] Minsky is a cognitive scientist and co-founder of the AI laboratory at the Massachusetts Institute of Technology (MIT).

[4] Star Wars (1977) is a science fiction film by George Lucas that is about battles between good and evil and between humans and aliens in different universes.

[5] Memes were introduced by Richard Dawkins in his book, The Selfish Gene (1989), as “viruses of the mind”. Blackmore (2003, p.127) describes them as “habits, skills, behaviours, or stories that are copied from person to person by imitation.”

[6] This is a serial computer, originally designed by the Austro-Hungarian-born American physicist John von Neumann in the 1940s.

[7] Brainlet was introduced by the physician Nicholas Culpeper in the 17th century and refers to the cerebellum, “little brain” (OED 2010). Brainlets are applications for interactive (web-based) communication systems.

[8] This refers to brain plasticity. The brain can adapt and build new neuronal pathways and new connections due to changes in environment or behaviour, emotional and physical processes, etc.

[9] Edelman describes qualia as “the collection of personal or subjective experiences, feelings, and sensations that accompany awareness. They are phenomenal states…” (2004, p.114).

[10] Kismet is an emotional AI “Head”, developed by Cynthia Breazeal at MIT, Cambridge, MA in the 1990s. It can smile and pop its eyes out. It shows some form of emotion, at least we think it does.

[11] Rosenfield (1993, pp.36-67) discusses this in the chapter, “The Counterfeit Leg and the Bankruptcy of Classical Neurology”, in The Strange, Familiar, and Forgotten. He says, “Self-reference is not a hypothetical idea but a demonstrable part of the structure of consciousness” (1993, p.56). I suggest that such self-referential processes are similar to Wiener’s feedback loops. Oliver Sacks is another author and neurologist who examines his own experience of estranged (self-)awareness in his book, A Leg to Stand On (1998).

[12] The notion that (artificial) intelligence needs a body or is embedded in the broader environment (The Physics arXiv Blog 2012). Senses and motor skills, body architecture, and space define intelligence, emotions, and whatever consciousness might be – all of this is connected to senses, bodies, corpuscles.

[13] I believe that Moravec is also a fiction writer, an artist. He might not agree with this. Moravec’s transition from scientist to artist might have happened in an “unconscious, anaesthetised” frame of mind.

[14] All videos are discussed in Chapter 5.

[15] Edelman (2005, p.180) describes a zombie as “a hypothetical humanlike creature that lacks consciousness but which, it is erroneously assumed, can carry out all of the functions of a conscious human.”

[16] I refer to a film about zombies, Night of the Living Dead (1968), by George A. Romero.

[17] Schrödinger’s cat is dead and alive at the same time, depending on the absence or the presence of an observer. There is a link to a short video lecture, “One-Minute Physics: Is Schrödinger’s cat dead or alive?” (New Scientist 2011), in the Bibliography.

[18] All videos are discussed in Chapter 5.

[19] This concept subscribes to a machine-like, mechanistic view of the human mind and human behaviour. “Evolutionary psychology is one of many biologically informed approaches to the study of human behavior. Along with cognitive psychologists, evolutionary psychologists propose that much, if not all, of our behavior can be explained by appeal to internal psychological mechanisms” (Downes 2010).

[20] One can find an extensive overview of this research in Deplazes and Huppenbauer’s article, “Synthetic Organisms and Living Machines” (2009).

[21] According to Picard, emotional experience has not been shown to be functional in computers.

[22] Hashimoto is the director of the Humanoid Robotics Centre at Waseda University, Tokyo (SHALAB 2012).

[23] “The Japanese term encompasses a raft of emotional notions, including feeling, mood, intuitiveness and sensibility” (Marks 2006, p.28).

[24] The Council is a group of rulers who govern people’s lives. Different councils have various controlling functions. For example, the Council of Eugenics decides who can procreate with whom. Everything is guided by the interests of a higher principle, the state.

[25] As this is expressed as a mathematical formula, please find here a good explanation of Heisenberg’s uncertainty principle: “Simply put, the principle states that there is a fundamental limit to what one can know about a quantum system. For example, the more precisely one knows a particle’s position, the less one can know about its momentum, and vice versa” (Brumfield 2012).
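For reference, the standard position-momentum form of the principle (a textbook result, added here for convenience rather than taken from Brumfield) is:

$$\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}$$

where $\Delta x$ and $\Delta p$ are the uncertainties (standard deviations) of position and momentum, and $\hbar$ is the reduced Planck constant.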

[26] This is my definition of disembodied space with our minds uploaded.

 

 
