
The Computer and the Consciousness

January 11, 2006

Here is a recent paper of mine, describing how a computer might soon have a consciousness equivalent to, or even surpassing, human consciousness: a bit of AI theory mingled with a touch of neuroscience.

When I got the paper back from my philosophy instructor, it had a perfect score and hardly any marks. I balked. (I'm one of those self-critical perfectionist types; it couldn't have been 100% without an editor!) When I approached him about it, he told me it was one of the best arguments he had heard on the subject and that he couldn't find a flaw in it, although he admitted he couldn't agree with me.

Perhaps he didn't want to agree… the idea of an intelligent computer scares the crap out of some folks. I believe these sorts of fears share a common basis with fears of biotechnology, linking the two subjects. That, of course, is another issue for another day.

The Computer and the Consciousness


By Karmen Lee Franklin


One of the most enduring goals of scholars has been to grasp the seat of consciousness. The ancient Egyptians believed such a force was located in the heart. This made good sense; the heart, suspended in the center of the body, pumped life-giving blood to every limb and tissue. The lump of gray goo in the head, on the other hand, was discarded during mummification rituals and considered essentially useless. Today, researchers busily scan and study that "useless" lump of goo. Rather than use the knowledge for funerary rituals, however, they have nobler goals (at least by modern standards), such as explaining the debilitating effects of diseases like Alzheimer's and Parkinson's or, as discussed here, reproducing consciousness in a machine. Is it possible to replicate a force as complex and elusive as the human mind? Is a computer as unlikely to have a mind as a mummy is to walk away from a pyramid tomb? Or has technology evolved to a level where consciousness can be mapped and artificial intelligence (AI) is inevitable, as recent advances in neurobiology suggest?

While the distinctions between the body and the mind have been argued over for centuries, the debate gained considerable force with the dawn of the digital age. The uniqueness of the human mind could no longer be explained as simply "a center of reason" when, suddenly, a machine could reason (by performing arithmetic or winning a chess game) at a rate surpassing even an above-average human. This became especially evident when the IBM computer Deep Blue defeated the world champion Garry Kasparov in a famous chess match (Lawhead 233). Philosophers began to name other criteria for defining the mind.

Traditionally, physicalists, on one side of the debate, held strictly that the mind is an objective, determined product of the brain as it interacts with the environment. They often doubted a machine could match the mechanical sophistication of the human mind in areas such as language, reason, and emotion. Dualists, on the other hand, believed the brain and the conscious mind to be distinct and separate entities. They argued that a computer might potentially simulate the analytical brain, but that it could never have empathy or be creative, let alone be "spiritually" self-aware like the intangible human mind.

In the late 20th century, a new view arose. Seeking a compromise that could yield progress toward AI, a new breed of philosophers shifted the focus to the unity of interconnected parts. This view, called functionalism, suggested the essence of the human mind lay in its complexity rather than in any one individual aspect. Jerry Fodor described this view by saying, "in the functionalist view the psychology of a system depends not on the stuff it is made of (living cells, mental or spiritual energy) but on how the stuff is put together" (Fodor 273). Functionalists believed intelligence to have multiple realizability; in other words, intelligence might take any number of forms. Wine can be found in a bottle, a flute, a flask, a chalice, or even a box lined with Mylar, yet it is still wine; likewise, any vessel may conceivably harbor a mind.

Has technology advanced to a level at which the entire human mind, rich in complex aspects, can be explained and defined? Some philosophers believe we are close to such answers, and, as a result, close to achieving AI. Marvin Minsky is one of the hopeful. "In years to come," he writes, "…we will come to think of learning and thinking and understanding not as mysterious, single, special processes, but as entire worlds of ways to represent and transform ideas. In turn, those new ideas will suggest new machine architectures, and they in turn will further change our ideas about ideas" (Minsky 243). In the tradition of the functionalists, Minsky and his colleagues believe a sufficiently complex computer would be considered to have a mind. This is referred to as the strong AI thesis.

The opponents of strong AI argue that even a complex computer is only a simulation, as John Searle illustrated with his example of the Chinese room (Lawhead 243). Imagine a person locked in a room, given cards containing Chinese ideograms and asked to write out a response. Having no previous exposure to the Chinese language, he is forced to rely on a set of instructions for composing sentences in the correct syntax. Eventually, he is able to produce cards perfectly understandable to anyone who reads Chinese. The trouble is, as Searle shows, the man in the room does not understand what he has written. He is only mimicking the process of language rather than using it. Similarly, a computer could be capable of processing language well enough to fool a human speaker of that language (a criterion referred to as the Turing test) and yet still lack a human level of understanding.
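Searle's scenario is easy to make concrete in a program. The minimal Python sketch below is a toy illustration only: the rulebook, its symbol strings, and the fallback reply are all invented placeholders standing in for Chinese ideograms, not anything from Searle or the actual language. The point it demonstrates is that replies come from pure rule lookup, with no representation of meaning anywhere in the system.

    # Toy illustration of the Chinese room: responses are produced by pure
    # symbol lookup. Nothing in this program "knows" what any symbol means.
    # All symbol names here are invented placeholders, not real Chinese.

    RULEBOOK = {
        "symbol-greeting": "symbol-greeting-reply",
        "symbol-thanks": "symbol-thanks-reply",
    }

    def chinese_room(card: str) -> str:
        """Follow the rulebook for an incoming card; emit a fixed
        fallback symbol when no rule matches."""
        return RULEBOOK.get(card, "symbol-fallback")

    if __name__ == "__main__":
        for card in ["symbol-greeting", "symbol-thanks", "symbol-unknown"]:
            print(card, "->", chinese_room(card))

However elaborate the rulebook grows, the program's relation to its symbols stays the same, which is exactly Searle's point about syntax without semantics.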

Searle's argument seems convincing, but it applies only to relatively simple levels of computation. In the Chinese room example, information is processed in the form of symbols and rules, but those symbols are never tied to any sort of perception, as they are at the human level. If the man in the Chinese room were given a form of sensory perception, such as a window through which he could see and hear actual Chinese speakers translating his messages, he might become capable of understanding them. Potentially, a complex computer could perform as well. In a similar sense, a computer equipped with sensory perception, processing skills, and the proper tools or mediums could possibly create art or music that some human observers would find aesthetically pleasing.

Many of these abilities, such as those influencing creativity, once thought to be incorporeal, have been identified as activity in certain groups of neurons in distinct regions of the brain. Emotions, for instance, begin as neural signals in the frontal lobe, located behind the forehead. (See Figure 1.) From the frontal lobe, they are sent to the hypothalamus at the base of the brain, which triggers chemical reactions throughout the body. Visual perception, on the other hand, occurs at the back of the brain, in the occipital lobe (Grubin 1).

Can these sorts of processes explain such ethereal concepts as consciousness? John Searle has been skeptical. In his book Mind, Language, and Society, he suggests that once the chain of reactions in the mind for an event has been explained, there is an "irreducible subjective element" left over: consciousness. For instance, he contrasts consciousness with metabolism. "Once you have told the entire story about the enzymes, the rennin, the breakdown of carbohydrates, and so on, there is nothing more to say. There isn't any further property of digestion than that…. But in consciousness, the situation seems to be different," he explains (Searle 55). (He does make the caveat that each process can be reduced to atoms and quarks, however.) He summarizes this view, saying, "The subjectivity of consciousness makes it irreducible to third-person phenomena, according to the standard models of scientific reduction" (Searle 55).

[Figure 1. Regions of the brain (see Sources: DeCarlo et al.; modified and labeled)]

It seems Searle felt it would be impossible to scientifically pinpoint a sense of awareness in an observable manner. If so, an article in the November 2005 issue of Scientific American may have him eating his words. In the article, "The Neurobiology of the Self," science writer Carl Zimmer describes how scientists recently identified the sections of the brain responsible for the sense of self. Essentially, the anterior insula, near the center of the brain, activates when a person is actively thinking of themselves, such as when seeing a picture of their own face (Zimmer, "Neurobiology" 98). These signals are sent to the medial prefrontal cortex, near the forehead. There, they are combined with autobiographical memories retrieved from the precuneus, a region on the inner surface of the parietal lobe, where the left and right hemispheres face each other. Together, this network defines consciousness, independently of the networks used for memories and thoughts about the external world. Zimmer believes it is one of the most distinctively human traits yet discovered. "Humans have evolved a sense of self that is unparalleled in its complexity," he writes (ibid.).

Compared with the evolution of life, our technology has barely crawled from the ocean onto land. (See Figure 2.) Yet the evolution of thinking machines has at times, like ecology, undergone dramatic explosions, so that a computer with a mind now seems not merely possible but inevitable. The hurdles in front of AI, once impossibly high, are being removed one by one. Neurobiological research has led to the development of many drugs, ranging from those that can treat disorders such as depression or Alzheimer's to those that can increase logical abilities and raise IQ scores (Gazzaniga 33). Next, scientific and philosophical inquiry may finally provide a functional model that can be used to synthesize an artificial yet complex form of intelligence, encompassing reason, creativity, language, and a sense of self. It may not be long before technology rivals the collective abilities of humanity. Then, perhaps, those machines will question the potential for artificial versions of themselves.

Figure 2. The Evolution of Ecology & Computing

Ecology: Chemicals necessary for life present (≈4.4 BYA)
Computing: Creatures capable of building computers present (≈15,000 years ago)

Ecology: Amino acids formed in oceans (3.8-3.5 BYA)
Computing: Abacus formed in Mesopotamia and China (3000-2400 BCE)

Ecology: First multi-celled organisms (2.7-1.8 BYA)
Computing: First mechanical adding machine (1623 AD)

Ecology: Life diversifies in the Cambrian explosion: limbs, skeletons, etc. (≈535 MYA)
Computing: Machines diversify in the digital age: calculators, transistors, computers (1940s)

Ecology: Complex animals move onto land and cover the earth (450-360 MYA)
Computing: Computers move into homes and offices and cover the earth (1980s-1990s)

Ecology: Land animals complex enough to use tools and language, be self-aware, and disrupt the ecosystem appear, like humans (≈15,000 years ago)
Computing: Computers complex enough to use tools and language, be self-aware, and disrupt their world (and humans) appear (?)

Sources

"3-D Brain Anatomy." The Secret Life of the Brain. Dir. David Grubin. PBS, February 2002. <http://www.pbs.org/wnet/brain/3d/>
DeCarlo, Finkelstein, Rusinkiewicz, and Santella. Suggestive Contour Gallery. Princeton, 2005. (Source for Figure 1; modified and labeled.) <http://www.cs.princeton.edu/gfx/proj/sugcon/models/>
Fodor, Jerry. Quoted in The Philosophical Journey. Ed. William F. Lawhead. New York, NY: McGraw-Hill, 2006. 236-237.
Gazzaniga, Michael S. "Smarter on Drugs." Scientific American Mind 16.3 (2005): 33-35.
"History of Computing." The Great Idea Finder. 2005. (Source for Figure 2.) <http://www.ideafinder.com/features/smallstep/computing.htm>
Lawhead, William F. The Philosophical Journey. New York, NY: McGraw-Hill, 2006. 241-249.
Minsky, Marvin. Quoted in The Philosophical Journey. Ed. William F. Lawhead. New York, NY: McGraw-Hill, 2006. 242-243.
Searle, John R. Mind, Language, and Society. New York, NY: Basic Books, 1998. 55-56.
Zimmer, Carl. "The Neurobiology of the Self." Scientific American, November 2005. 92-101.
Zimmer, Carl. Evolution: The Triumph of an Idea. New York, NY: HarperCollins, 2001. 70-71. (Source for Figure 2.)