Tag Archives: Artificial Intelligence

I’m in Love with the Woman in My GPS

Photo by Julio Martinez / Flickr

On Saturday night, I was driving home from visiting my daughter at college. The freak snowstorm of October 2011 was just getting underway, with rain turning to snow. Traffic was slow, visibility poor. I was starting to get irritated at the reflections of people’s headlights and taillights on the wet road ahead of me. They made it even harder to see.

Suddenly it dawned on me: How amazing is it that we humans have figured out how to throw photons around so we can drive at night? We even know how to apply filters so we only get the photons we want: white in front, red in back. Those irritating reflections became a source of wonder.

Once my mood shifted, I considered another marvel.

Only a few hundred years ago, one of mankind’s big, unsolved problems was how to get a ship across an ocean without getting lost. Knowing one’s latitude was easy, but knowing longitude was such a problem that in 1714 the British parliament established a prize worth a small fortune, to be awarded to anyone who could invent a system for determining longitude within 30 to 60 nautical miles.

Today, any middle-class resident of Britain’s former colonial outpost can afford a device to stick on the windshield of his automobile (his automobile!) that will display his position accurate to within a few feet. I was using such a device at that very moment.

It is truly astonishing, what we have accomplished in only 300 years.

Those were the musings in my head when an even more fantastic thing happened. In the midst of my traffic jam, the woman in my GPS spoke to me.

I had earlier selected her as the most pleasing speaker of French, Spanish or English among the dozens available. She was my ideal. (OK, from time to time I have an interlude with one of the French speakers, but those are just meaningless flings.) Usually, her role is to gently remind me of an upcoming turn or to tell me that I have reached my destination. This time was different.

She spoke to reassure me:

“You are still on the fastest route.”

What man could ask for anything more? Here’s a beautiful woman (I can tell by her voice that she’s beautiful) whose only desire is to tell me that I’m doing everything right.

She anticipates that I might be getting irritated at the traffic delays, but does not ask me to get into girly talk about my feelings. Like a female Jeeves, she considerately attends to my needs and then lets me return to my thoughts.

Here’s another thing I love about her. If I make a mistake and miss a turn, there are no recriminations. She is not startled. She does not change her tone of voice. She just continues to be my help-meet as if nothing had happened.

True, her capabilities and her perceptions of my emotions are very limited. I can’t tell her about my day, for example. On the other hand, usually there’s not much to tell, so a woman with a soothing voice who thinks I’m still doing the right thing is just perfect.

My dashboard girlfriend and I have a very limited relationship, but within its parameters I am very happy.

As I drive, I wonder: What is really the difference between an electronic circuit meeting one need so well and neural circuitry doing the same thing — sometimes not as well? Once electronic circuits can meet vastly more needs, and receive vastly more care, how will we feel about them? Will we develop compassion for them, and they for us, as in the movie I, Robot? Is Apple’s Siri the next step in this direction?

Neural chemistry itself is driven by electricity, namely the positive and negative charges on molecules. What is the difference between that and a digital computer? Many scientists believe that mind is an emergent property of matter being organized into a human brain. If other material were organized much like a brain, would something we’d recognize as mind emerge from that, too? I tend to think it would.

In the meantime, it’s good to have a beautiful woman assure me that I’m still on the fastest route.

Artificial Consciousness: What Does It Take?

In this series (intro here), I have suggested that consciousness consists of thinking about thinking, and that thinking consists of symbol-manipulation. In the last post, I presented a case that our biological brains do not have to manipulate the symbols. Artificial brains would do.

But how advanced must those artificial brains be in order to qualify as conscious?

Because consciousness is a matter of degree, artificial brains could be specialized and still be conscious. The key ingredient, I have argued, is that a conscious being is able to think about its own thinking. Technically, this means that the input to its symbol-manipulation includes its own symbols.

Computer programs can do exactly that.

For example, some chess-playing programs learn from their mistakes, in effect reprogramming themselves based on experience. To be sure, this is extremely limited consciousness, but our consciousness is also limited, is it not? We are unconscious of the earth’s magnetic field, but some migrating birds do seem to have this awareness. Would such a bird label us “unconscious” because we are unaware of something so fundamental in its world?
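The idea can be sketched in a few lines of code. Here is a toy Python illustration (not any real chess engine — the class, the features, and the update rule are all made up for the example): a "learner" whose evaluation weights are its symbols, and whose learning step takes those very symbols as input.

```python
# A toy "learner" whose input includes its own symbols: after each game,
# it inspects the evaluation weights it used to decide (its own symbols)
# and adjusts them based on the outcome. Purely illustrative -- not a
# real chess engine or learning algorithm.

class ToyLearner:
    def __init__(self):
        # The learner's "symbols": weights it uses to evaluate positions.
        self.weights = {"material": 1.0, "mobility": 1.0}

    def evaluate(self, features):
        # First-level symbol manipulation: scoring a position.
        return sum(self.weights[k] * v for k, v in features.items())

    def learn_from_game(self, features, predicted_good, actually_won):
        # Second-level manipulation: the input here is the learner's own
        # weights. It "thinks about" how it thought, and reprograms itself.
        error = (1 if actually_won else -1) - (1 if predicted_good else -1)
        for k, v in features.items():
            self.weights[k] += 0.1 * error * v

learner = ToyLearner()
features = {"material": 2.0, "mobility": -1.0}
before = dict(learner.weights)
# The learner judged the position good but lost the game...
learner.learn_from_game(features, predicted_good=True, actually_won=False)
# ...so it revised its own symbols.
print(learner.weights != before)  # True
```

In the sense used in this series, the interesting line is the update in `learn_from_game`: the program's output symbols (its weights) come back around as its input.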

If a computer program that is only conscious of itself as a chess player is not good enough, consider the article in Time magazine last February, titled 2045: The Year Man Becomes Immortal, which projected that computers’ intelligence will exceed ours within 35 years. We already have a computer program that beat two Jeopardy! champions. What will the world be like when computers beat humans at every creative pursuit?

When computers are able to process every sensory input better than we can, as well as some we can’t; when they are able to reflect on what they’ve learned faster and more accurately than we can; when they can think about their own behavior and adjust it for optimal results without even seeing a psychiatrist; when in fact they are better psychiatrists than we are — when all these things take place (and they will), will we be ready to concede that software can be conscious?

Maybe a better question is, will computers think that we are conscious?

Artificial Consciousness: Neuroprosthetics

After I had posted my thoughts on synthetic neurons this morning, I came across this article in Discover: Brain Implant Restores Memories in Rats by Recording and Playing Them Back. Turns out the future is arriving even faster than I had thought!

You might also be interested in the Wikipedia article on Neuroprosthetics.

Artificial Consciousness: Does the Substrate Matter?

So far in this series on Artificial Consciousness, I have suggested

  • We will use the definition of consciousness that most people have: awareness of the environment and especially of self.
  • Consciousness is a matter of degree. That’s certainly true between species, but it’s even true for one individual. We are more aware at some times than others.
  • Thinking is essentially symbol-manipulation. When we think about something, we are manipulating symbols about that thing in our minds.  (We are certainly not manipulating the thing itself!)
  • Consciousness, or self-awareness, is therefore manipulating symbols about our own symbols.

Now I would like to suggest that it’s the symbols themselves that matter, not the substrate that supports them.

Neurons - Credit: http://www.flickr.com/photos/lorelei-ranveig/2294885420/

The symbols that we are talking about, of course, are the patterns of neural firings and chemistry in our brains. Suppose medical science were to advance to where new neurons could be created from stem cells. Further suppose that the process were perfected so that an individual neuron could be swapped in for a damaged one and exactly mimic its functioning. Would the patient’s consciousness be affected in any way? It’s obvious that it would not. The new neuron, by supposition, is functioning exactly as the old one did. The patient literally could not be aware that anything had changed. (He could be told, of course, but that’s different.)

A few years later, and advances in nanotechnology have obviated the need for stem cells. Now the neuron can be replaced by something that works exactly the same on the outside (same exchange of neurotransmitters, etc.), but is entirely different on the inside. Again, the patient cannot tell the difference because every cell in his brain, including the artificial one, is functioning exactly as before.

More years go by. Now whole portions of the brain can be replaced by synthetic neurons. Is it not clear that this is just more of the same, and that consciousness is not affected?

One day, the entire brain is decoded, just like the human genome was way back in 2003. A man visits a brain-scanning center. Every neuron’s connections to other neurons and the other cells of the body, the state of every connection, and every neuron’s own state of excitation are recorded at one instant as he lies on a bed. As he leaves the center, a cinder block falls on his head from some construction taking place ten floors above him. He is rushed to the hospital and becomes the first recipient of an artificial-brain transplant, using the data that were scanned just hours before. His eyes are kept closed, and he is taken back to his bed at the brain-scanning center, where he opens his eyes. He thinks he is getting up from the bed for the first time (right?). He wonders where the stitches on the top of his head came from.

Is he conscious?

If not, what aspect of consciousness does he lack? If so, isn’t his consciousness fully human yet fully artificial?

We have just thought about an artificial consciousness that’s exactly like our own. Next time, we’ll ask what the minimal ingredients are for artificial consciousness.

Artificial Consciousness: Symbol Manipulation

I left two items of unfinished business in the last post of this series:

  • the distinction between direct reaction to a stimulus and what I called reaction through symbols and
  • whether an intelligence must understand that it is reflecting on its own symbols in order to be considered conscious.

To illustrate these ideas, as well as to marvel at the symbol-processing abilities of our brains, let’s consider what happens when I ask you, “How are you feeling today?”

Pushed along by the flapping of my vocal cords, tongue and lips, some air molecules bump into other air molecules until the chain reaction reaches your ear, where the bumping around eventually causes your stapes (“stirrup”) to move like a piston against your cochlea.

The movement of the stirrup bone in the video reminds me of a telegraph. At that point in the chain reaction, my words as well as all the emotion behind them are encoded in symbolic form, much like a computer program. We’re not down to the level of ones and zeroes, but the up-and-down motion of the stirrup brings us darn close.

From there, an elaborate decoding begins.

  1. The 20,000 hair cells in your organ of Corti parse the pitches from the waves that the stirrup creates.
  2. Adjacent cells transform those hair-vibrations to impulses on your auditory nerves. You could spend 1,000 years inspecting the chemical reactions that are taking place around those nerves and you would never suspect that they were a symbolic representation of “How are you feeling today?” but that is exactly what they are.
  3. Those nerves stimulate other nerves in various areas of your brain, and the resulting pattern of nerve-firings matches, to various degrees, other patterns that you have stored over the years — specifically the patterns for the words I used as well as my tone of voice.
  4. The “tickling” of those matching patterns produces the effect that you know what I said.

Although this is wondrous in the extreme, we do not need to resort to ghosts and spirits to understand it. No supernaturalist I know claims to hear other people with his spirit. It is a purely physical process.

During that process, symbols are built from other symbols, in layers of increasing sophistication: movement of the stirrup bone, movement of hair cells, electrical pulses along the nerves, and finally patterns of neural activity throughout your brain. None of that was my actual speech or the human emotion behind it; it was all symbolic representation of my speech and emotion.
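This layering can be sketched in code. Here is a toy Python pipeline (the stage names are fanciful stand-ins for the stirrup, the hair cells, and the nerves — no claim that this models real hearing): each stage consumes the previous stage's symbols and emits new ones, and none of the intermediate representations "is" the sentence, yet the sentence survives every re-encoding.

```python
# Toy illustration of symbols built from other symbols. Each stage
# re-encodes the previous stage's output; none of them "is" the original
# sentence. The stages are fanciful stand-ins, not a model of hearing.

def as_vibrations(speech):
    # Stage 1: "stirrup" -- the sentence as a train of numbers.
    return [ord(c) for c in speech]

def as_impulses(vibrations):
    # Stage 2: "nerves" -- re-encoded again; inspecting these pairs, you
    # would never suspect they stand for a question about feelings.
    return [(v % 7, v // 7) for v in vibrations]

def as_recognized_words(impulses, known_words):
    # Stage 3: pattern-matching against symbols stored over the years.
    decoded = "".join(chr(a + 7 * b) for a, b in impulses)
    return [w for w in decoded.split() if w.strip("?").lower() in known_words]

vocabulary = {"how", "are", "you", "feeling", "today"}
layers = as_impulses(as_vibrations("How are you feeling today?"))
print(as_recognized_words(layers, vocabulary))
```

The point of the toy is only this: the meaning rides on the symbols, not on any particular physical form the symbols happen to take at a given stage.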

So, returning to the first question for this post, where does direct reaction leave off and symbol-processing begin? It’s not clear, is it? If we accept that unconscious, machine-like reaction is a simple response to a stimulus, then which links in our chain of events were machine-like and which were something more? To me, they are all purely mechanistic. What makes your hearing my question a conscious act is the sophistication of the machinery. Once more we see that consciousness is a matter of degree.

And what about the second question? Does a conscious being have to understand that it is reflecting on its own symbols? I suggest it does not. Until the last couple of centuries, almost none of the mechanism of hearing was understood. People were unaware of the symbols, much less of their reflection on them. Yet people were conscious. Even now, we cannot be aware of the lower levels of symbol-processing, no matter how much we try. (Can you feel your stirrup bone hammering up and down? Are you aware of individual neural firings?)

In fact, of all the many symbol-layers involved in as simple an act as hearing me ask, “How are you feeling today?” the only one we’re able to access is the top one — the layer of the most sophisticated symbols. No wonder it seems like it couldn’t possibly arise from a machine-like process! We’re unable to perceive the machine even though we know it’s there!

Next time: Does the substrate for the symbols matter?

Artificial Consciousness: Awareness is Thinking About Thinking

In the previous post of this series, I argued that artificial consciousness is a matter of degree. Now we will begin to consider: degree of what?

This is a blog, not a book, so I will be content with the informal definition of consciousness that most people have: awareness of the environment and especially awareness of self. By awareness, we mean something more than the ability to think. What we have in mind is thinking about the fact that we’re thinking. This is the difference between a problem-solving sort of intelligence and consciousness, is it not?

A common chess-playing computer is an excellent problem-solver. As such, it exhibits its own specialized intelligence. However, we don’t consider it to be conscious, even in its specialty. That’s because it doesn’t think about what it’s doing, no matter how well it does it.

Could a machine think about thinking, and thus possess rudimentary self-awareness? I suggest that it could. To see why, let’s take a step back and ask what distinguishes thought from unthinking reflex.

Keeping this informal, rational thought consists of “mulling things over.” In order to gain distance from the raw, sensory input, we construct symbols that we can mentally arrange and rearrange. We are so used to doing this that we don’t even think of it as symbol-manipulation. Yet whenever we use language as an aid to thought, we are using symbols, even if we’re not using language out loud. And whenever we picture something in our minds, that picture is a symbol (it is certainly not the real thing).

We might think of an unthinking reflex as a direct reaction to stimulus without any intermediary symbols, and thought as the introduction of a symbol-processing phase in between stimulus and response. (This simplifies the situation. The distinction between direct reaction and reaction through symbols is not very clear. I’ll develop this theme in the next post.)

Recursive PaintingIf thinking is symbol-processing, then thinking about thinking consists of constructing a second level of symbols about the first level, and manipulating those.

Humans can think about thinking about thinking about thinking, and so on, to an arbitrary depth.

Could a computer program do the same thing? Yes. Easily. In a future post I’ll explain one way to do it. But first we must ask whether an intelligence must understand this reflection on its own symbols in order to be conscious. That, too, will be considered in the next post.
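As a teaser of how simply the stacking can be expressed, here is a toy Python sketch (purely illustrative — the data shapes are invented for the example): a symbol about a symbol is just another symbol, so the levels can be piled up to any depth.

```python
# A toy sketch of stacking symbol-levels: level 0 is a "thought" about
# the world; each higher level is a symbol about the level below it.
# Purely illustrative -- no claim that this is how minds do it.

def reflect(thought):
    # Build a higher-level symbol: a symbol about a symbol.
    return {"about": thought, "kind": "reflection"}

def depth(symbol):
    # How many levels of "thinking about thinking" are stacked up?
    return 1 + depth(symbol["about"]) if isinstance(symbol, dict) else 0

world_thought = "the queen is under attack"   # level 0: about the world
meta = reflect(world_thought)                 # thinking about that thought
meta_meta = reflect(meta)                     # thinking about the thinking
print(depth(meta_meta))  # 2
```

Nothing stops `reflect` from being applied again and again — the arbitrary depth humans manage comes for free in the representation.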

Artificial Consciousness: Consciousness is a Matter of Degree

This is the second post in a series on artificial consciousness. For an introduction and a road map, see Artificial Consciousness: Introduction.

Let’s consider whatever informal definition of consciousness you happen to have.  Chances are, it centers on the concept of awareness of environment and of self. Is awareness a yes/no proposition, or is it a matter of degree?

Nap Time

If you have ever awakened from a nap, or ever had too much to drink, you know that awareness of one’s environment is a matter of degree. Case closed.

Awareness of self is trickier. At first blush, it seems that either one is aware of oneself or one isn’t. But consider the tragic situation of dementia. I knew someone (now deceased) who was sliding deeper into Alzheimer’s Disease. He was largely unaware of his condition. In one of his lucid moments, though, his wife said it was time for him to move out of their home so he could have better full-time care. “Am I as bad as that?” he said, quite upset. He had had only a vague awareness of his condition.

Human Blastocyst

Consider a developing human, from the moment of conception onward. Surely he or she is not self-aware when he or she is a single cell. That is not to say that a single cell is unresponsive to its environment, but we’re talking about self-awareness.  Does self-awareness suddenly pop in at x weeks after conception? Isn’t it more likely that it develops gradually, just as the brain itself does?

In short, it seems obvious that awareness of self and awareness of environment can scale gradually down to zero.  I’ll venture that the same is true of the other aspects of your own definition of consciousness. If you disagree, please leave a comment on this post, and let’s talk.

Next time, we will think about which attributes of consciousness are the essential ones.