In the previous post of this series, I argued that artificial consciousness is a matter of degree. Now we will begin to consider: degree of what?
This is a blog, not a book, so I will be content with the informal definition of consciousness that most people have: awareness of the environment and especially awareness of self. By awareness, we mean something more than the ability to think. What we have in mind is thinking about the fact that we’re thinking. This is the difference between a problem-solving sort of intelligence and consciousness, is it not?
A common chess-playing computer is an excellent problem-solver. As such, it exhibits its own specialized intelligence. However, we don’t consider it to be conscious, even in its specialty. That’s because it doesn’t think about what it’s doing, no matter how well it does it.
Could a machine think about thinking, and thus possess rudimentary self-awareness? I suggest that it could. To see why, let’s take a step back and ask what distinguishes thought from unthinking reflex.
Keeping this informal, rational thought consists of “mulling things over.” In order to gain distance from the raw sensory input, we construct symbols that we can mentally arrange and rearrange. We are so used to doing this that we don’t even think of it as symbol-manipulation. Yet whenever we use language as an aid to thought, we are using symbols, even if we’re not using language out loud. And whenever we picture something in our minds, that picture is a symbol (it is certainly not the real thing).
We might think of an unthinking reflex as a direct reaction to stimulus without any intermediary symbols, and thought as the introduction of a symbol-processing phase in between stimulus and response. (This simplifies the situation. The distinction between direct reaction and reaction through symbols is not very clear. I’ll develop this theme in the next post.)
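To make the contrast concrete, here is a toy sketch in Python. Everything in it is invented for illustration (the function names, the dictionary-based “symbols,” the canned responses); it is not a cognitive model, just the shape of the distinction: a reflex maps stimulus straight to response, while thought inserts a symbol-manipulation phase in between.

```python
def reflex(stimulus: str) -> str:
    """Direct stimulus -> response, with no intermediate representation."""
    # A hard-wired lookup: nothing stands in for the stimulus.
    return {"hot surface": "withdraw hand"}.get(stimulus, "no reaction")


def think(stimulus: str) -> str:
    """Stimulus -> symbols -> manipulation of symbols -> response."""
    # 1. Construct a symbol that stands in for the raw input.
    percept = {"kind": "percept", "content": stimulus}
    # 2. "Mull it over": arrange and rearrange symbols before acting.
    candidates = [
        {"kind": "thought", "about": percept, "plan": plan}
        for plan in ("ignore it", "investigate", "avoid it")
    ]
    chosen = candidates[-1]  # some selection among the symbols
    # 3. Only then respond.
    return chosen["plan"]


print(reflex("hot surface"))  # withdraw hand
print(think("hot surface"))   # avoid it
```

Both functions end up producing a response; the difference is that the second one has an internal layer of symbols it could, in principle, also inspect.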
If thinking is symbol-processing, then thinking about thinking consists of constructing a second level of symbols about the first level, and manipulating those.
Humans can think about thinking about thinking about thinking, and so on, to an arbitrary depth.
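A program can stack symbols about symbols just as easily. The sketch below is again purely illustrative (the dictionary representation and the `meta`/`depth` names are my own), but it shows that nesting to arbitrary depth is trivial: each level is just a symbol whose content is the level below it.

```python
def meta(symbol: dict, levels: int) -> dict:
    """Wrap a symbol in `levels` layers of thought-about-thought."""
    for _ in range(levels):
        symbol = {"kind": "thought", "about": symbol}
    return symbol


def depth(symbol: dict) -> int:
    """Count how many levels of reflection sit above the base percept."""
    if symbol["kind"] == "percept":
        return 0
    return 1 + depth(symbol["about"])


percept = {"kind": "percept", "content": "a red apple"}
nested = meta(percept, 3)  # thinking about thinking about thinking about it
print(depth(nested))       # 3
```

Whether building such a tower amounts to self-awareness is, of course, exactly the question at issue; the code only shows that the regress itself costs a machine nothing.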
Could a computer program do the same thing? Yes. Easily. In a future post I’ll explain one way to do it. But first we must ask whether an intelligence must understand this reflection on its own symbols in order to be conscious. That, too, will be considered in the next post.