Some problems are so complex that you have to be highly intelligent and well informed just to be undecided about them. - Laurence J. Peter

Thursday, December 04, 2008

Thinking is harder than you know

Followup to How Difficult Is Artificial Intelligence?

(This is another heavily Yudkowsky-influenced post.)

We don't notice the massive amount of processing our brains are doing when we think. This led early AI researchers to be too optimistic. How hard could thinking be? I do it all the time. But introspecting the actual algorithms our brains are running is in many cases impossible. We don't have access to that information.

One example (of many!) of the work our brains do without our awareness is determining the direction of an incoming sound. When you hear a sound, you sense the approximate direction the sound came from. How do you do that?

You might guess that it's because the sound is louder in one ear and quieter in the other. You're right, but it's not the whole story. From Natural Biodynamics by Vladimir G. Ivancevic and Tijana T. Ivancevic:
The auditory nerve carries the signal into the brainstem and synapses in the cochlear nucleus... The ventral cochlear nucleus cells then project to a collection of nuclei in the medulla called the superior olive. In the superior olive, the minute differences in the timing and loudness of the sound in each ear are compared, and from this you can determine the direction the sound came from. (Emphasis added.)

It is not obvious to me, just from considering what it feels like to hear sounds, that minute differences in the sound's arrival time at my left and right ears are important in determining the direction it came from. (A toy version of that timing computation appears after the list below.) The lessons I draw from this are:
  • Introspecting what it feels like to think is probably not a good way to approach building an artificial mind.
  • Inspecting brain processing, as Lloyd Watts did to uncover this mechanism, is an approach that yields results.
  • The brain is not an indecipherable black box; there's no compelling reason to think we can't figure out how thinking actually works by examining it.
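
To make the timing story concrete, here's a minimal Python sketch (my own illustration, not anything from the sources quoted above) of the kind of computation the superior olive performs: cross-correlate the two ear signals to find the interaural time difference, then map that delay to an angle. The ear spacing, the sample rate, and the simple sine-based delay model are all assumptions chosen for illustration; real heads diffract sound, and the brain finds the lag with coincidence-detecting neurons, not an argmax.

```python
import numpy as np

# Illustrative assumptions, not measured values.
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature
EAR_SPACING = 0.18      # m between the ears, a rough adult figure

def estimate_itd(left, right, sample_rate):
    """Estimate the interaural time difference (seconds) as the lag at
    which the left-ear signal best matches the right-ear signal.
    Positive means the sound reached the right ear first."""
    corr = np.correlate(left, right, mode="full")
    lag_samples = np.argmax(corr) - (len(right) - 1)
    return lag_samples / sample_rate

def itd_to_azimuth(itd):
    """Map an ITD to an azimuth using the crude straight-line model
    ITD = (EAR_SPACING / SPEED_OF_SOUND) * sin(azimuth).
    Returns radians; positive means the source is to the right."""
    s = np.clip(itd * SPEED_OF_SOUND / EAR_SPACING, -1.0, 1.0)
    return np.arcsin(s)

if __name__ == "__main__":
    rate = 96_000  # Hz; ITDs are only hundreds of microseconds, so sample fast
    rng = np.random.default_rng(0)
    burst = rng.standard_normal(4096)   # a noise burst stands in for the sound
    delay = 30                          # samples by which the left ear lags
    left = np.concatenate([np.zeros(delay), burst])
    right = np.concatenate([burst, np.zeros(delay)])
    itd = estimate_itd(left, right, rate)
    print(f"ITD: {itd * 1e6:.0f} microseconds")
    print(f"azimuth: {np.degrees(itd_to_azimuth(itd)):.1f} degrees to the right")
```

Run it and the recovered ITD matches the injected 30-sample offset (about 0.3 milliseconds), which the toy geometry maps to a source roughly 37 degrees to the right. The striking thing is how tiny the usable signal is: sub-millisecond timing differences, which is exactly the sort of quantity introspection has no access to.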

Inspecting the brain is one way we might discover what human-level intelligence actually involves, but not the only way. Yudkowsky cautions against relying too much on emulating the brain when designing an artificial one:
Maybe neurons are just what brains happen to be made out of, because the blind idiot god is too stupid to sit down and invent transistors. All the modules get made out of neurons because that's all there is, even if the cognitive work would be much better-suited to a 2GHz CPU.
"Early attempts to make flying machines often did things like attaching beak onto the front, or trying to make a wing which would flap like a bird's wing. (This extraordinary persistent idea is found in Leonardo's notebooks and in a textbook on airplane design published in 1911.) It is easy for us to smile at such naivete, but one should realize that it made good sense at the time. What birds did was incredible, and nobody really knew how they did it. It always seemed to involve feathers and flapping. Maybe the beak was critical for stability..." - Hayes and Ford, "Turing Test Considered Harmful"

So... why didn't the flapping-wing designs work? Birds flap wings and they fly. The flying machine flaps its wings. Why, oh why, doesn't it fly?

Le Bris' flying machine, photographed in 1868

Eventually someone stopped copying beaks and feathers and flapping, focused on understanding the actual problem, and invented fixed-wing airplanes. That's an option for AI too. We might go too far in trying to copy nature's solution as exemplified in the human brain. Helicopters don't have wings (at least, not without a very forgiving interpretation of "wings") but they still fly. Maybe we should not be terribly surprised if an AI doesn't need to have anything recognizable as a neural network, or a soul, or quantum interference, or whatever else we might like to think a mind absolutely has to have.

Added: Unsurprisingly, Yudkowsky made my first point more eloquently:
After spending a decade or two living inside a mind, you might think you knew a bit about how minds work, right? That's what quite a few AGI wannabes (people who think they've got what it takes to program an Artificial General Intelligence) seem to have concluded. This, unfortunately, is wrong.

Artificial Intelligence is fundamentally about reducing the mental to the non-mental.

You might want to contemplate that sentence for a while. It's important.

Living inside a human mind doesn't teach you the art of reductionism, because nearly all of the work is carried out beneath your sight, by the opaque black boxes of the brain. So far beneath your sight that there is no introspective sense that the black box is there - no internal sensory event marking that the work has been delegated.

If you're interested in this kind of thing, you need to be reading Yudkowsky, not me, of course.

If you're interested in an overview of what kinds of processing occur in different parts of the brain, check out this interactive 3D graphic of the brain by Open Colleges.
