Some problems are so complex that you have to be highly intelligent and well informed just to be undecided about them. - Laurence J. Peter

Saturday, November 08, 2008

How difficult is Artificial Intelligence?

(This post owes an enormous amount to the ideas of Eliezer Yudkowsky, an AI researcher who writes at Overcoming Bias. Reading this post might lead a non-Yudkowsky-reading person to a new perspective on AI, without needing to read the hundreds of thousands of words Yudkowsky has written on the subject. If it sounds profound, it's because of him. If it sounds dumb, it's my fault for not explaining it properly.)

How difficult is it to build a human-equivalent (or better) AI? It's really difficult! Famously difficult! I've tried it myself.

But there's nothing inherently mysterious about intelligence, or consciousness, or sentience. There's nothing inherently mysterious about anything. Some things you know, some things you don't. Ask Eliezer Yudkowsky:
But ignorance exists in the map, not in the territory. If I am ignorant about a phenomenon, that is a fact about my own state of mind, not a fact about the phenomenon itself. A phenomenon can seem mysterious to some particular person. There are no phenomena which are mysterious of themselves. To worship a phenomenon because it seems so wonderfully mysterious, is to worship your own ignorance.

Some people want to put the engineering project of AI on a pedestal and set it apart from all previous inventions. Surely consciousness involves some emergent complex quantum phenomenon! No-one has ever built strong artificial intelligence! No-one, that is, for a narrow definition of "no-one".

No human has, anyway. Evolution, the intelligence that Yudkowsky likes to call the blind idiot god, did build intelligence, little by little, over billions of years. How did evolution deal with the problem, for example, of squirrels needing to find nuts?
Gradually DNA acquired the ability to build protein computers, brains, that could learn small modular facets of reality like the location of nut trees. To call these brains "limited" implies that a speed limit was tacked onto a general learning device, which isn't what happened. It's just that the incremental successes of particular mutations tended to build out into domain-specific nut-tree-mapping programs. (If you know how to program, you can verify for yourself that it's easier to build a nut-tree-mapper than an Artificial General Intelligence.)
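To make the contrast concrete, here is a minimal sketch (everything in it is hypothetical and mine, not from the post or from Yudkowsky) of a domain-specific nut-tree-mapper in Python - a learner that can acquire exactly one narrow facet of reality and nothing else:

    # Illustrative sketch only: a "protein computer" that can learn nut-tree
    # locations and recall the nearest one, and can do nothing outside that domain.
    from math import dist  # Euclidean distance (Python 3.8+)

    class NutTreeMapper:
        def __init__(self):
            self.trees = []  # learned (x, y) locations of nut trees

        def learn(self, location):
            """Record a newly discovered nut tree."""
            self.trees.append(location)

        def nearest_tree(self, position):
            """Return the closest known nut tree, or None if nothing is learned yet."""
            if not self.trees:
                return None
            return min(self.trees, key=lambda tree: dist(tree, position))

    squirrel = NutTreeMapper()
    squirrel.learn((3, 4))
    squirrel.learn((10, 1))
    print(squirrel.nearest_tree((0, 0)))  # -> (3, 4)

A few dozen lines and the nut-tree problem is solved; nothing in this program generalizes to dam-building or asteroid-deflection, which is the point.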

You can generate computer programs using evolutionary algorithms, and you can get some interesting and useful results. So maybe we should try to build intelligence with evolutionary algorithms? But evolution is horribly inefficient - it took billions of years and a planet for evolution to produce the first intelligence (though, to be fair, building intelligence wasn't evolution's goal per se). Human intelligence is much more efficient - that is, smarter - than the intelligence of evolution that created it.
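For concreteness, here is a toy evolutionary algorithm in Python (all names, parameters and the fitness function are illustrative assumptions of mine, not anything from the post): it evolves a bit string toward an all-ones target by mutation and selection, and even on this trivial problem it burns through a lot of blind trial and error - which is the inefficiency point.

    # Toy evolutionary algorithm, for illustration only: evolve a 64-bit genome
    # toward all ones using mutation plus truncation selection with elitism.
    import random

    GENOME_LENGTH = 64
    POPULATION = 100
    MUTATION_RATE = 0.01  # per-bit chance of flipping

    def fitness(genome):
        return sum(genome)  # the "environment" rewards genomes with more 1-bits

    def mutate(genome):
        return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(POPULATION)]

    generation = 0
    while max(fitness(g) for g in population) < GENOME_LENGTH:
        # keep the fitter half unchanged, refill with mutated copies of survivors
        population.sort(key=fitness, reverse=True)
        survivors = population[:POPULATION // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
        generation += 1

    print(f"solved in {generation} generations")

A human programmer would write the all-ones string directly in one line; the blind search needs hundreds of generations and thousands of evaluations to stumble onto it.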

Some readers might consider my labeling of evolution as an intelligence to be a dubious manoeuvre. We need a definition of intelligence. Something is intelligent to the extent it can perform efficient cross-domain optimization:
A bee builds hives, and a beaver builds dams; but a bee doesn't build dams and a beaver doesn't build hives. A human, watching, thinks, "Oh, I see how to do it" and goes on to build a dam using a honeycomb structure for extra strength.
...
A large asteroid, falling on Earth, would make an impressive bang. But if we spot the asteroid, we can try to deflect it through any number of methods. With enough lead time, a can of black paint will do as well as a nuclear weapon. And the asteroid itself won't oppose us on our own level - won't try to think of a counterplan. It won't send out interceptors to block the nuclear weapon. It won't try to paint the opposite side of itself with more black paint, to keep its current trajectory. And if we stop that asteroid, the asteroid belt won't send another planet-killer in its place.

We might have to do some work to steer the future out of the unpleasant region it will go to if we do nothing, but the asteroid itself isn't steering the future in any meaningful sense. It's as simple as water flowing downhill, and if we nudge the asteroid off the path, it won't nudge itself back.

The tiger isn't quite like this. If you try to run, it will follow you. If you dodge, it will follow you. If you try to hide, it will spot you. If you climb a tree, it will wait beneath.

But if you come back with an armored tank - or maybe just a hunk of poisoned meat - the tiger is out of luck. You threw something at it that wasn't in the domain it was designed to learn about. The tiger can't do cross-domain optimization, so all you need to do is give it a little cross-domain nudge and it will spin off its course like a painted asteroid.

Evolution performs optimizations across numerous domains - it can arrange atoms into flying machines, snake venom, opposable thumbs and even protein computers that are pretty damn smart themselves. But considering the resources evolution had available, evolution could hardly be called efficient.

How does human ingenuity compare to evolution?
Yes, some evolutionary handiwork is impressive even by comparison to the best technology of Homo sapiens. But our Cambrian explosion only started, we only really began accumulating knowledge, around... what, four hundred years ago? In some ways, biology still excels over the best human technology: we can't build a self-replicating system the size of a butterfly. In other ways, human technology leaves biology in the dust. We got wheels, we got steel, we got guns, we got knives, we got pointy sticks; we got rockets, we got transistors, we got nuclear power plants. With every passing decade, that balance tips further.


So what does this say about the prospect of (re)inventing AI sometime soon? Nothing definitive, but it gives me hope. Remember, the problem of building intelligence has already been solved - and the one who solved it was a moron. Unfortunately he wrote in the most awful, inefficient, buggy, defect-prone spaghetti code a programmer could hope never to see, so bad that it's not clear whether it's best to try to understand what he wrote, or just rewrite it from scratch.

1 Comment:

Blogger Sebastian Bitticks said...

On the matter of inherent mystery and ignorance, and to return to a discussion we had in good, old-fashioned analogue: there is territory that is a darker shade of unknown than the rest. Due to the fundamental limitations of our minds and tools of perception, some phenomena are more visible than others. My assertion is that our capacity to understand is not, currently, boundless, and that for phenomena that fall into the blind spots of human experience, whose traces perhaps tantalize from the periphery but cannot be turned to directly and faced, there is mystery.

Does the mystery come from us rather than from an inherent quality of the thing itself? My friend, EVERYTHING comes from us - the concept of inherence is a fallacy.

This is all just to say that not all phenomena are equally capable of being understood, and I think coming to an understanding of what intelligence is and how it works might be of a kind that our human minds are unable to fully grasp.

Also, in reference to cross-domain optimization, I'm still unconvinced that intelligence, at least the intelligence of humankind, is just a higher grade of problem solving. That's certainly a major part, but consider: such a framework can explain HOW we as intelligent beings do what we do, but very often not WHY. And, it seems, motivation is the heart of the AI issue.

11:01 PM

 
