AI : Intelligence and Beyond
: : Problems of AI : :
It is essentially a scientific question to understand the nature of the human mind. Neuroscience is a veritable mine of information for understanding the complexity, structure, and working of the human brain. Recent advances in the use of nuclear magnetic resonance scanners have enabled researchers to study small parts of the brain while subjects solve problems. However, we must not underestimate the contribution of some 200 million years of evolution to the development of the brain. It is highly probable that evolution over such a long period has produced a brain so complex that understanding its structure may itself take a very long time.
However, to understand the human mind it will not be sufficient to know the complete map of the brain's neurons and their connections, just as having the full circuit diagram of a microcomputer will not help you understand much about how it runs an application program. Cognitive science has also made progress in building computational models of human tasks, and in time these models will cover a wider range of human experience. Eventually, such models may relate human behavior back to our experience and to the appropriate circuits in the brain. Clearly, understanding the mind will also require progress in philosophy and other fields, but there too much has been achieved in recent years, along with growing interest in the philosophy of mind. Once we understand the nature of the human mind, it might become possible to build artificial minds similar or even identical to human minds.
The difficulty of "scaling up" AI's so far relatively modest achievements cannot be overstated. Five decades of research in symbolic AI have failed to produce any firm evidence that a symbol system can manifest human levels of general intelligence. Critics of nouvelle AI regard as mystical the view that high-level behaviors involving language understanding, planning, and reasoning will somehow "emerge" from the interaction of basic behaviors like obstacle avoidance, gaze control, and object manipulation. Connectionists have been unable to construct working models of the nervous systems of even the simplest living things. Caenorhabditis elegans, a much-studied worm, has approximately 300 neurons, whose pattern of interconnections is completely known. Yet connectionist models have failed to mimic even the worm's simple nervous system. The "neurons" of connectionist theory are gross oversimplifications of the real thing.
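To see why the connectionist "neuron" is called a gross oversimplification, it helps to look at what one actually is: typically just a weighted sum of inputs passed through a nonlinearity. A minimal sketch in Python (the function name and values are illustrative, not from any particular library):

```python
import math

def neuron(inputs, weights, bias):
    """A single connectionist 'neuron': a weighted sum of its
    inputs passed through a sigmoid activation function."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    # The sigmoid squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-total))

# Example: two inputs with equal weights and no bias.
output = neuron([1.0, 0.0], [0.5, 0.5], 0.0)
```

A real biological neuron, by contrast, involves dendritic geometry, spike timing, neurotransmitter chemistry, and adaptation effects that this one-line arithmetic model simply ignores.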
The risks in developing artificial intelligence include the risk of failing to give it the ultimate goal of philanthropy. One way this could happen is that the creators of the artificial intelligence decide to build it to be partial towards certain people rather than humanity in general. Another is that a well-meaning team of programmers makes a serious mistake in designing its goal system. In a nutshell, the result could be an artificial intelligence that realizes a state of affairs we might now judge as desirable but which in fact turns out, in the long run, to be a false utopia, one in which things essential to human flourishing have been irreversibly lost.
One consideration that should be taken into account when deciding whether to promote the development of artificial intelligence is that if artificial intelligence is feasible, it will be developed sooner or later. Therefore, we will probably one day have to take the gamble of artificial intelligence no matter what. But once in existence, artificial intelligence could help us reduce or eliminate other existential risks, such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism, a serious threat to the long-term survival of intelligent life on earth.