The military and the science of computing have always been closely tied - in fact, early development of computing was almost exclusively driven by military purposes. The very first operational use of a computer was the gun director used in the Second World War to help ground gunners predict the path of a plane given its radar data. Famous names in AI, such as Alan Turing, were heavily involved with the military. Turing, recognized as one of the founders of both contemporary computer science and artificial intelligence, was instrumental in breaking the German Enigma code through the use of early computing machinery.
As computing power increased and practical programming languages were developed, more complicated algorithms and simulations could be realized. For instance, computers were soon used to simulate nuclear escalations and wars, or to model how arms races would be affected by various parameters. The simulations grew powerful enough that the results of many of these 'wargames' became classified material, and the 'holes' they exposed were integrated into national policy.
Artificial intelligence applications began to be extensively researched in the West after the Japanese announced in 1981 that they were going to build a Fifth Generation computer, capable of logical deduction and other such feats.
Inevitably, the Fifth Generation project failed, due to the inherent problems AI faces. Nevertheless, research continued around the globe to integrate more 'intelligent' computer systems into the battlefield. Enthusiastic generals foresaw battle by hordes of entirely autonomous buggies and aerial vehicles: robots with multiple goals, whose missions might last for months as they drove deep into enemy territory. The problems in developing such systems are obvious - the lack of functional machine vision has led to difficulties with obstacle avoidance, friend/foe recognition, target acquisition, and much more. Problems also arise in getting a robot to adapt to its surroundings, the terrain, and other environmental factors.
Nowadays, developers seem to be concentrating on smaller goals, such as voice-recognition systems, expert systems, and advisory systems. The main military value of such projects is to reduce the workload on a pilot. Modern pilots work in incredibly complex electronic environments, receiving information not only from their own radar but from many others (the principle behind J-STARS). Not only is the information load high, but the multi-role aircraft of the 21st century have highly complex avionics, navigation, communications, and weapon systems, all of which must be organized in a highly accessible way. Through voice recognition, systems could be checked and adjusted without the pilot looking down into the cockpit. Expert and advisory systems could predict what the pilot would want in a given scenario and automatically reduce the complexity of a given task.
Aside from research in this area, various paradigms in AI have been successfully applied in the military field. For example, an EA (evolutionary algorithm) has been used to evolve algorithms that detect targets in radar/FLIR data, and neural networks have been trained to differentiate between mines and rocks given submarine sonar data. I will look into these two examples in depth below.
Genetic Programming

Genetic programming is an excellent way of evolving algorithms that map data to a given result when no set formula is known. Mathematicians or programmers can usually find an algorithm by hand for a problem with 5 or so variables, but when the problem grows to 10, 20, or 50 variables it becomes close to impossible to solve directly. Briefly, a GP-powered program works by randomly generating a series of expression trees that represent various formulas. These trees are then tested against the data; poor ones are discarded, good ones are kept and bred. Mutation, crossover, and all the other elements of genetic algorithms are used to breed the highest-fitness tree for the given problem. At best, this will perfectly map the variables to the answer; otherwise it will generate an answer very close to the one wanted. (For a more in-depth look at GP, read the case study.)
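To make the idea concrete, here is a minimal, mutation-only GP sketch in Python. Crossover is omitted for brevity, and the tuple tree representation, population sizes, and toy target formula f(x) = x*x + x are all illustrative choices of mine, not taken from any military system:

```python
import random

random.seed(0)

# Expression trees are nested tuples ('op', left, right); leaves are the
# variable 'x' or small integer constants. Target formula: f(x) = x*x + x.
OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.randint(-2, 2)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, cases):
    # lower is better: sum of squared errors over the training cases
    return sum((evaluate(tree, x) - y) ** 2 for x, y in cases)

def mutate(tree):
    if random.random() < 0.2:
        return random_tree(2)          # replace this subtree wholesale
    if not isinstance(tree, tuple):
        return tree
    return (tree[0], mutate(tree[1]), mutate(tree[2]))

def evolve(cases, pop_size=200, generations=40):
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, cases))
        survivors = pop[:pop_size // 4]          # keep the fittest quarter
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=lambda t: fitness(t, cases))

cases = [(x, x * x + x) for x in range(-5, 6)]   # data, but no known formula
best = evolve(cases)
```

The key point is that nothing in `evolve` knows what the formula looks like; only the fitness score guides the search, which is what makes the technique attractive when dozens of sensor variables are involved.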
A notable example of such a program is e, an evolutionary algorithm designed by Steve Smith at SDI. e has been used by SDI to research algorithms for the radars of modern helicopters such as the AH-64D Longbow Apache and RAH-66 Comanche. e is presented with a mass of numbers generated by a radar and perhaps a low-resolution television camera or FLIR (forward-looking infra-red) device. The program then attempts to find (through various evolutionary means) an algorithm to determine the type of vehicle, or to differentiate between an actual target and mere "noisy" data.
Basically, the EA is fed a list of 42 different variables collected from the two sensors, plus a truth value specifying whether the test data was clutter or a target. The EA then generates a series of expression trees (much more complicated than those normally used in GP programs). When a new best program is discovered, the EA uses a hill-climbing technique to get the best possible result out of the new tree; the tree is then subjected to a heuristic search for further optimization.
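The hill-climbing refinement step can be sketched as follows. The two-constant linear "detector" and the synthetic radar/FLIR data here are made-up stand-ins for e's real expression trees; the point is only the mechanism of nudging constants and keeping improvements:

```python
import random

random.seed(2)

# Illustrative detector: score = a*radar + b*flir. The data below was
# generated with weights 2.0 and 0.5, which the climber must rediscover.
def error(consts, data):
    a, b = consts
    return sum((a * r + b * f - y) ** 2 for r, f, y in data)

def hill_climb(consts, data, step=0.05, iterations=2000):
    best = list(consts)
    best_err = error(best, data)
    for _ in range(iterations):
        trial = list(best)
        i = random.randrange(len(trial))
        trial[i] += random.choice((-step, step))   # nudge one constant
        trial_err = error(trial, data)
        if trial_err < best_err:                   # keep only improvements
            best, best_err = trial, trial_err
    return best, best_err

data = [(r, f, 2.0 * r + 0.5 * f) for r, f in [(1, 2), (3, 1), (2, 5), (4, 4)]]
tuned, err = hill_climb([0.0, 0.0], data)
```

Because only improving moves are accepted, the error can never rise above its starting value; the subsequent heuristic search mentioned above would then restructure the tree itself rather than just its constants.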
Once the best possible tree is found, e will output the program as pseudocode, C, Fortran, or BASIC.
Once the EA had been trained on the training data, it was put to work on some test data. The results were quite impressive:
While the algorithms performed well on the training data, performance inevitably dropped somewhat when applied to the unseen test data. Nevertheless, the fused detection algorithm (using both radar and FLIR information) still achieved a respectable error rate.
An additional advantage of this technique is that the EA itself could be programmed into the weapon system (not just the algorithm it outputs), so that the system could dynamically adapt to the terrain and other mission-specific parameters.
Neural Networks

Neural networks (NNs) are another excellent technique for mapping numbers to results. Unlike the EA, though, a network will only output results from the set it was trained to produce. An NN is normally pre-trained with a set of input vectors and a 'teacher' that tells it what the output should be for each input. The network can then adapt to a series of patterns, so that when fed information after training, it will output the result whose training input most closely resembles the input being tested.
This was the approach some scientists took to identifying sonar sounds. Their goal was to train a network to differentiate between rocks and mines - a notoriously difficult task for human sonar operators.
The network architecture was quite simple: 60 input units, one hidden layer with 1-24 units, and two output units. The output would be <0,1> for a rock and <1,0> for a mine. The large number of input units was needed to incorporate 60 normalized energy levels of the frequency bands in the sonar echo. In other words, a detected sonar echo was fed into a frequency analyzer, which broke the echo down into 60 frequency bands; the energy level of each band was measured and converted into a number between 0 and 1.
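A hedged sketch of that preprocessing step, under invented conditions: the toy signal, the 600-sample window, and the naive DFT below are my illustrative choices (a real system would use an FFT and calibrated band edges), but the shape of the output - 60 band energies normalized into [0, 1] - matches the description above:

```python
import cmath
import math

def band_energies(signal, n_bands=60):
    n = len(signal)
    # naive discrete Fourier transform over the positive-frequency bins
    spectrum = [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n)))
                for k in range(n // 2)]
    # group the spectrum bins into n_bands bands and sum the energy in each
    per_band = len(spectrum) // n_bands
    energies = [sum(e * e for e in spectrum[b * per_band:(b + 1) * per_band])
                for b in range(n_bands)]
    peak = max(energies) or 1.0
    return [e / peak for e in energies]   # normalize into [0, 1]

# a toy "echo": a strong low tone plus a weaker high tone
signal = [math.sin(2 * math.pi * 5 * t / 600) +
          0.5 * math.sin(2 * math.pi * 50 * t / 600)
          for t in range(600)]
bands = band_energies(signal)
```

The resulting 60-element vector is exactly the kind of input the 60 input units of the network would receive.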
A simple training method (gradient descent) was used: the network was fed examples of mine echoes and rock echoes, and after it made each classification it was told whether it was correct or not. Soon, the network could differentiate as well as or better than an equivalent human sonar operator.
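Assuming the published 60-input, two-output layout, a from-scratch sketch of such a network trained by gradient descent might look like this. The 8-unit hidden layer, learning rate, and synthetic "echoes" (rocks loud in the low bands, mines in the high bands) are illustrative assumptions of mine, not the original experiment's data:

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# 60 band energies in, <mine, rock> scores out; 8 hidden units is a guess
N_IN, N_HID, N_OUT = 60, 8, 2

w1 = [[random.uniform(-0.1, 0.1) for _ in range(N_IN)] for _ in range(N_HID)]
w2 = [[random.uniform(-0.5, 0.5) for _ in range(N_HID)] for _ in range(N_OUT)]

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    o = [sigmoid(sum(w * hi for w, hi in zip(row, h))) for row in w2]
    return h, o

def train(samples, epochs=100, lr=0.5):
    for _ in range(epochs):
        for x, target in samples:
            h, o = forward(x)
            # output-layer error terms (sigmoid derivative is o*(1-o))
            do = [(t - oi) * oi * (1 - oi) for t, oi in zip(target, o)]
            # back-propagate the error to the hidden layer
            dh = [hi * (1 - hi) * sum(do[k] * w2[k][j] for k in range(N_OUT))
                  for j, hi in enumerate(h)]
            for k in range(N_OUT):
                for j in range(N_HID):
                    w2[k][j] += lr * do[k] * h[j]
            for j in range(N_HID):
                for i in range(N_IN):
                    w1[j][i] += lr * dh[j] * x[i]

# synthetic "echoes": rocks concentrate energy in the low bands, mines in
# the high bands -- a crude stand-in for real sonar spectra
def echo(kind):
    lo, hi = (0.8, 0.2) if kind == 'rock' else (0.2, 0.8)
    return [max(0.0, min(1.0, (lo if i < 30 else hi) + random.gauss(0, 0.1)))
            for i in range(N_IN)]

# targets follow the text: <1,0> for a mine, <0,1> for a rock
samples = ([(echo('rock'), [0.0, 1.0]) for _ in range(20)] +
           [(echo('mine'), [1.0, 0.0]) for _ in range(20)])
random.shuffle(samples)
train(samples)

_, mine_out = forward(echo('mine'))
_, rock_out = forward(echo('rock'))
```

The "teacher" here is simply the target vector paired with each echo; the weight updates nudge the network toward it after every example, which is the supervised scheme the paragraph above describes.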
The network also beat standard data-classification techniques. Data-classification programs could successfully detect mines 50% of the time by using parameters such as the frequency bandwidth, onset time, and rate of decay of the signals. Unfortunately, the remaining 50% of sonar echoes do not follow the rather strict heuristics those classifiers rely on. The network's power lay in its ability to pick up on the more subtle traits of the signal and use them to discriminate.
Morality: A Quick Thought
Autonomous weapons are a revolution in warfare in that they will be the first machines given responsibility for killing human beings without human direction or supervision. Put more starkly, these weapons will be the first killing machines that are actually predatory: designed to hunt human beings and destroy them.
The applications of AI in the military are wide and varied, yet due to the robustness, reliability, and durability required of most military programs and hardware, AI is not yet an integral part of the battlefield. As techniques are refined and improved, more and more AI applications will filter into the war scene - after all, silicon is cheaper than a human life.