Tuesday, March 28, 2006

AI: The Lowdown

Via Futurismic:
...The development of game-playing programs reached a high point in 1997 with the defeat by a computer system from IBM called Deep Blue of chess grand master Gary Kasparov. Eliza was developed and refined, and in 1995 Richard Wallace developed Alice, a program that is now the world's most successful chatbot. Indeed such AI programs have reached a level of sophistication that allows them to be routinely used in interactive Web sites and automated telephone services by many companies, including Coca Cola and Burger King. Meanwhile mobile robots directly descended from Shakey have successfully explored the surface of Mars.
Read the first part here, at ZDNET UK.

While this isn't news per se, it is a very good rundown and summary of the history of A.I. One thing it leaves unmentioned is what some of the algorithms developed along the way actually do:

...This was of course the long-standing idea amongst AI researchers that there is a fundamental set of algorithms that, if supplied with enough information, will eventually produce an intelligent system.
In particular, neural networks were thought to hold the keys to the kingdom, so to speak; built from artificial 'neurons' (called perceptrons), they mimicked human intelligence most closely in concept. The issue was, of course, that a single layer of perceptrons can only solve linearly separable problems. Nowadays multi-layered neural nets are used; the link above shows an algorithm involving a hidden layer, which is capable of non-linear computation.

The simplest example of a basic perceptron's limits is XOR: with inputs 1,1 you get 0; with 0,0 you get 0; but with 1,0 or 0,1 you get 1. The basic learning algorithm produces a separating line, and no single line can separate these outputs. (Work it out if you wish.)
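You can check this in code. Here's a minimal sketch (my own illustration, not from the article) of the classic perceptron learning rule: it learns AND, which is linearly separable, but never manages XOR no matter how long you train it.

```python
# Minimal perceptron sketch: a step-activation unit trained with the
# classic perceptron learning rule. It learns AND (linearly separable)
# but can never learn XOR, since no single line separates XOR's outputs.

def train_perceptron(samples, epochs=100, lr=0.1):
    """Return weights (w1, w2, bias) after training on (x1, x2, target) samples."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for x1, x2, target in samples:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - out              # perceptron update rule
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def predict(weights, x1, x2):
    w1, w2, b = weights
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

AND = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
XOR = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

and_correct = all(predict(train_perceptron(AND), x1, x2) == t for x1, x2, t in AND)
xor_correct = all(predict(train_perceptron(XOR), x1, x2) == t for x1, x2, t in XOR)
print("AND learned:", and_correct)   # a single line separates AND
print("XOR learned:", xor_correct)   # no line separates XOR -- always fails
```

Adding a hidden layer, as the linked algorithm does, is exactly what gets around this limit.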

Another algorithm is the Genetic Algorithm, a type of evolutionary algorithm. Basically, think of the way DNA works under natural selection. You maintain a population of candidate solutions; the fittest (the best solutions) are more likely to be spliced into the others. The requirement is that candidates be abstracted to a point where this is possible, so the 'chromosome' could be a sequence of moves, etc. Mutation, crossing-over, and so forth stochastically create new candidate solutions. A genetic algorithm is a very advanced greedy algorithm, one whose random nature lets it often escape local maxima (solutions that look the best compared to their neighbors but are not the best overall; better solutions are located somewhere else in the searchable area of solutions). Genetic algorithms are great if you want to search a gigantic area of possible solutions and don't know where to start.

The third algorithm I found compelling in A.I. is the A* search algorithm.

A* search is basically another advanced greedy algorithm, but it uses estimates and heuristics to ensure that it does not finish with a locally optimal but not globally optimal path. In other words, because its heuristic guesses optimistically (it never overestimates the remaining cost), finding a path that appears optimal does not make it assume there is no better one. So given that its search area is finite, it will always find an optimal solution.

A lot of its effectiveness depends on its heuristic h(x), which estimates the remaining path length to the goal. You also must be able to accurately assign weights to the nodes and paths.
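A minimal sketch on a small grid (my own example): here h(x) is Manhattan distance, which never overestimates, so the first time the goal comes off the priority queue, its path length is optimal.

```python
import heapq

# A* on a small grid: g is the cost so far, h is an optimistic (admissible)
# estimate of the remaining cost -- Manhattan distance, which never
# overestimates on a grid with unit-cost moves. '#' cells are walls.

GRID = ["....#",
        ".##.#",
        "...#.",
        ".#...",
        "....."]

def neighbors(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
            yield nr, nc

def astar(start, goal):
    h = lambda cell: abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    frontier = [(h(start), 0, start)]        # priority queue of (f = g + h, g, cell)
    best_g = {start: 0}
    while frontier:
        f, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g                         # optimal path length
        for nxt in neighbors(cell):
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None                              # goal unreachable

print("shortest path length:", astar((0, 0), (4, 4)))   # prints 8
```

Change h to always return 0 and you get plain Dijkstra; the heuristic is what focuses the search toward the goal.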

A.I. can be summed up like this: define a search area, search the area for the optimal solution, repeat. So when a robot 'perceives', it must abstract what it sees so that it can apply a search algorithm. Neural nets 'search' in a different way: during the training phase they search for the set of weights that most closely approximates the optimal solution (à la Newton's method), so training is itself a search.
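That framing can be made literal in a few lines (my own toy example, with made-up data points): fitting y = w·x is just searching a one-dimensional area of candidate weights for the one with the lowest error.

```python
# Training as search: fit y = w * x by defining a search area of candidate
# weights and searching it for the one with minimum squared error --
# exactly the "define an area, search it" framing above.

data = [(1, 2.1), (2, 3.9), (3, 6.2)]        # made-up points, roughly y = 2x

def error(w):
    return sum((y - w * x) ** 2 for x, y in data)

# Search area: weights from -5.00 to 5.00 in steps of 0.01.
candidates = [i / 100 for i in range(-500, 501)]
best_w = min(candidates, key=error)
print("best weight found:", best_w)
```

Gradient-based training does the same thing far more efficiently, following the slope of the error instead of sweeping every candidate.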

How this applies to a real robot is simple: the robot must decide what it is going to do based on current and prior perceptions. Before each decision, the robot runs searches over candidate decisions or sets of decisions to judge how well each meets its current and/or long-term goals. Any of the three algorithms above could be applied for this purpose.

But note: as good as these algorithms are, they are still artificial; nothing about them inherently mimics human intelligence.

More recently, the goal of AI has been to make 'reasonable decisions'. What this mainly means is that we have decided human-level intelligence, whether possible or not, is not close at hand, and most funding does not go to pie-in-the-sky theories. A good balance of theoretical development (abstract algorithms) and practical development (concrete applications of algorithms), each feeding the other, will no doubt be the path to wherever artificial intelligence is headed.

What we may end up with in the end will be a different type of intelligence than our own. I have few doubts that it will complement our own. Just as we are a creation of God, machines are a creation of man (and thus also a creation of God, in effect).

Ok, back to work, folks.

