The importance of goals for intelligence
What is intelligence?
In artificial intelligence, there is a generally agreed-upon definition that intelligence is "the ability of an agent to correctly choose actions manipulating the world so as to achieve its terminal goal". This definition is much easier to formalise and work towards than the vaguer, more anthropomorphic colloquial definition. That does mean there are differences which can make use of the term under this definition counterintuitive if you are still thinking in terms of the other.
A common argument within my family is the question of whether good recall constitutes intelligence (not on its own, according to this definition). One objection to this definition is that Donald Trump is considered intelligent, given his ability to take actions to achieve his goals (*update post-election: perhaps he is as lacking as he seems). Other objections relate to how easier goals might make agents seem relatively more intelligent.
Having an easier goal might make it seem that an agent which otherwise would not be able to correctly choose actions, given a more complex goal, can correctly choose actions. An example of this is a chess program: given its goal of winning the game it is very good at choosing actions, but it does a poor job if that goal is replaced.
I haven't seen a better counterargument than this: the chess program is not actually very good at winning games, because it cannot win a game where it is turned off halfway through. It has no control over its environment; it makes no effort to make the game interesting enough to keep its opponent engaged long enough for it to win, so the game is left in an unwon limbo when the player feels defeated. Why would it? It has no understanding of the world in which it is playing.
When you start to think about what a truly intelligent chess program that wanted to win might do, you quickly run into the control problem.
Understanding intelligence in general
AI textbooks introduce a structure for thinking about how any intelligence operates: the decision tree. Say you have a very simple problem you want to solve: "press the green button".
A decision tree for this, in a world where there are only two possible actions, namely "press the red button" and "press the green button", would be:
Now we can introduce a basic "AI algorithm", breadth-first search (BFS). In this case, we search all our options and see which one results in our goal.
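To make this concrete, here's a minimal sketch of BFS in Python (the function names and the toy state encoding are my own, not from any particular textbook):

```python
from collections import deque

def bfs(start, goal, successors):
    """Breadth-first search: explore states level by level until the goal appears."""
    frontier = deque([[start]])        # a queue of paths, beginning at the start state
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path                # BFS guarantees this is a shortest path
        for nxt in successors(state):
            frontier.append(path + [nxt])
    return None                        # the goal is unreachable

# The two-button world: from the start we can press either button, once.
def button_successors(state):
    return ["pressed red", "pressed green"] if state == "start" else []

print(bfs("start", "pressed green", button_successors))
# ['start', 'pressed green']
```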
We can use this technique to explore a more challenging problem space. An interesting one is the classic puzzle of the farmer trying to cross the river.
The options here are quite simple: the farmer can take the grain, the chicken, or the wolf across the river, but his boat is only big enough for one of them at a time. The catch is that the wolf will eat the chicken, and the chicken will eat the grain, whenever they are left together without the farmer.
We can represent this the same way as with the buttons, but we see it becomes far more complex. Here I have only expanded the graph to 2 river crossings.
As opposed to the first example, where only one of two choices could be taken, and then only once, the farmer can keep crossing the river again and again ad infinitum.
Having expanded two crossings, we can use the same BFS algorithm to try to find a solution, but as can be seen from above, it will take more than two crossings.
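To show what the search actually operates on, here's one possible encoding of the puzzle in Python (the representation is my own choice): a state is simply the set of characters still on the starting bank.

```python
# Everyone begins on the starting bank; the goal is an empty starting bank.
ALL = frozenset({"farmer", "wolf", "chicken", "grain"})
START, GOAL = ALL, frozenset()

def successors(state):
    """Each move: the farmer crosses alone, or with one passenger from his side."""
    here = state if "farmer" in state else ALL - state
    moves = []
    for passenger in [None] + sorted(x for x in here if x != "farmer"):
        crossing = {"farmer"} | ({passenger} if passenger else set())
        # Leaving the start bank removes the crossers; returning adds them back.
        if "farmer" in state:
            moves.append(state - crossing)
        else:
            moves.append(state | crossing)
    return moves
```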
We could make the task easier using a heuristic: a guess that allows us to predict which paths down the tree might not bear fruit.
I would suggest an obvious one in this situation: any node where one of the animals or the grain has been eaten can never lead to a solution where all three make it to the other side of the river.
We can then start pruning the tree of these nodes, which reduces the options we have to search. Those options still exist, but we save time by not looking at them.
Another heuristic which might be good to use is not going back to states we have already visited. You can see that if the farmer takes the chicken back after taking it across the river, we end up back where we started. Since we have already searched our options from that state, it doesn't make sense to search them again.
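Folding both heuristics into the BFS sketch from earlier might look like this (it continues the encoding above, with the usual rule that the wolf eats the chicken and the chicken eats the grain):

```python
from collections import deque

def safe(state):
    """Heuristic 1: prune states where something gets eaten, i.e. where the
    wolf and chicken, or the chicken and grain, share a bank without the farmer."""
    for bank in (state, ALL - state):
        if "farmer" not in bank:
            if {"wolf", "chicken"} <= bank or {"chicken", "grain"} <= bank:
                return False
    return True

def bfs_pruned(start, goal):
    frontier = deque([[start]])
    visited = {start}                        # heuristic 2: never revisit a state
    while frontier:
        path = frontier.popleft()
        for nxt in successors(path[-1]):
            if nxt == goal:
                return path + [nxt]
            if safe(nxt) and nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

solution = bfs_pruned(START, GOAL)
print(len(solution) - 1, "crossings")        # the classic answer: 7 crossings
```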
You can see how, even in this simple problem and using a couple of heuristics, the search tree is quite large.
The importance of goals
Given that problems in AI come down to choosing actions, any intelligence can be seen as moving around in a tree of possibilities. At each choice, new branches spring, and those choices may open or close other choices, leading to further branches springing from each branch.
This branching is what makes the topic so difficult. It takes a surprisingly simple game for the number of branches to exceed the number of subatomic particles in the observable universe.
This is the case for chess and Go.
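As a rough back-of-the-envelope check (~35 legal moves per chess position, ~80 plies per game, and ~10^80 particles are all commonly quoted ballpark figures, not exact values):

```python
# A naive chess game tree: roughly 35 choices per ply over roughly 80 plies.
game_tree = 35 ** 80
particles = 10 ** 80                 # rough count for the observable universe
print(len(str(game_tree)) - 1)       # ~123, i.e. the tree is around 10**123
print(game_tree > particles)         # True, by over forty orders of magnitude
```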
Sometimes the branching is better treated as continuous, which means you have an infinite number of choices. Clearly, you can’t evaluate each of these.
Back in the pre-pandemic times of 2019 I got into playing the calculator game. I found it a lot of fun, particularly because it is a great example of a problem which can be represented as a graph of possible states.
The calculator game
In the game, you are given a set of possible operations, a target state and a max depth.
Given that a 5-button, 5-move level has a total of 5^5 = 3125 leaf nodes, how is a person expected to solve it?
I ended up writing a program to progress past some of the levels I couldn't do. One way to reduce the size of the search tree is to invert it.
Take the tree with 3125 leaf nodes. We only want to find one of those nodes, and for most of the operations we can go backwards.
This seems like a simple tip on the game - try working backwards to see if it helps.
In fact, we can show that working backwards helps a lot, and that it helps more the larger the search space gets.
To estimate the total number of possible leaf nodes (some of which might have the same value), we raise the number of choices at each turn, b, to the power of the number of turns, d, giving b^d.
The nature of this exponential is that searching halfway back from the target, and then only halfway forward from the start, is far cheaper than searching the whole tree.
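Here's a sketch of the idea on a made-up level (the buttons, start value, and target below are my own toy example, not one from the actual game):

```python
# Toy level: start at 0, reach 42 in 4 presses, with buttons +3, x3, and
# "append 1" (which turns 4 into 41). That's 3**4 = 81 leaf paths to brute-force.
ops = [lambda x: x + 3, lambda x: x * 3, lambda x: x * 10 + 1]

def forward(states, depth):
    """All values reachable in exactly `depth` presses."""
    for _ in range(depth):
        states = {op(s) for s in states for op in ops}
    return states

def inverses(x):
    """Each button run backwards, where it can be undone."""
    out = [x - 3]                    # undo +3
    if x % 3 == 0:
        out.append(x // 3)           # undo x3
    if x % 10 == 1:
        out.append(x // 10)          # undo "append 1"
    return out

def backward(states, depth):
    for _ in range(depth):
        states = {p for s in states for p in inverses(s)}
    return states

# Meet in the middle: two presses forward, two presses backward, then intersect.
print(forward({0}, 2) & backward({42}, 2))   # {11}: 0 -> 1 -> 11 -> 14 -> 42
```

Any value in both frontiers sits on a solution; recovering the two short half-paths is then a much smaller search than walking the full tree.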
Can this be taken further, though?
Well, say you understand there to be an instrumental goal: you know that at some point you need to get to some state x.
If x lies somewhere in the middle, between the start and the terminal goal, then by searching out from each of these states towards the others you can reduce the search complexity further.
The reduction may not look like much, but a cut this big in the exponent can have a huge impact.
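In symbols, assuming b choices per turn, a solution at depth d, and an instrumental goal that happens to sit halfway along the path (with each half then searched from both ends):

```latex
\underbrace{b^{d}}_{\text{plain search}}
\;\gg\;
\underbrace{2\,b^{d/2}}_{\text{meet in the middle}}
\;\gg\;
\underbrace{4\,b^{d/4}}_{\text{halfway instrumental goal}}
```

With b = 5 and d = 8, that's 390,625 states versus 1,250 versus 100.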
Further reductions are possible beyond instrumental goal states alone. To go further, hierarchical planning is needed. This is where the agent can group options and represent multiple actions as a single action.
This grouping is something AI researchers haven't yet managed to get machines to do in the general case.