SMARTS 2.0
Studying Adaptation in an Artificial World

Author and Programmer: Michael Gasser

Learning in SMARTS

In the SMARTS Learning module, critters adapt to their environment using reinforcement learning, specifically an algorithm called Q-learning. This algorithm is described in the next section, "The Theory". Even if you are already familiar with Q-learning, you should read that section because some details are particular to SMARTS. The section after it, "The Program", describes the program itself.

The Theory

Learning

All animals and automata exhibit some kind of behavior; they do something in response to the inputs that they receive from the environment they exist in. Some animals and automata change the way they behave over time; given the same input, they may respond differently later on than they did earlier on. Such agents learn. The field of machine learning studies learning algorithms, which specify how the changes in the learner's behavior depend on the inputs received and on feedback from the environment.

Learning algorithms fall into three groups with respect to the sort of feedback that the learner has access to. On the one extreme is supervised learning: for every input, the learner is provided with a target; that is, the environment tells the learner what its response should be. The learner then compares its actual response to the target and adjusts its internal memory in such a way that it is more likely to produce the appropriate response the next time it receives the same input. We can think of learning a simple categorization task as supervised learning. For example, if you're learning to recognize the sounds of different musical instruments, and you're told each time what the instrument is, you can compare your own response to the correct one.

On the other extreme is unsupervised learning: the learner receives no feedback from the world at all. Instead the learner's task is to re-represent the inputs in a more efficient way, as clusters or categories or using a reduced set of dimensions. Unsupervised learning is based on the similarities and differences among the input patterns. It does not result directly in differences in overt behavior because its "outputs" are really internal representations. But these representations can then be used by other parts of the system in a way that affects behavior. Unsupervised learning plays a role in perceptual systems. Visual input, for example, is initially too complex for the nervous system to deal with it directly, and one option is for the nervous system to first reduce the number of dimensions by extracting regularities such as tendencies for regions to be of constant color and for edges to be continuous.

A third alternative, much closer to supervised than unsupervised learning, is reinforcement learning: the learner receives feedback (sometimes) about the appropriateness of its response. For correct responses, reinforcement learning resembles supervised learning: in both cases, the learner receives information that what it did is appropriate. However, the two forms of learning differ significantly for errors, situations in which the learner's behavior is in some way inappropriate. In these situations, supervised learning lets the learner know exactly what it should have done, whereas reinforcement learning only says that the behavior was inappropriate and (usually) how inappropriate it was. In nature, reinforcement learning is much more common than supervised learning. It is rare that a teacher is available who can say what should have been done when a mistake is made, and even when such a teacher is available, it is rare that the learner can interpret the teacher's feedback to provide direct information about what needs to be changed in the learner, that is, features of the learner's nervous system. Consider an animal that has to learn some aspects of how to walk. It tries out various movements. Some work — it moves forward — and it is rewarded. Others fail — it stumbles or falls down — and it is punished with pain.

Thus while much of the focus of machine learning has been on supervised learning, if we are to understand learning in nature, we need to study unsupervised and reinforcement learning. In this class, we will explore one reinforcement learning algorithm in some detail.

Reinforcement learning

(For more on reinforcement learning, see the tutorial by M. E. Harmon and S. S. Harmon.)

To simulate the learning of real biological systems, we need to make some simplifying assumptions about our agent and its behavior. (From now on, I'll refer to our learner as an "agent" to emphasize that it's actively doing things, not just learning what to do.) We will assume that the agent's life is a Markov decision process. That is, at each time step the agent is in one of a set of discrete states and selects one of a set of possible actions; in response, it (sometimes) receives a reinforcement, positive or negative, and finds itself in a new state; and both the reinforcement and the new state depend only on the current state and the action taken, not on anything that happened earlier.

For example, consider an agent in a one-dimensional world of cells. In each cell except those on the end, the agent can go either east or west. At the extreme west end lives a swarm of mosquitoes. At the extreme east, there is a mango tree. Anywhere else, the agent receives no information at all from the environment. The agent can sense which cell it is in, but at the beginning of learning, it has no idea what cell it gets to by going east or west and what sort of reinforcement it will receive, if any. On the basis of the punishment it receives if and when it reaches the west end and the reward it receives if and when it reaches the east end, it must learn how to behave anywhere in the world.

[Figure: the one-dimensional world, with mosquitoes at the west end and a mango tree at the east end]
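To make the setup concrete, here is a minimal sketch of such a world in Python. The number of cells, the reinforcement values, and the names used are illustrative assumptions of mine, not the values built into SMARTS.

    # A minimal sketch of the one-dimensional mosquito-and-mango world.
    # Cell count and reinforcement values are illustrative assumptions,
    # not the values used by SMARTS itself.

    N_CELLS = 6                     # cells 0 (west end) .. N_CELLS - 1 (east end)
    ACTIONS = ("west", "east")

    def step(cell, action):
        """Take an action in a cell; return (next_cell, reinforcement)."""
        next_cell = max(0, cell - 1) if action == "west" else min(N_CELLS - 1, cell + 1)
        if next_cell == 0:
            reinforcement = -1.0    # the mosquitoes at the west end: punishment
        elif next_cell == N_CELLS - 1:
            reinforcement = +1.0    # the mango tree at the east end: reward
        else:
            reinforcement = 0.0     # nowhere else is any reinforcement received
        return next_cell, reinforcement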

The goal of reinforcement learning is to figure out how to choose actions in response to states so that reinforcement is maximized. That is, the agent is learning a policy, a mapping from states to actions. We will divide the agent's policy into two components, how good the agent thinks an action is for a given state and how the agent uses this knowledge to choose an action for a given state.

There are several ways to implement the learning process. We will focus on just one, Q-learning, a form of reinforcement learning in which the agent learns to assign values to state-action pairs. We first need to make a distinction between what is true of the world and what the agent thinks is true of the world. Consider first what's true of the world. If an agent is in a particular state and takes a particular action, we are interested in any immediate reinforcement that's received, because it's relevant for the agent's policy. But we (and the agent) are also interested in any future reinforcements that result from ending up in a new state where further actions can be taken, actions that follow a particular policy. For example, in the mosquito-mango world above, if the agent is in the second cell from the far east, moving east results in no reinforcement, but it puts the agent in a position to get a positive reinforcement on the next action (given the right policy).

Given a particular action in a particular state, followed by behavior that conforms to a particular policy, the agent will receive a particular set of reinforcements. This is a fact about the world. In the simplest case, the Q-value for a state-action pair is the sum of all of these reinforcements, and the Q-value function is the function that maps state-action pairs to values. But the sum of all future reinforcements may be infinite when there is no terminal state, and besides, we may want to weight the future less than the here-and-now, so a discounted cumulative reinforcement is normally used instead: future reinforcements are weighted by a value γ between 0 and 1 (see below for the mathematical details). A higher value of γ means that the future matters more for the Q-value of a given action in a given state.
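One standard way to write this discounted cumulative reinforcement is the following, where $r_{t+k}$ stands for the reinforcement received $k$ steps after taking action $a_t$ in state $s_t$ and then following the policy $\pi$ (this notation is mine, introduced here for clarity):

$Q^{\pi}(s_t, a_t) = \sum_{k=0}^{\infty} \gamma^{k}\, r_{t+k}$

With $\gamma = 0$ only the immediate reinforcement counts; as $\gamma$ approaches 1, reinforcements far in the future count nearly as much as immediate ones.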

If the agent knew the Q-values of every state-action pair, it could use this information to select an action for each state. The problem is that the agent initially has no idea what the Q-values of any state-action pairs are. The agent's goal, then, is to settle on an optimal Q-value function, one that assigns the appropriate values for all state-action pairs. But Q-values depend on future reinforcements, as well as current ones. How can the agent learn Q-values when it only has access to immediate reinforcement? It learns using these two principles, which are the essence of reinforcement learning:

1. If taking an action in a state produces a reinforcement, the stored Q-value for that state-action pair is adjusted in the direction of that reinforcement.
2. If taking an action in a state leads to a new state, the stored Q-value for that state-action pair is also adjusted in the direction of the best Q-value currently stored for the actions available in the new state.

The second principle is the one that makes the reinforcement learning magic happen. It permits the agent to learn high or low values for particular actions from a particular state, even when there is no immediate reinforcement associated with those actions. For example, in our mosquito-mango world, the agent receives a reward when it reaches the east end from the cell just to the west of it. It now knows that that cell is a good one to go to because you can get rewarded in only one move from it.
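As a concrete illustration, suppose the reward at the mango tree is +1 and γ = 0.9 (values chosen here purely for illustration). Going east from the cell next to the tree is then worth 1, going east from the cell before that is worth γ · 1 = 0.9, from the cell before that γ² · 1 = 0.81, and so on:

$1, \quad \gamma \cdot 1 = 0.9, \quad \gamma^2 \cdot 1 = 0.81, \quad \gamma^3 \cdot 1 = 0.729, \;\dots$

The reward propagates westward through the Q-values even though no reinforcement is delivered in the intermediate cells.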

What's left are the mathematical details. First, consider the optimal Q-value function, the one that represents what's true of the world:

$Q^*(s_t, a_t) = r(s_t, a_t) + \gamma \max_{a_{t+1}} Q^*(s_{t+1}, a_{t+1})$

That is, the optimal Q-value for a particular action $a_t$ in a particular state $s_t$ is the sum of the reinforcement received when that action is taken and the discounted best Q-value for the state $s_{t+1}$ that is reached by taking that action. The agent would like to approach this value for each state-action pair. At any given time during learning, the agent stores a particular Q-value for each state-action pair. At the beginning of learning, this value is random or set at some default. Learning should move it closer to its optimal value. In order to do this, the agent repeatedly takes actions in particular states and notes the reinforcements that it receives. It then updates the stored Q-value for that state-action pair using the reinforcement received and the stored Q-values for the next state. Assuming the Q-values are stored in a lookup table, the agent could use this update equation:

$Q(s_t, a_t) = r(s_t, a_t) + \gamma \max_{a_{t+1}} Q(s_{t+1}, a_{t+1})$

This sets the Q-value to be the sum of the reinforcement just received and the discounted best Q-value for the next state, that is, what the agent currently believes the Q-value for the next state to be (what is stored in its lookup table). But this is usually a bad idea because the information just received may be faulty for one reason or another. It is better to update more gradually, to use the new information to move in a particular direction, but not to make too strong a commitment. The following update equation reflects this strategy:

$Q_{t+1}(s_t, a_t) = (1 - \eta)\, Q_t(s_t, a_t) + \eta \left[ r(s_t, a_t) + \gamma \max_{a_{t+1}} Q_t(s_{t+1}, a_{t+1}) \right]$

There is a learning rate, η, which controls the learning step size, that is, how fast learning takes place. The new Q-value for the state and action is the weighted combination of the old Q-value for that state and action and what the new information would lead us to believe. Later we will see how a neural network can replace the lookup table for storing Q-values and how this has advantages when it comes to generalization.
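As an illustration, here is a minimal tabular version of this update in Python, reusing the step function, N_CELLS, and ACTIONS from the world sketch above. The learning rate, discount factor, number of steps, and the purely random action choice are illustrative assumptions; the SMARTS program itself may organize learning differently.

    import random
    from collections import defaultdict

    ETA = 0.1       # learning rate (eta); illustrative value
    GAMMA = 0.9     # discount factor (gamma); illustrative value

    # Lookup table of Q-values; unseen state-action pairs default to 0.0.
    Q = defaultdict(float)

    def update(state, action, reinforcement, next_state):
        """One Q-learning step:
        Q(s, a) <- (1 - eta) * Q(s, a) + eta * (r + gamma * max_a' Q(s', a'))."""
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] = ((1 - ETA) * Q[(state, action)]
                              + ETA * (reinforcement + GAMMA * best_next))

    # A short stretch of learning: start in the middle, act at random,
    # update the table after every move, restart after reaching either end.
    state = N_CELLS // 2
    for _ in range(200):
        action = random.choice(ACTIONS)
        next_state, r = step(state, action)
        update(state, action, r, next_state)
        state = N_CELLS // 2 if next_state in (0, N_CELLS - 1) else next_state

Note that only the Q-value of the state-action pair that was actually tried changes on each step, which is exactly the constraint discussed next.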

An important aspect of reinforcement learning is the constraint that only Q-values for actions that are actually attempted in states that actually occurred are updated. Nothing is learned about an action that is not tried. The upshot of this is that the agent should try a range of actions, especially early on in learning, in order to have an idea what works and what doesn't. More precisely, on any given time step, the agent has a choice: it can pick the action with the highest Q-value for the state it is in (exploitation), or it can pick an action randomly (exploration). Note that there is a tradeoff between these two strategies. Exploitation, because it is based on what the agent knows about the world, is more likely to result in benefits to the agent. On the other hand, exploration is necessary so that actions that would not be tried otherwise can be learned about. We formalize the action decision process in terms of probabilities. The simplest possibility is to flip a coin and with a certain probability exploit your knowledge (pick the action with the highest Q-value) or explore (pick a random action). It also makes sense to have the probability of exploration depend on the length of the time the agent has been learning. Early in learning, it is better to explore because the knowledge the agent has gained so far is not very reliable and because a number of options may still need to be tried. Later in learning, exploitation makes more sense because, with experience, the agent can be more confident about what it knows. Here is one possible equation for the probability of selecting what the agent believes to be the best action:

$P_{\mathrm{exploit}} = 1 - e^{-EA}$

where E is a constant and A is the number of training samples so far (that is, roughly the "age" of the agent). As A increases, the probability rises, and higher values of E lead to greater rates of exploitation.

Here is what happens with two different values of E, 0.1 and 0.01.

[Plots: $P_{\mathrm{exploit}}$ as a function of $A$ for $E = 0.1$ and for $E = 0.01$]
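The same pattern can be seen by computing the formula directly; the ages sampled below are arbitrary:

    import math

    def p_exploit(E, age):
        """Probability of exploiting: P_exploit = 1 - e^(-E * A)."""
        return 1.0 - math.exp(-E * age)

    # Sample the curve at a few arbitrary agent ages for E = 0.1 and E = 0.01.
    for E in (0.1, 0.01):
        samples = ", ".join(f"A={a}: {p_exploit(E, a):.2f}" for a in (10, 50, 100, 500))
        print(f"E = {E}: {samples}")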

A somewhat better approach is to have the probability of selecting an action also depend on its relative Q-value, not just on whether its value is highest. The higher an action's Q-value, the greater the probability of choosing it. But because these are still probabilities, there is still a chance of picking an action that currently has a low Q-value; that is, exploration is possible throughout learning, and it is more likely the closer together the Q-values for the current state are. Here is one way of accomplishing this, the one that is used in SMARTS:

$P(a_t) = \dfrac{e^{E A\, Q(s_t, a_t)}}{\sum_{u} e^{E A\, Q(s_t, u)}}$

where $u$ ranges over all of the possible actions available in state $s_t$.
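Here is a sketch of this selection rule in Python, again reusing the lookup table Q and ACTIONS from the sketches above. The weighting by $e^{E A Q(s_t, a)}$ follows the formula; the helper name and the use of random.choices are my own choices, not taken from the SMARTS code.

    import math
    import random

    def choose_action(Q, state, actions, E, age):
        """Choose an action with probability proportional to e^(E * A * Q(s, a)),
        so higher-valued actions are favored but low-valued ones remain possible."""
        weights = [math.exp(E * age * Q[(state, a)]) for a in actions]
        return random.choices(list(actions), weights=weights, k=1)[0]

With this in place, choose_action(Q, state, ACTIONS, E, age) could replace the purely random choice in the learning loop sketched earlier, gradually shifting the agent from exploration toward exploitation as its Q-values and its age grow.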

The Program
