class: center, middle

# AI in Digital Entertainment
### Reinforcement Learning

---
class: center, middle

# Reinforcement Learning

---
class: medium

# The Problem

* We want an agent that "figures out" how to "perform well"
* Given an environment in which the agent performs actions, we tell the agent what reward it receives
* We generally assume that our states are discrete, and that actions transition between them
* The goal of the agent is to learn which actions are "good" to perform in each state
* Rewards may be negative, representing "punishment"

---

# The Problem
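A minimal sketch of this setup, with a hypothetical three-state environment (the states, actions, and rewards below are illustrative only, not from the slides):

```python
import random

# Hypothetical environment: three discrete states, two actions.
STATES = ["start", "middle", "goal"]
ACTIONS = ["left", "right"]

def transition(state, action):
    """T(s, a): which state an action leads to (deterministic here)."""
    if state == "start":
        return "middle" if action == "right" else "start"
    if state == "middle":
        return "goal" if action == "right" else "start"
    return "goal"  # the goal state is absorbing

def reward(state):
    """R(s): the immediate reward for being in a state."""
    return 1.0 if state == "goal" else 0.0

# One step of interaction: act, observe the next state and its reward.
state = "start"
action = random.choice(ACTIONS)
next_state = transition(state, action)
print(state, action, "->", next_state, "reward", reward(next_state))
```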
---

# Reinforcement Learning

* The idea is that the agent performs several trial runs in the environment to determine a good policy
* Compare to how a human player might learn how to play a game: try some actions, observe the result
* Games are a natural fit for this kind of learning, but there are other applications as well!

---

# An Example
---

# An Example
---
class: medium

# Reinforcement Learning: Notation

* `s` is a state
* `a` is an action
* `T(s,a)` is the *state transition function*; it tells us which state we reach when we use action `a` in state `s`
* `R(s)` is the (immediate) *reward* for a particular state
* `V(s)` is the (utility) value of a state

---

# Policies

* `\(\pi(s)\)` is a *policy*
* A policy tells the agent which action it should take in each state
* The goal of learning is to learn a "good" policy
* The *optimal policy* is usually written as `\(\pi^*(s)\)`

---

# A Policy
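A minimal sketch of a policy for a small discrete state space (the grid positions and actions below are hypothetical):

```python
# A policy maps every state to an action.  For a small, discrete state space
# a dictionary is enough; the grid positions below are purely illustrative.
policy = {
    (0, 0): "right",
    (0, 1): "right",
    (0, 2): "down",
    (1, 2): "down",
    (2, 2): "stay",   # goal cell
}

def pi(state):
    """pi(s): the action the policy prescribes in state s."""
    return policy[state]

print(pi((0, 1)))  # -> right
```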
---

# The Bellman Equation

* `V(s)` is the (utility) value of a state
* This utility value depends on the reward of the state, as well as all future rewards the agent will receive
* However, the future rewards depend on what the agent will do, i.e., on what the policy says

$$ V^\pi(s) = R(s) + \gamma V^\pi(T(s, \pi(s))) $$

For the optimal policy:

$$ V(s) = R(s) + \gamma \max_a V(T(s, a)) $$

---

# (Partial) Value Function
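A minimal sketch of evaluating `\(V^\pi\)` by repeatedly applying the Bellman equation (the tiny chain environment and the discount factor are illustrative placeholders):

```python
GAMMA = 0.9  # discount factor (illustrative)

# Hypothetical three-state chain: start -> middle -> goal (goal is absorbing).
def transition(state, action):
    table = {"start": {"right": "middle", "left": "start"},
             "middle": {"right": "goal", "left": "start"}}
    return table.get(state, {}).get(action, "goal")

def reward(state):
    return 1.0 if state == "goal" else 0.0

policy = {"start": "right", "middle": "right", "goal": "right"}

# Repeatedly apply V(s) <- R(s) + gamma * V(T(s, pi(s))) until the values settle.
V = {s: 0.0 for s in ("start", "middle", "goal")}
for _ in range(100):
    V = {s: reward(s) + GAMMA * V[transition(s, policy[s])] for s in V}

print(V)  # the goal's value approaches R(goal) / (1 - gamma)
```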
---

# Learning

* The problem is: we don't generally know `V`, because we don't know `T(s,a)` (or `R(s)`) a priori
* What can we do?
* In our learning runs (episodes), we record the rewards we get, and use them to find an estimate of `V`
* But we would also need to learn which state each action takes us to, in order to determine which action we should take

---

# The Q function

* Instead of learning `V` directly, we define a new function `Q(s,a)` that satisfies `\(V(s) = \max_a Q(s,a)\)`

$$ Q(s,a) = R(s) + \gamma \max_{a'} Q(T(s,a),a') $$

* Now we learn the value of `Q` for "each" pair of state and action
* The agent's policy is then `\(\pi(s) = \text{argmax}_a Q(s,a)\)`
* How do we learn `Q`?

---

# Q-Learning

* We store Q as a table, with one row per state and one column per action
* We then initialize the table "somehow"
* During each training episode, we update the table when we are in a state `s` and perform the action `a`:
$$ Q(s,a) \leftarrow Q(s,a) + \alpha (R(s) + \gamma \max_{a'} Q(T(s,a), a') - Q(s,a)) $$
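A minimal sketch of this update for a tabular Q function (the learning rate and discount factor are illustrative placeholders):

```python
from collections import defaultdict

ALPHA = 0.1   # learning rate (illustrative)
GAMMA = 0.9   # discount factor (illustrative)

# Q-table: unseen (state, action) pairs default to 0.0.
Q = defaultdict(float)

def q_learning_update(s, a, r, s_next, actions):
    """Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(s_next, a_next)] for a_next in actions)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
```

Here `r` is the reward observed after the transition and `actions` is the set of actions available in the next state.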
---

# Q-Learning: Training

* How do we train this agent?
* We could just pick the action with the highest Q-value in our table
* But then the initial values of the table would guide the exploration
* Instead, we use an exploration policy
* This could be random, or `\(\varepsilon\)`-greedy

---

# Q-Table
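A minimal sketch of `\(\varepsilon\)`-greedy action selection over such a Q-table (the value of epsilon is an illustrative choice):

```python
import random

def epsilon_greedy(Q, state, actions, epsilon=0.1):
    """With probability epsilon explore a random action, otherwise exploit the best one."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])
```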
---
class: small

# SARSA

* Q-Learning is very flexible, because it can use any policy to explore and construct the Q-table (off-policy learning)
* However, when we already have a somewhat reasonable policy, it may be faster to use the *actual actions* the agent takes to update the Q-values (on-policy learning)
* This approach is called SARSA: state-action-reward-state-action

SARSA:
$$ Q(s,a) \leftarrow Q(s,a) + \alpha (R(s) + \gamma Q(T(s,a), a') - Q(s,a)) $$
Compare with Q-Learning:
$$ Q(s,a) \leftarrow Q(s,a) + \alpha (R(s) + \gamma \max_{a'} Q(T(s,a), a') - Q(s,a)) $$
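A minimal sketch of the SARSA update; unlike Q-Learning, it uses the action the agent actually takes next (`a_next`) rather than the maximizing one (constants are the same illustrative placeholders as before):

```python
ALPHA = 0.1   # learning rate (illustrative)
GAMMA = 0.9   # discount factor (illustrative)

def sarsa_update(Q, s, a, r, s_next, a_next):
    """On-policy update: the target uses Q(s', a') for the action actually chosen."""
    Q[(s, a)] += ALPHA * (r + GAMMA * Q[(s_next, a_next)] - Q[(s, a)])
```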
---
class: small

# Policy Search

* Finally, instead of learning the values of `Q`, we could just directly learn a good policy
* For example, start with an initial policy and tweak it until a good result is obtained
* The idea is to use a parameterized representation of `\(\pi\)` that has fewer parameters than there are states
* The problem is that a policy is usually a discontinuous function (if we change one parameter a little bit, we get a completely different action), so we can't use the gradient to find optima
* Solution: use a stochastic policy, which has probabilities for each action to be selected, and change these probabilities continuously

---
class: mmedium

# Markov Decision Processes

* So far, we have assumed actions will take us from one state to another deterministically
* However, in many environments, transitions are non-deterministic
* Fortunately, we don't have to change much: instead of a transition function `T(s,a)` we have transition probabilities `\(P(s' | s, a)\)`
* Wherever we used `T(s,a)`, we now take the expected value over all possible successor states
* Note that with Q-Learning we did not have to learn `T(s,a)`, and we also do **not** have to learn `\(P(s' | s, a)\)` (model-free algorithm)

---
class: small

# Deep Q-Learning

* One major limitation of Q-Learning is that the Q-table can become quite large, and hard to learn completely
* However, the Q-table and the policy are just functions, and we could approximate them
* What do we use for function approximation? Neural networks
* There are a variety of approaches for this, including Deep Q Networks (DQN), Dueling Deep Q Networks (DDQN), etc.
* Most of these advances are made to overcome problems with going from the discrete world of tabular Reinforcement Learning to the continuous world of ANNs

---

# Deep Q-Learning: Example
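A minimal sketch of approximating Q with a small neural network, assuming PyTorch; the state size, number of actions, network width, and learning rate are illustrative placeholders:

```python
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 4, 2, 0.99   # placeholder sizes / discount

# A tiny Q-network: maps a state vector to one Q-value per action.
q_net = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_ACTIONS),
)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_step(state, action, reward, next_state, done):
    """One TD update: move Q(s,a) toward r + gamma * max_a' Q(s',a').

    state, next_state: float tensors of shape (STATE_DIM,)
    done: 1.0 if the episode ended after this transition, else 0.0
    """
    q_sa = q_net(state)[action]
    with torch.no_grad():
        target = reward + GAMMA * q_net(next_state).max() * (1.0 - done)
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A full DQN additionally uses experience replay and a separate target network to stabilize training; this sketch shows only the basic TD update through the network.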
---
class: center, middle

# Some Applications

---

# Applications

* Games are a natural fit for Reinforcement Learning
* Indeed, arguably the first ever AI agent was a Reinforcement Learning agent for checkers in 1959
* The Backgammon agent TD-Gammon was another early success, able to beat top players in 1992
* Another popular application is robot control, such as for an inverted pendulum, autonomous cars, helicopters, etc.

---

# Games

* Games, particularly classic arcade/Atari/NES games, have become a standard testing environment for Reinforcement Learning
* Often, the state is given to the agent in terms of the raw pixels of the screen
* Reinforcement Learning agents have become *very* good at playing these games
* Sometimes they find exploits!

---

# Exploiting Coast Runners
---

# Q*Bert
---

# Inverted Pendulum
*Figure: cart-pendulum diagram — a cart of mass `M` moving along the `x`-axis under an applied force `F`, with a pendulum of mass `m` and length `l` at angle `θ`.*
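As a sketch of how tabular Q-Learning could be applied to this kind of control task, assuming the Gymnasium `CartPole-v1` environment and a coarse, purely illustrative discretization of the continuous state:

```python
from collections import defaultdict
import random

import gymnasium as gym
import numpy as np

env = gym.make("CartPole-v1")
Q = defaultdict(float)
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1   # illustrative hyperparameters

def discretize(obs):
    """Coarse binning of the continuous observation into a hashable discrete state."""
    return tuple(np.round(obs, 1))

for episode in range(500):
    obs, _ = env.reset()
    s = discretize(obs)
    done = False
    while not done:
        # epsilon-greedy action selection over the Q-table
        if random.random() < EPSILON:
            a = env.action_space.sample()
        else:
            a = max(range(env.action_space.n), key=lambda x: Q[(s, x)])
        obs, r, terminated, truncated, _ = env.step(a)
        s_next = discretize(obs)
        best_next = max(Q[(s_next, x)] for x in range(env.action_space.n))
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s, done = s_next, terminated or truncated
```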
---

# Stanford Autonomous Helicopter
---

# Circuit Design
---
class: small

# Self-Driving Cars

* Reinforcement Learning is usually an integral part of self-driving cars
* What do we use as the reward function?
  - Get the passengers to the destination
  - Don't kill/injure anyone
* What if an accident is unavoidable?
* One scenario: crash the car into a motorcyclist with a helmet, vs. one without a helmet
* The motorcyclist with the helmet would probably be injured less, so our agent would get a less negative reward
* But now we are rewarding people for riding without helmets ...

---

# Moral Machine
---

# Moral Machine: Results
---
class: mmedium

# General Video Game AI

* A current trend in Reinforcement Learning is to build agents that can learn to play *any* game
* The Video Game Description Language (VGDL) is used to describe games and levels
* Researchers design an agent, and then the agent is given a number of different games and has to learn how to play them all well
* There is even a GVGAI competition, where participants get some games to build and train their agents, then get the competition games a few weeks before the competition to fine-tune, and then play unknown levels of these games at the competition

---
class: center, middle

# Inverse Reinforcement Learning

---

# Inverse Reinforcement Learning

* As we have seen, defining a good reward function can be tricky
* But what if we could observe an expert playing a game and try to learn what they are working towards?
* One approach to this is called Inverse Reinforcement Learning, where we try to learn the reward function from observations of the states and actions

---

# Inverse Reinforcement Learning
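A minimal sketch of a common starting point: represent the unknown reward as a linear function of state features and compare the expert's feature expectations with those of a candidate policy (the feature function and trajectories here are hypothetical):

```python
import numpy as np

# Hypothetical feature function phi(s): here the state itself is used as a feature vector.
def phi(state):
    return np.array(state, dtype=float)

def feature_expectations(trajectories, gamma=0.9):
    """Discounted feature counts averaged over observed (expert) trajectories."""
    mu = np.zeros_like(phi(trajectories[0][0]))
    for traj in trajectories:
        for t, state in enumerate(traj):
            mu += (gamma ** t) * phi(state)
    return mu / len(trajectories)

# One common assumption: the unknown reward is linear in the features, R(s) = w . phi(s).
# IRL methods then adjust w so that the expert's feature expectations score at least as
# well as those of any alternative policy.
def reward(state, w):
    return float(np.dot(w, phi(state)))
```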
---

# Inverse Reinforcement Learning: How?

* Function approximation of the reward function (using neural networks, radial basis functions, etc.)
* Problem: multiple reward functions can explain the expert's behavior equally well
  - Measure the entropy of the different solutions and use that to decide
  - Use prior and/or posterior probabilities of trajectories for a given reward function
* IRL is still a relatively new field of research

---

# Next Week

* [Improving Generalization Ability in a Puzzle Game Using Reinforcement Learning](http://www.cig2017.com/wp-content/uploads/2017/08/paper_71.pdf)

---
class: ssmall

# References

* Chapter 21 of *AI: A Modern Approach*, by Russell and Norvig
* [A Painless Q-Learning Tutorial](http://mnemstudio.org/path-finding-q-learning-tutorial.htm)
* [Faulty Reward Functions in the Wild](https://openai.com/blog/faulty-reward-functions/)
* [Exploiting Q*Bert](https://www.theverge.com/tldr/2018/2/28/17062338/ai-agent-atari-q-bert-cracked-bug-cheat)
* [Deep Reinforcement Learning Doesn't Work Yet](https://www.alexirpan.com/2018/02/14/rl-hard.html)
* [The Moral Machine Experiment](https://www.nature.com/articles/s41586-018-0637-6)
* [Deep Reinforcement Learning for General Video Game AI](https://arxiv.org/pdf/1806.02448.pdf)
* [GVGAI Gym](http://www.gvgai.net/)
* [A Survey of Inverse Reinforcement Learning: Challenges, Methods and Progress](https://arxiv.org/pdf/1806.06877.pdf)