class: center, middle

# Creación de Videojuegos

### AI Behavior

---

# Game AI

Reminder: The first rule of game AI:

### It does not matter how smart/dumb your AI is, as long as it *plays well*

---

# Ad-hoc behavior

* Let's start simple: We have a bandit that's walking back and forth
* Our player comes near

```C
if (distance(player, bandit) < x)
{
    attack(bandit, player);
}
```

---

# Ad-hoc behavior

* We play with this a bit, and realize that the player can now bait ("kite") the bandit to town, where the friendly NPCs can kill him without a problem
* Let's solve that!

```C
if (distance(bandit, bandit_camp) > y)
{
    walk_to(bandit, bandit_camp);
}
else if (distance(player, bandit) < x)
{
    attack(bandit, player);
}
```

---

# Ad-hoc behavior

* Now the player can bait the bandit to the edge of his follow radius
* What if the player walks back and forth on that line?
* We could add another condition for that ...
* Note: This does not mean that ad-hoc coding of behavior is bad; it often works well

---
class: center, middle

# AI

## What is AI?

---

# What is AI

* Just like with "game", no one agrees what "AI" really is
* Commonly, "the science of intelligent agents"
* Or, as quoted by Douglas R. Hofstadter, "AI is whatever hasn't been done yet"
* Pathfinding "was" once AI
* For games: Whatever the opponents do

---
class: small

# What is an agent

* We often talk of "AI agents"
* Each "intelligent" character in your game is an "agent"
* If you play "honestly", each agent only "knows" what a player would know if they could play as that character
* For example: If you have a bandit, it can't see the player if the player is behind an obstacle
* The agent uses its knowledge to determine which action to take
* Note: It's not mandatory to play "honestly" for the AI agents

---
class: middle, center

# Some AI techniques

## (that are commonly used in games)

---

# Decision Trees

* Remember the series of if statements from earlier?
* We can represent them as a tree
* Each branch is one if statement, based on some condition
* The leaves of the tree are actions the character takes

---

# Decision Trees
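
A rough sketch of how such a tree could look in code; the node layout, condition functions and thresholds are made up for illustration, not part of any engine API:

```C
#include <stdbool.h>
#include <stdio.h>

/* Illustrative agent state; in a real game this would be your bandit struct */
typedef struct { float dist_to_player; float dist_to_camp; } Bandit;

/* A decision node either tests a condition and descends into one of its two
   children, or, if it is a leaf, performs an action */
typedef struct DecisionNode {
    bool (*condition)(Bandit *);          /* NULL for leaves */
    void (*action)(Bandit *);             /* only used by leaves */
    struct DecisionNode *if_true, *if_false;
} DecisionNode;

static void evaluate(DecisionNode *node, Bandit *b)
{
    while (node->condition)               /* walk down to a leaf ... */
        node = node->condition(b) ? node->if_true : node->if_false;
    node->action(b);                      /* ... and run its action */
}

/* Made-up conditions and actions for the bandit from the earlier slides */
static bool far_from_camp(Bandit *b) { return b->dist_to_camp > 20.0f; }
static bool player_near(Bandit *b)   { return b->dist_to_player < 5.0f; }
static void walk_to_camp(Bandit *b)  { (void)b; puts("returning to camp"); }
static void attack_player(Bandit *b) { (void)b; puts("attacking player"); }
static void wander(Bandit *b)        { (void)b; puts("wandering"); }

int main(void)
{
    DecisionNode camp   = { NULL, walk_to_camp,  NULL, NULL };
    DecisionNode attack = { NULL, attack_player, NULL, NULL };
    DecisionNode idle   = { NULL, wander,        NULL, NULL };
    DecisionNode near   = { player_near,   NULL, &attack, &idle };
    DecisionNode root   = { far_from_camp, NULL, &camp,   &near };

    Bandit bandit = { 3.0f, 10.0f };      /* close to the player and to camp */
    evaluate(&root, &bandit);             /* prints "attacking player" */
}
```
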

---

# Decision Trees: Limitations

* We haven't actually changed anything from the if statements (other than drawing them)
* Designing a decision tree is still a lot of manual work
* There's also no persistence: the agent will decide on a new behavior every time the tree is evaluated
* There is one nice thing: Decision trees can (sometimes) be learned with Machine Learning techniques

---

# Finite State Machines

* We can make our code nicer if we separate decisions and behavior
* For example: Wandering around is one behavior, attacking the player another, etc.
* Each of these behaviors is a "state" of the bandit
* The bandit decides when to change states depending on the game state

---

# Finite State Machines
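
A minimal sketch of such a state machine for the bandit, assuming made-up distance fields and thresholds; unlike the decision tree, the current state is remembered between frames:

```C
#include <stdio.h>

/* Illustrative states; the thresholds and the Bandit fields are assumptions */
typedef enum { WANDER, ATTACK, RETURN_TO_CAMP } BanditState;

typedef struct { BanditState state; float dist_to_player, dist_to_camp; } Bandit;

/* Called once per frame: check for transitions, then run the current state's behavior */
static void update_bandit(Bandit *b)
{
    switch (b->state) {
    case WANDER:
        if (b->dist_to_player < 5.0f)       b->state = ATTACK;
        else puts("wandering around the camp");
        break;
    case ATTACK:
        if (b->dist_to_camp > 20.0f)        b->state = RETURN_TO_CAMP;
        else if (b->dist_to_player >= 5.0f) b->state = WANDER;
        else puts("attacking the player");
        break;
    case RETURN_TO_CAMP:
        if (b->dist_to_camp < 1.0f)         b->state = WANDER;
        else puts("walking back to camp");
        break;
    }
}

int main(void)
{
    Bandit bandit = { WANDER, 3.0f, 2.0f };
    update_bandit(&bandit);   /* the player is near: WANDER -> ATTACK */
    update_bandit(&bandit);   /* prints "attacking the player" */
}
```
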

---
class: small

# Finite State Machines: Limitations

* There's no real concept of "time"; it has to be "added"
* If you just want to add one state, you have to determine how it relates to every other state
* If you have two Finite State Machines, they are hard to compose
* It's also kind of hard to reuse subparts
* For example: A guard has a routine consisting of multiple states to chase an intruder; the bandit could use that same routine, but it would connect differently to the rest of its behavior

---

# Hierarchical Finite State Machines

* Finite State Machines define the behavior of the agent
* But we said the nodes are behaviors?!
* We can make each node another sub-machine!
* This leads to *some* reusability, and eases authoring

---
class: small

# Behavior Trees

* Let's still use a graph, but make it a tree!
* If we have a subtree, we now only need to worry about one connection: its parent
* The *leaves* of the tree will be the actual actions, while the interior nodes define the decisions
* Each node can either be successful or not, which is what the interior nodes use for their decisions
* We can have different kinds of nodes for different kinds of decisions
* This is extensible (new kinds of nodes), easily configurable (just attach different nodes together to make a tree) and reusable (subtrees can be used multiple times)

---

# Behavior Trees: Common Node types

* Choice: Execute the first child that is successful
* Sequence: Execute all children until one fails
* Loop: Keep executing the child (or children) until one fails
* Random choice: Execute one of the children at random
* Timing: Execute the first child for x seconds, then the second child, etc.

---

# Behavior Trees
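
A minimal sketch of such a tree with only leaf, choice and sequence nodes; the status values and node layout are one possible design, not a fixed API:

```C
#include <stddef.h>

/* Each tick of a node reports whether it succeeded, failed, or is still running */
typedef enum { SUCCESS, FAILURE, RUNNING } Status;
typedef enum { LEAF, CHOICE, SEQUENCE } NodeKind;

typedef struct BTNode {
    NodeKind kind;
    Status (*action)(void *agent);    /* used by leaves only */
    struct BTNode **children;
    size_t num_children;
} BTNode;

static Status tick(BTNode *node, void *agent)
{
    if (node->kind == LEAF)
        return node->action(agent);

    for (size_t i = 0; i < node->num_children; i++) {
        Status s = tick(node->children[i], agent);
        if (node->kind == CHOICE && s != FAILURE)
            return s;                 /* choice: first child that does not fail */
        if (node->kind == SEQUENCE && s != SUCCESS)
            return s;                 /* sequence: stop as soon as a child fails */
    }
    return node->kind == CHOICE ? FAILURE : SUCCESS;
}

/* Stub leaves just for the demo */
static Status see_player(void *a) { (void)a; return FAILURE; }
static Status wander(void *a)     { (void)a; return SUCCESS; }

int main(void)
{
    BTNode see  = { LEAF, see_player, NULL, 0 };
    BTNode walk = { LEAF, wander,     NULL, 0 };
    BTNode *kids[] = { &see, &walk };
    BTNode root = { CHOICE, NULL, kids, 2 };  /* try to spot the player, else wander */
    return tick(&root, NULL) == SUCCESS ? 0 : 1;
}
```

New node types, such as the random choice or timing nodes from the previous slide, would just be more cases in `tick`.
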

---

# Let's make a Behavior Tree for a Bandit!

* She can walk somewhere
* She can see the player
* She can sound the alarm
* She can hear the alarm of the bandit camp
* She can attack the player

---

# Planning

* What if we want the agent to decide on a strategy?
* So far we have only looked at how we can assemble behaviors statically
* Idea: Give the agent a goal and let it figure out how to reach that goal
* Of course the agent also has to know what it can do, i.e. which actions it can perform

---

# Planning

The planning problem: Given the current game state, a list of actions, and a goal, find a sequence of actions that leads to the goal

--

Note: This is basically pathfinding! And what do we use for pathfinding? A*!

---

# Planning

* The current game state is our start vertex
* Any vertex in which the goal is satisfied is a goal vertex
* Each vertex has one edge for each action that can be taken in it
* To use A* you'll have to define a heuristic

---

# Planning: Example
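
A sketch of how the planning problem could be encoded for such a search; the facts, actions and costs are invented for illustration, and the actual search could then reuse an A* implementation just like for pathfinding:

```C
#include <stdbool.h>
#include <stdio.h>

/* The world state is a bitmask of facts that are currently true */
typedef unsigned int WorldState;

enum {
    PLAYER_VISIBLE = 1u << 0,
    ALARM_SOUNDED  = 1u << 1,
    HAS_WEAPON     = 1u << 2,
    PLAYER_DEAD    = 1u << 3,
};

/* An action is an edge in the search graph: it requires some facts and adds others */
typedef struct {
    const char *name;
    WorldState preconditions;
    WorldState effects;
    float      cost;
} Action;

static const Action actions[] = {
    { "sound_alarm", PLAYER_VISIBLE,              ALARM_SOUNDED, 1.0f },
    { "pick_up_axe", 0,                           HAS_WEAPON,    1.0f },
    { "attack",      PLAYER_VISIBLE | HAS_WEAPON, PLAYER_DEAD,   2.0f },
};

/* An action is an outgoing edge of a vertex (state) if its preconditions hold */
static bool applicable(const Action *a, WorldState s)
{
    return (s & a->preconditions) == a->preconditions;
}

static WorldState apply(const Action *a, WorldState s)
{
    return s | a->effects;
}

/* One possible heuristic: the number of goal facts that are still missing */
static int heuristic(WorldState s, WorldState goal)
{
    WorldState missing = goal & ~s;
    int count = 0;
    while (missing) { count += missing & 1u; missing >>= 1; }
    return count;
}

int main(void)
{
    WorldState start = PLAYER_VISIBLE;
    WorldState goal  = PLAYER_DEAD;
    printf("missing goal facts: %d\n", heuristic(start, goal));          /* 1 */
    printf("can attack already? %d\n", applicable(&actions[2], start));  /* 0 */
    printf("state after picking up the axe: %u\n", apply(&actions[1], start));
}
```
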

---

# Adversaries?

* Planning doesn't really help if another character, or the player, can interfere
* How do we account for this?
* For example, how can we play chess if the opponent makes moves that affect our plans?
* Adversarial search!

---
class: small

# Minimax!

* Let's say we want to get the highest possible score
* Then our opponent wants us to get the lowest possible score
* For each of our potential actions, we look at each of the opponent's possible actions
* The opponent will pick the action that gives us the lowest score, and we will pick from our actions the one where the opponent's choice gives us the highest score
* How does the opponent decide what to pick? The same way!

---

# Minimax

*(Figure: example minimax game tree with leaf scores being propagated up through alternating levels)*


---

# Minimax

*(Figure: example minimax game tree with leaf scores being propagated up through alternating levels)*

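
A minimal minimax sketch over a hard-coded toy game tree; the leaf values are made up and are not the ones from the figure:

```C
#include <stdio.h>

/* A complete binary tree of depth 3; node i on one level has children 2i and
   2i+1 on the next level, and the leaves carry the scores */
static const int leaf_values[8] = { 10, 5, -10, 7, 5, -5, -7, 0 };

static int minimax(int node, int depth, int maximizing)
{
    if (depth == 0)
        return leaf_values[node];              /* leaf: just return its score */

    int left  = minimax(node * 2,     depth - 1, !maximizing);
    int right = minimax(node * 2 + 1, depth - 1, !maximizing);

    /* we pick the child that is best for us, the opponent the one worst for us */
    if (maximizing)
        return left > right ? left : right;
    else
        return left < right ? left : right;
}

int main(void)
{
    /* root is node 0, three plies deep, and it is the max player's turn */
    printf("value of the root: %d\n", minimax(0, 3, 1));   /* prints 7 */
}
```

In a real game there is no explicit tree; the recursion would instead generate the possible moves from the current game state.
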

---

# Minimax

Let's draw the tree for Tic Tac Toe

---

# Alpha-beta Pruning

*(Figure: example game tree with alternating MAX and MIN levels, illustrating alpha-beta pruning)*

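
A sketch of the same toy minimax search from before, now with alpha-beta pruning added (the leaf values are again made up):

```C
#include <limits.h>
#include <stdio.h>

static const int leaf_values[8] = { 10, 5, -10, 7, 5, -5, -7, 0 };

/* alpha: score the max player is already guaranteed; beta: score the min
   player is already guaranteed; once beta <= alpha the remaining children
   cannot change the result and are skipped */
static int alphabeta(int node, int depth, int alpha, int beta, int maximizing)
{
    if (depth == 0)
        return leaf_values[node];

    int best = maximizing ? INT_MIN : INT_MAX;
    for (int child = node * 2; child <= node * 2 + 1; child++) {
        int value = alphabeta(child, depth - 1, alpha, beta, !maximizing);
        if (maximizing) {
            if (value > best)  best  = value;
            if (best  > alpha) alpha = best;
        } else {
            if (value < best)  best  = value;
            if (best  < beta)  beta  = best;
        }
        if (beta <= alpha)
            break;              /* prune: the rest of this subtree is irrelevant */
    }
    return best;
}

int main(void)
{
    /* same result as plain minimax, but part of the tree is never visited */
    printf("value of the root: %d\n", alphabeta(0, 3, INT_MIN, INT_MAX, 1));
}
```
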

---
class: small

# Alpha-beta pruning

* For the max player: Remember the minimum score they will reach in nodes that were already evaluated (alpha)
* For the min player: Remember the maximum score they will reach in nodes that were already evaluated (beta)
* If beta is less than alpha, stop evaluating the subtree
* Example: If the max player can reach 5 points by choosing the left subtree, and the min player finds an action in the right subtree that results in 4 points, they can stop searching
* If the right subtree were reached, the min player could choose the action that results in 4 points; therefore the max player will never choose the right subtree, because they can get 5 points in the left one

---
class: small

# Minimax: Limitations

* A tree for Tic Tac Toe is already quite large to draw
* Imagine one for chess
* Even with Alpha-Beta pruning it's impossible to evaluate all nodes
* Use a guess! For example: Board value after 3 turns
* What about unknown information (like a deck that is shuffled)?

---
class: small

# References

* [Finite State Machines for Game AI (ES)](https://gamedevelopment.tutsplus.com/es/tutorials/finite-state-machines-theory-and-implementation--gamedev-11867)
* [Halo 2 AI using Behavior Trees](http://www.gamasutra.com/view/feature/130663/gdc_2005_proceeding_handling_.php)
* [Sliding puzzle using A*](https://blog.goodaudience.com/solving-8-puzzle-using-a-algorithm-7b509c331288)
* [Building a Chess AI with Minimax](https://medium.freecodecamp.org/simple-chess-ai-step-by-step-1d55a9266977)