Instructor: Dr. Markus Eger
Email: markus.eger.ucr@gmail.com
Office hours: Tuesday, Friday, 2-2:55pm
Class: Friday, 5-9pm
Originally from Austria
BSc and MSc in Computer Science from University of Technology Graz, Austria
PhD in Computer Science from NC State University, USA
I speak a little Spanish
Games: Smite, Guild Wars 2, Incremental Games
I also like board games (Dominion, Pandemic, Ricochet Robots, ...)
For my dissertation ("Intentional Agents for Doxastic Games") I worked on games involving communication
Hanabi: Cooperative card game with restricted communication
One Night Ultimate Werewolf: Unrestricted communication, featuring lies and deception
I also worked on narrative generation for a bit, particularly detective stories
The basis for my work was Dynamic Epistemic Logic, Planning, and Intentionality
Name
What do you work on?
Games
Fun facts?
Basic AI techniques
Planning, Search
Beliefs, Desires, and Intentions
Machine Learning: Neural Networks, Vector Models, Reinforcement Learning
Procedural Content Generation
I will (loosely) follow Artificial Intelligence and Games by Georgios N. Yannakakis and Julian Togelius
There is a free PDF available on the website
Another helpful reference for some parts is the Procedural Content Generation textbook by Noor Shaker, Julian Togelius, and Mark J. Nelson (also available as a free PDF from the website)
Not free, but highly recommended is also Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig
Lectures introduce basic concepts
We will also read and discuss papers about particular applications for the concepts
Each student has to present one of these papers in class, and lead a discussion
You are all expected to read the papers before class to be able to discuss them in class
There will also be a project, in which you implement an AI technique of your choice
30%: Paper presentation
20%: Quizzes about the paper contents and discussion participation
50%: Project
If you are signed up for the lab: Your project counts as the lab
For the project you can choose one of four (five) options:
Implement an AI technique from a paper (published at CIG, AIIDE, or similar)
Implement an agent for a (popular) game
Implement/improve an agent for a competition held at a conference
Implement procedural content generation modules
Make a suggestion
Implement an AI technique from a paper
Published at a (game) AI conference or workshop, such as
You can also pick a paper and apply the technique it describes to another domain
I have some suggestions, but feel free to propose something else
Procedural Generation of Roads: Use A* and a custom cost function to determine where to place roads to connect two points in a 3D landscape
A semantic approach to patch-based procedural generation of urban road networks: Generate roads to build a city environment based on predefined parts ("patches")
Desire Path-Inspired Procedural Placement of Coins in a Platformer Game: Figure out where to place coins in a platformer game depending on where players are more likely to go.
Poetic sound similarity vectors using phonetic features: Convert words into vectors representing how they sound in order to be able to compute similarity, e.g. for computational poetry
Addressing the Elephant in the Room: Opinionated Virtual Characters: Build a social simulation in which virtual characters form and exchange opinions on topics.
Social Simulation for Social Justice: Build a social simulation with agents that discuss social issues, such as privilege and power dynamics
POMCP with Human Preferences in Settlers of Catan: Build AI agents for Settlers of Catan that use Partially Observable Monte Carlo Planning.
ProcDefense — A Game Framework for Procedural Player Skill Training: Build a small game with several parameters to tweak difficulty that can dynamically adapt to the player's performance.
Generating Levels That Teach Mechanics: Generate levels for Super Mario that support the player in learning mechanics
Tarot-based narrative generation: Use Tarot cards to generate stories.
TaleSpin: An Interactive Program that Writes Stories: Build a system that uses character goals to tell fables.
These two papers are slightly different: You can use them as the basis for your own ideas.
Glaive: A State-Space Narrative Planner Supporting Intentionality and Conflict: Generate stories using a planner.
Exhaustive and Semi-Exhaustive Procedural Content Generation: Make something that makes something (like levels, buildings, trees, ...)
Some of the game AI conferences also host competitions (like CIG) using different games
For the project you may choose to build or improve an AI agent according to the competition rules
If the timing works out, I would also encourage you to submit them
It is completely OK to use a previous winner (if it is open source) as the basis and improve it
Make sure to report your improvements and their effect on the agent's performance!
Another option for the project is to implement AI agents for a game of your choice
I would suggest a game that has an open source implementation (e.g. Quake 3), or that is not too difficult to implement (e.g. a board game)
The project consists of implementing the AI agents; if there is no existing implementation and you spend weeks building a nice user interface, that is on you
To build the AI agents you should choose the appropriate techniques (we can discuss your options), and report how well they work
Make something that makes something!
Generominos can serve as guidelines/ideas
Implement several different inputs, transformers and blocks
Assemble into a pipeline
If multiple students want to work on this: agree on a common API for the modules (but everyone creates their own modules), so you can mix and match
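If several students do agree on a shared module API, it could be as simple as "every module is a callable". The sketch below illustrates one possible convention in Python; all names (seed_numbers, scale, pipeline, etc.) are illustrative assumptions, not a fixed spec.

```python
# A hypothetical common API for PCG modules: inputs, transformers, and
# outputs are all plain callables, and a pipeline composes them in order.

def seed_numbers(n):
    """Input module: produce raw material (here, a list of ints)."""
    return list(range(n))

def scale(factor):
    """Transformer module: returns a function that rescales each value."""
    return lambda values: [v * factor for v in values]

def to_terrain_row(values):
    """Output module: turn numbers into a crude ASCII terrain profile."""
    return "".join("#" * min(v, 5) + "\n" for v in values)

def pipeline(*stages):
    """Compose modules left to right into a single generator."""
    def run(x):
        for stage in stages:
            x = stage(x)
        return x
    return run

generate = pipeline(seed_numbers, scale(2), to_terrain_row)
print(generate(4))
```

With a convention like this, any student's transformer can be dropped into any other student's pipeline.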
29/3: Behavior Trees, FSMs, etc.
Handling Complexity in the Halo 2 AI, by Damian Isla, presented at GDC 2005
Explains how Behavior Trees were used in Halo 2 to make editable, composable AI behaviors.
Industry paper!
5/4: Planning
Three States and a Plan: The A.I. of F.E.A.R., by Jeff Orkin, presented at GDC 2006
The AI in F.E.A.R. uses a Finite State Machine with only 3 states, and relies on planning for more complex behaviors.
Also an industry paper!
12/4: Narrative
Fast and Diverse Narrative Planning through Novelty Pruning, by Rachelyn Farrell and Stephen G. Ware, presented at AIIDE 2016
One goal of content generation is to create a large set of different content. For narrative, measuring "different" and generating a large variety of distinct stories is challenging. This paper presents one approach to diversifying planning-based solutions.
26/4: Intentionality
An Intentional AI for Hanabi, by Markus Eger, Chris Martens, and Marcela Alfaro Córdoba, presented at CIG 2017
By utilizing intentionality, AI agents can play games involving communication, such as Hanabi, well with human players.
Part of my dissertation work
3/5: Belief Modeling
Toward Characters Who Observe, Tell, Misremember, and Lie, by James Owen Ryan, Adam Summerville, Michael Mateas, and Noah Wardrip-Fruin, presented at EXAG 2015
Talk of the Town is a game in which knowledge gathering of non-player characters plays an important role. The paper describes how they simulate deception and forgetting in their virtual characters.
10/5: Game Trees and Uncertainty
Re-determinizing Information Set Monte Carlo Tree Search in Hanabi, by James Goodman, arXiv preprint 2019
Monte Carlo Tree Search is often used for games with hidden information (such as shuffled cards), and there are several extensions. This paper presents one such improvement with an application to Hanabi.
Description of the winner of the Hanabi competition at CIG 2018
17/5: Machine Learning
Build order optimization in Starcraft, by David Churchill and Michael Buro, presented at AIIDE 2011
Describes abstractions and heuristics that can be used to define and approach the build order optimization problem in StarCraft.
24/5: Neural Networks
Super Mario as a String: Platformer Level Generation Via LSTMs, by Adam Summerville and Michael Mateas, presented at DiGRA and FDG 2016
or
Mystical Tutor: A Magic: The Gathering Design Assistant via Denoising Sequence-to-Sequence Learning by Adam Summerville and Michael Mateas, presented at AIIDE 2016
Both use Long Short-Term Memory recurrent neural networks to generate content: one levels for Super Mario, the other Magic cards
31/5: Vector Models
dAIrector: Automatic Story Beat Generation through Knowledge Synthesis by Markus Eger and Kory Mathewson, presented at INT 2018
We describe a system that can be used as a director in an improv theater setting, generating prompts for the actors and giving them hints on how to proceed with a scene, based on the already presented prompts.
I'm probably biased, but I think this is a really neat application of vector models.
7/6: Reinforcement Learning
Improving Generalization Ability in a Puzzle Game Using Reinforcement Learning by Hiroya Oonishi and Hitoshi Iima, presented at CIG 2017
Shows work on a game called Geometry Friends, in which two different shapes have to collaborate to solve puzzles, and how reinforcement learning can be applied to the game to better solve unseen levels.
14/6: Clustering
A Recommender System for Hero Line-Ups in MOBA Games by Lucas Hanke and Luiz Chaimowicz, presented at AIIDE 2017
Presents a system that suggests which heroes players should pick in DOTA2 to increase their chances of winning, based on which heroes have already been picked.
21/6: Grammars
Procedural Modeling of Interconnected Structures by Lars Krecklau and Leif Kobbelt, presented at Computer Graphics Forum 2011
As we will see, Grammars can be used to generate 3D structures. In this paper, the authors describe a method to add connections to such structures, such as supports for a rollercoaster that should be placed such that they don't intersect other parts of the geometry.
28/6: ASP/CSP
A Logical Approach to Building Dungeons: Answer Set Programming for Hierarchical Procedural Content Generation in Roguelike Games by Anthony J. Smith and Joanna J. Bryson, presented at the 50th Anniversary Convention of the AISB, 2014
Describes how to generate dungeons using Answer Set Programming to ensure that the generated levels have the desired properties.
Pick your three (or more) favorite papers and send me an ordered list by next Friday
I will do my best to accommodate your preferences, but in case of a tie I will give preference to whoever sent their requests first
However: Even if you send your email right now that does not mean you are guaranteed to get your top pick
It helps if you know the games the papers are talking about, but is generally not necessary
If you have a paper you think is a better fit for some topic (or if you find one while preparing your presentation), please let me know
If you ask 20 AI researchers what AI is, you will get 25 different definitions
Some include Machine Learning, some say ML and AI are distinct things
Commonly, "the science of intelligent agents"
Behavior that can be applied in unknown situations
Or, as Douglas R. Hofstadter said "AI is whatever hasn't been done yet"
"Everything in AI is either representation or search"
Just as with AI, there is no agreed upon definition of "game"
Commonly: There are goals and rules
Bernard Suits (Philosopher): "Playing a game is the voluntary effort to overcome unnecessary obstacles"
But there are certainly games that don't have goals?
(Minecraft)
(Bandersnatch)
The reason this class is not called "game AI" is because I want to make it clear that we're also talking about things that may or may not be games (depending on your definition)
Another example would be computer-generated stories, or improv theater
"Games", classical or experimental, are an important part of Digital Entertainment, though
In a (classical) game, AI can be used for many different purposes
NPC behaviors are usually AI driven (with various complexity)
You can generate content (names, trees, buildings, stories, mechanics, entire games, ...) using AI
If you want to know what your players are doing you can use AI techniques for player analytics
We will mostly cover NPC behavior and procedural content generation in this class
There are four ghosts: Blinky, Pinky, Inky, and Clyde
They chase Pacman
But they all have different "personalities"
How?
What are they actually doing?
Some disclaimers:
The ghosts actually have three modes: scatter, chase and frightened
In scatter mode each of them tries to get to his own corner
In frightened mode, they move randomly and can be eaten
Scatter and chase modes alternate temporally; frightened mode is triggered by eating a power-up
We will only be talking about chase mode
Each ghost has a current position/cell and a target position/cell
Ghosts cannot turn around
Whenever a ghost comes to an intersection, it has to decide where to go
This decision is based on what their target is
Only Blinky is actually "chasing" Pacman
Pinky is trying to get to where Pacman "will be"
Inky does something weird: his target depends on Pacman's position and Blinky's position
Clyde tries to chase Pacman if he is far away, but if he gets too close he will try to go back to his own corner
How do the ghosts calculate the path to their target cell?
They don't!
Instead, before an intersection, they look at the next cell in each direction, and pick the direction that is closest to the target
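This greedy rule can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the arcade source; the function names and the squared-distance metric are my assumptions.

```python
# Greedy ghost steering: at an intersection, look one cell ahead in each
# allowed direction and pick the direction whose cell is closest to the
# target cell. No path is ever computed.

def choose_direction(position, facing, target, is_wall):
    """position/target are (x, y) cells; facing is the current direction."""
    directions = {"up": (0, -1), "left": (-1, 0), "down": (0, 1), "right": (1, 0)}
    reverse = {"up": "down", "down": "up", "left": "right", "right": "left"}
    best, best_dist = None, float("inf")
    for name, (dx, dy) in directions.items():
        if name == reverse[facing]:          # ghosts cannot turn around
            continue
        cell = (position[0] + dx, position[1] + dy)
        if is_wall(cell):
            continue
        # straight-line (squared) distance from the candidate cell to the target
        dist = (cell[0] - target[0]) ** 2 + (cell[1] - target[1]) ** 2
        if dist < best_dist:
            best, best_dist = name, dist
    return best

# A ghost at (5, 5) facing right, chasing a target at (10, 5), no walls:
print(choose_direction((5, 5), "right", (10, 5), lambda c: False))  # → "right"
```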
What is the problem with this approach?
The first rule of game AI (in the industry): It does not matter how smart/dumb your NPCs are, as long as the game plays well
We will discuss academic approaches to AI that may have their own goals
There are really two very different criteria to optimize for:
These two may not always align (imagine bots in a shooter that have perfect aim)
Making games requires a lot of effort
Apart from coding, there are also art assets, music, a story, etc. that need to be created
Some of these creation processes can be (partially) automated, or assisted: Procedural Content Generation (PCG)
Two problems
Procedural Content Generation is widely used in the industry
For example: Diablo 3 generates dungeons, weapons, even names for the enemies
Usually this is done by combining predefined patterns in a structured way
We will also discuss more experimental ways to generate content
(Keyforge)
As mentioned earlier, everything in AI is either representation or search
Logical formulas are often a very convenient representation
Why convenient?
We will use logic extensively!
I'm assuming you are all familiar with propositional logic?
a ∧ b, a ∨ b, ¬a
If we have some interpretation S of a and b, we can determine truth values
S ⊨ a ∧ b iff a and b are both true in S
So what is S?
S is an interpretation, which assigns truth values to every constant
Sometimes it is convenient to represent S as a set that contains everything that is true
Conversely, everything that is not in the set is false
You may have heard the term model: An interpretation under which a formula is satisfied
A formula is a tautology if all interpretations are models
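These definitions translate directly into code. A minimal sketch in Python, assuming a tiny formula encoding of my own (tuples for connectives, strings for constants); an interpretation is just the set of constants that are true:

```python
from itertools import chain, combinations

def holds(S, formula):
    """S ⊨ formula, for formulas like ("and", a, b), ("or", a, b),
    ("not", a), or a plain constant name."""
    if isinstance(formula, str):
        return formula in S           # set membership = truth
    op = formula[0]
    if op == "and":
        return holds(S, formula[1]) and holds(S, formula[2])
    if op == "or":
        return holds(S, formula[1]) or holds(S, formula[2])
    if op == "not":
        return not holds(S, formula[1])

def is_tautology(formula, constants):
    """A tautology is true under every interpretation over the constants."""
    subsets = chain.from_iterable(
        combinations(constants, r) for r in range(len(constants) + 1))
    return all(holds(set(S), formula) for S in subsets)

S = {"a"}                                             # a is true, b is false
print(holds(S, ("and", "a", "b")))                    # → False: S is not a model of a ∧ b
print(is_tautology(("or", "a", ("not", "a")), ["a"])) # → True: a ∨ ¬a
```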
Representing any non-trivial state in propositional logic is tedious (imagine chess: there has to be one variable for each possible location of each piece)
We will use Predicate Logic instead, which represents the world as objects and relations between them
Objects: constants
Relations: Sets of n-tuples of objects, called predicates (n is the arity of the predicate)
(Functors: Assignments of n-tuples of objects to objects)
We also have sets of objects, and quantifiers ∀,∃
human(socrates)
∀h: human(h) → mortal(h)
friends(socrates, plato)
human(h) is a unary relation, i.e. a set of 1-tuples/single elements. It is often more convenient to write:
∀h ∈ human: mortal(h)
Using set operators, we could also write (socrates, plato) ∈ friends
In fact, that's how our interpretations work.
Technically, an interpretation has to assign values to all our constants, since Socrates and Plato could be the same person in an interpretation! We will typically use the unique-names assumption to avoid such problems.
Then, the interpretation has to contain definitions for every predicate, in the form of a set of tuples.
The closed world assumption states that everything we don't know to be true (i.e. everything not in the set representing a predicate) is in fact false.
Using these assumptions, we can represent our interpretation as a collection of sets, one for each predicate.
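Concretely, such an interpretation is one set of tuples per predicate. A sketch in Python using the Socrates example (the helper name `query` is mine):

```python
# Under the unique-names and closed-world assumptions, an interpretation
# is a collection of sets of tuples, one per predicate.
interpretation = {
    "human":   {("socrates",), ("plato",)},
    "friends": {("socrates", "plato")},
}

def query(pred, *args):
    """True iff the tuple is in the predicate's set; closed world: absent means false."""
    return args in interpretation[pred]

print(query("human", "socrates"))             # → True
print(query("friends", "plato", "socrates"))  # → False (not listed, hence false)

# Quantifiers become loops over the sets, e.g. ∀h ∈ human: mortal(h)
mortal = {("socrates",), ("plato",)}
print(all(h in mortal for h in interpretation["human"]))  # → True
```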
We often want to represent a world that is changing, rather than just static facts
Simple solution: Let's use constants representing (discrete) times, and increase the arity of each predicate by one to account for time
For example, human(socrates) becomes human(socrates, 0), or ∀t ∈ T: human(socrates, t)
An action is something that changes some truth values over time, often represented as an implication, like alive(socrates,400BCE)→¬alive(socrates,399BCE)
This can be generalized with quantifiers and special predicates that indicate when actions occur, if needed
Consider Fred, a turkey, and a gun
At time 0, Fred is alive and the gun is unloaded
We load the gun, so that at time 1 the gun is loaded
Then we wait
At time 2 we shoot, such that at time 3 Fred is dead
alive(Fred, 0)
¬loaded(Gun, 0)
¬loaded(Gun, 0) → loaded(Gun, 1)
loaded(Gun, 2) → ¬alive(Fred, 3)
What about alive(Fred, 1)?
What about loaded(Gun, 2)?
Idea: Let's say "the least" number of things change each time step!
The turkey could still die at time step 1 and stay dead ...
The Yale Shooting Problem is an illustration of the Frame Problem: Everything that is not changed by an action (the "frame") should stay the same
But how would we even write this in logical formulas?
There are several solutions!
One approach: Frame axioms. For each action state what the action leaves unchanged. This may require a lot of extra formulas.
Another approach: Distinguish states and actions, and represent changes as a transition system
We start with a state s0 = {alive(Fred), ¬loaded(Gun)}, which we can use for logical queries such as s0 ⊨ alive(Fred)
We have an action load(Gun) which turns a state into another state: s1 = {alive(Fred), loaded(Gun)}
Now we can query s1⊨alive(Fred)
This approach forms the basis for many of the techniques we are going to discuss, especially the planning-based ones
However, it also has drawbacks: If our states only represent individual time steps, we can't (easily) query things like "how long was Fred alive"
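The transition-system view of the turkey scenario can be sketched in a few lines of Python (my own encoding: a state is a frozenset of true facts, each action maps a state to a new state and leaves everything else untouched, which is exactly how this approach sidesteps the frame problem):

```python
# States are immutable sets of facts; actions are functions on states.
def load(state):
    return state | {"loaded(Gun)"}

def wait(state):
    return state                      # nothing changes

def shoot(state):
    if "loaded(Gun)" in state:
        return state - {"alive(Fred)", "loaded(Gun)"}
    return state

s0 = frozenset({"alive(Fred)"})       # alive(Fred), ¬loaded(Gun)
s1 = load(s0)
s2 = wait(s1)
s3 = shoot(s2)
print("alive(Fred)" in s1)            # → True: loading does not kill the turkey
print("alive(Fred)" in s3)            # → False
```

Note the drawback mentioned above: s3 carries no history, so "how long was Fred alive" is not answerable from the state alone.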
Come up with project suggestions, deadline for the proposal: 31/3 AoE. I encourage you to discuss your ideas with me beforehand.
Decide which paper you want to present, and send me an email with your top three (or more) choices by next Friday 22/3.
Sign up for Piazza: http://piazza.com/ucr.ac.cr/spring2019/pf3341