This is the rough dump of my notes from Richard Evans’ AIIDE 2009 invited talk on the AI challenges they faced in developing The Sims 3. Some of it was familiar to me as being exactly what he presented as part of our joint lecture at the GDC AI Summit in 2009. Other portions of it were new.
Specifically, I enjoyed seeing more about how they handled some of the LOD options. For example, rather than parsing all the available actions, a sim would decide what lot to go to, then what sim to interact with, and then how to interact. Therefore, the branching factor was significantly more manageable.
Another way they dealt with LOD was in the non-played Sims. Rather than modeling exactly what they were doing when (while off-screen), they made some general assumptions about their need for food, rest, taking a leak, etc. These were modeled as “auto-satisfy” functions. For example, if you met a sim close to dinner time, he would likely be hungry. If you met him a little later, he would present as being full.
Additionally, as you will see below, the entire town had underlying simulation mechanics that balanced how many people were dying and being born, what gender they were (on average), and even where they were moving to and from. They modeled much of this with a very simple geometric interface early on so that they could test their mathematical models. Same with the simple behaviors. He showed video demos of these models in action. This also allowed them to speed up time to ridiculous levels and let the sim run overnight to test for situations that would tip the sim out of balance. Lots of fun!
He also discussed how the behavior selection was done. This was important to me in that he showed how they used some of the same techniques that I talk about in my book. Specifically, he uses a utility-based method and selects from the behaviors using weighted randoms of the top n selections. Excellent work, sir!
The following are my raw notes.
AI Challenges in Sims 3
He mentioned the website dedicated to Alice and Kev. The author simply sat back and watched the Sims do their autonomous behavior and wrote about it.
1. Hierarchical Planning
2. Commodity-Interaction Maps
3. Auto-satisfy curves
4. Story progression
Instead of nesting decisions about which act to perform on which person in which lot, you chose a lot first, then chose a person, then chose an action.
O(P + Q + N) instead of O(P * Q * N)
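To make that concrete, here's a minimal sketch (my own illustration, not their actual code) of how picking a lot, then a sim, then an interaction keeps the evaluation count additive rather than multiplicative:

```python
def choose_hierarchically(lots, sims_in_lot, interactions, score):
    """Pick a lot, then a sim in that lot, then an interaction.

    Evaluates roughly P + Q + N candidates instead of scoring every
    (lot, sim, interaction) triple, which would be P * Q * N.
    `score` is any callable that rates a candidate; using one scorer
    for all three levels is a simplification for the sketch.
    """
    lot = max(lots, key=score)                 # P evaluations
    sim = max(sims_in_lot[lot], key=score)     # Q evaluations
    act = max(interactions, key=score)         # N evaluations
    return lot, sim, act
```

The win is purely in the branching factor: each level commits to one choice before the next level is even scored.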
Data-driven approach so that the venues populate appropriately (e.g. restaurants)
If you are full, don’t even consider eating as a possible selection of what to do.
Auto-satisfy curves for LOD. That way you don’t have to simulate the off-screen Sims. Assume that they have eaten at the right times, etc.
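Here's a toy sketch of what an auto-satisfy curve might look like. The meal times and decay rate are my own invented numbers, not values from the talk:

```python
def auto_satisfy_hunger(hour):
    """Assumed fullness for an off-screen sim (0.0 = starving, 1.0 = full).

    Instead of simulating meals, assume the sim ate at the usual times:
    fullness peaks right after each meal and decays until the next one.
    The meal hours (8, 13, 19) and 8-hour decay are illustrative only.
    """
    meals = [8, 13, 19]
    hours_since_meal = min((hour - m) % 24 for m in meals)
    return max(0.0, 1.0 - hours_since_meal / 8.0)  # linear decay over ~8 hours
```

Query the curve only when the sim comes on screen; nothing is simulated in between.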
Other Sims need to progress through life the same way that your Sim does. Age, marriage, children, career, move, etc. Long-term life-actions are simulated at LOD.
The town has various meta-level desires: gender balance (so that we don’t end up with all male or all female Sims) and an overall employment rate (some people will be unemployed… it peaks at ~80–90%).
High-level prototype showing the major life actions (not smaller actions). Simulating the town without simulating the people.
Making Sims look after themselves
Randomly from the n highest-scoring actions
Randomly using the score distribution as the probability distribution (weighted randoms)
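A minimal sketch of that selection scheme, using Python's `random.choices` for the weighted pick (my own illustration, not Richard's code):

```python
import random

def pick_action(scored_actions, n=3, rng=random):
    """Utility-based selection: weighted random among the top-n scorers.

    scored_actions: list of (action, score) pairs with non-negative scores.
    The scores of the n best actions become the probability weights, so
    high-utility actions are favored without the choice being deterministic.
    """
    top = sorted(scored_actions, key=lambda pair: pair[1], reverse=True)[:n]
    actions, weights = zip(*top)
    return rng.choices(actions, weights=weights, k=1)[0]
```

Truncating to the top n keeps obviously bad actions from ever firing, while the weighted draw keeps the behavior from looking robotic.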
Personality and Traits and Motives
Same that he talked about at AI Summit
Traits -> Actions = massively data-driven system to minimize hard-coded systems.
Kant’s categorical imperatives?!?
Emily Short: “The conversation is an end in itself.”
Take-Home Actionable Items
Take the time to make good in-game visualization tools.
Prove out all simulation ideas using prototypes as soon as possible.
Richard shows excellent 2-D prototype that runs sims without the realistic world!
The following are my rough notes from Michael Mateas’ invited talk. He continued the “Photoshop of AI” debate that was started by Chris Hecker at GDC 2008 and continued at a panel at the 2009 AI Summit at GDC. To sum up what he presented, he basically said it was a non-issue because it was based on a number of false premises. Here are my notes:
Recaps Chris Hecker’s 2008 talk on the subject. Talks about what the panel at the 2009 AI Summit said on the subject.
Photoshop of AI is a mirage:
• Grossly underestimates the size of the AI problem.
• Overestimates the intuitiveness of visual art production.
• Doesn’t take seriously the property of conditional execution.
DOES highlight the authoring problem in AI.
False premise that graphics and AI are set up as equals.
The problem of representing 3D space is the same from game to game: Quake to Prince of Persia to BioShock.
For AI NPCs it is NOT the same problem, e.g. Thief compared to Madden. Many examples of how different games pose different problems.
Representing graphics via the texture mapped triangle is infinitesimally small compared to representing AI via code.
Graphics represents the problem of renaissance perspective. Invents a specific style of representation.
AI represents almost everything else. AI is NP-Hard.
Book: Noah Wardrip-Fruin – Expressive Processing
Hecker’s desiderata for style DOFs (degrees of freedom)
Photoshop is not really “intuitive”. 17,133 results on Amazon for books about how to use Photoshop… 1,602 since January 2009. So much for intuitive.
Photoshop builds on millennia of practice in the visual arts.
Even if Photoshop is hard to learn, it may still be “intuitive” for trained visual artists.
AI Designer is a new type of artist. We have no pre-existing practice for us to be “intuitive” relative to. “A new kind of artist working in a fully computational medium, focused on the aesthetics of behavior.”
Essence of AI is that it is a conditional process executing over time… not static. The style of such a process lies in its conditional decision making. Different in kind from the static nature of the triangle/texture decomposition.
Spore Galactic Adventures… people could create creatures but not quests. Creating a creature gave immediate feedback, but for a quest… the only way you can test it is to play it over and over because it is procedural.
Static artifacts support immediate global feedback. What would that look like for AI? You can’t get the static visualization feedback from AI.
Birth of AI was declaration that computer is a general symbol manipulation machine (not just calculator).
Is it data-driven? No
It must be scripted. No.
WTF? (Fairy dust!)
People think either in terms of floats or in imperative code. Instead, Façade is based on blendable symbolic behaviors.
Instead of data-driven AI, think in terms of knowledge-driven AI. Classic AI approach is to create a knowledge-representation language that solves a specific problem you have, and then write the reasoning system (interpreter) for that language. Starts to sound like a structure vs. style decomposition.
Related to scripting, but far more general. Every scripting language that he has seen has been imperative – like C.
Agree with Chris that game AI must be authorable… BUT, there is no:
• Universal solution
• Magic numbers-and-flags-only representation
• Pre-existing artistic tradition we can leverage
How do you know if what you are doing is any good?
Leverage = (Quality * Variability) / Effort
Sims 3 = rule-based language with custom predicates and custom actions for designers to write the rules for world effects and state changes. E.g. “if this social practice is carried out in this situation, what would happen?” Custom KR model (knowledge-driven AI). Not the Photoshop of AI in Chris’ sense because it is not universal. It is domain-specific.
My rough notes from Paul Tozour’s AIIDE 2009 presentation, Game Design: An AI Perspective
Game design from the perspective of AI – the opposite of Will Wright’s AIIDE lecture.
How can AI contribute to the advancement of game design? Using game AI as formal modeling and analysis tools. Not as modeling NPCs.
You stop playing a good game when it stops using enough of your mind to be engaging.
Grinder = repetitive = lack of intellectual diversity.
Zen koan = “Shock the listener out of their established thought patterns.” Makes them think about things in new ways.
“We are building structures inside the player’s mind.”
40 years of game design and we still have no universally accepted standards for game design. No template. No vernacular.
Use AI for profiling and testing game mechanics.
Computational Equivalence – what are the ways to show some sort of similarity between how the human mind functions and how game AI functions.
When we play a game at a high skill level, we use discrete mental structures for each type of task. In our brains, we combine planners, HFSMs, BTs, etc.
Example: Constructing a BT out of player’s play patterns.
What does doing this get you? What can we identify?
How do we engage more of the player’s mind?
Player must do something similar to pathfinding:
Cognitive challenge of navigating the environment. Make the environment move (platformer), parallel worlds, dimensionality, reverse the flow of time (Braid), change the topology of the world (Portal).
Planning in a dynamic world in tactics/strategy. Player must do something similar to influence maps. Change the topology of the world to change the way the player must approach the influence map. A star-system map puts the influence map over a network topology rather than over a 2D grid.
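A quick sketch of an influence map propagated over an arbitrary graph topology rather than a 2D grid (my own toy illustration; the node names and decay rate are invented):

```python
def influence_map(graph, sources, decay=0.5):
    """Influence map over an arbitrary topology (e.g. a star-system graph).

    graph: dict of node -> list of neighbor nodes.
    sources: dict of node -> initial influence strength.
    Influence spreads outward, shrinking by `decay` per hop; each node
    keeps the strongest influence that reaches it.
    """
    influence = dict(sources)
    frontier = list(sources.items())
    while frontier:
        node, strength = frontier.pop()
        for neighbor in graph[node]:
            spread = strength * decay
            if spread > influence.get(neighbor, 0.0):
                influence[neighbor] = spread
                frontier.append((neighbor, spread))
    return influence
```

The same propagation code works whether the "neighbors" come from grid cells or jump lanes between star systems, which is exactly the point of the note above.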
Using inputs to make decisions.
Players don’t use a specific decision-tree structure, but rather a list of rules (i.e. a rule-based system or expert system). ID3 and C4.5 can take raw data and construct a decision tree.
“Select Target” is great example of classifier. What are all the inputs that we would use to determine which target we would attack when we switch from multiple targets to single target in the WoW BT example above.
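Here's a toy version of such a "select target" rule list, written the way a player might verbalize it (the inputs hp, is_healer, and distance are my own invented attributes, not from the talk):

```python
def select_target(targets):
    """Toy ordered rule list for target selection (a hand-rolled classifier).

    Each rule is checked in priority order, mimicking how a player's
    "which enemy do I attack?" decision can be expressed as a rule-based
    system rather than an explicit decision tree. Ties within a rule are
    broken by distance.
    """
    rules = [
        lambda t: t["is_healer"],      # kill enemy healers first
        lambda t: t["hp"] < 0.25,      # then finish off weakened foes
        lambda t: t["distance"] < 5.0, # then anything already in reach
    ]
    for predicate in rules:
        matches = [t for t in targets if predicate(t)]
        if matches:
            return min(matches, key=lambda t: t["distance"])
    return min(targets, key=lambda t: t["distance"])  # fallback: nearest
```

Feed a tool like ID3/C4.5 logs of (inputs, chosen target) pairs and it can induce an equivalent tree automatically.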
“Emergent gameplay refers to complex situations in a video game that emerge from the interaction of relatively simple game mechanics.” – Wikipedia
The emergent possibilities can be overwhelming for the designers, who must test all the possible uses of the verbs.
“If you haven’t tested it, you have no way of knowing whether it will cause fun or frustration.”
So why is emergent gameplay fun? Because it is all about planning – which is a cognitive challenge that works our mind. It is just like the various forms of path planning – just not through the geometry of the world, but rather through the possibility space.
Use a planner to simulate all the possible ways of chaining verbs together. What can the player possibly discover? By chaining the verbs together in a planner, we can let the AI simulate the solving of puzzles.
Just like game tree, but drawn as a graph because there are multiple paths to any given game state.
Don’t enumerate the entire game tree. Do it by sections first.
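A small sketch of that section-by-section sweep: a depth-bounded breadth-first search over the state graph that the verb chains induce (my own toy illustration; the verbs and states are invented):

```python
from collections import deque

def reachable_states(start, verbs, max_depth=4):
    """Breadth-first sweep of the state graph a planner would explore.

    verbs: dict of name -> function(state) -> new state, or None when
    the verb doesn't apply. States reached by different verb chains
    merge in the `seen` set, so this is a graph, not a tree. max_depth
    bounds the sweep to one 'section' of the game at a time instead of
    enumerating the entire game tree.
    """
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        state, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for verb in verbs.values():
            nxt = verb(state)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen
```

Anything in the returned set is something a player could discover within that many verb applications; states you never intended to be reachable show up here too.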
Competitive planning (e.g. TF2). Each team is doing their own plan. The cognitive challenge is not just coming up with your own (team’s) plan but rather matching your plan to that of the opposing team as well. What is the zone of control between the two plans? It’s similar to the star system topology. Building an influence map over the state space topology.
The engagement of TF2 is based on the complexity of the two teams struggling against each other in that massive, dynamic state space.
If we can build structures in the player’s mind, we can use our AI structures to analyze them.
What kind of cognitive challenges can we create? AI gives us answers around which we can design our experiment.
Think about how you make decisions. What is the structure? What are the weights? How would you design the AI for what you do in your everyday life? What parts of your mind are NOT being used?
Paul’s presentation was interesting in that it was using our knowledge and understanding of classic game AI algorithms and techniques to expose pros and cons of game design. Rather than talking about how to construct behaviors of game characters, he was using game AI to construct an analogue of the behaviors of a game player.
While the knowledge that this exposes is important, I enjoyed his talk for another reason. I felt that the exercise of putting one’s own actions into the framework of AI is something that more designers and programmers need to do. In fact, it was my excitement in this area that led Paul to add his closing entreaty to the audience… “analyze your own decisions – how would you design an AI algorithm for what you just did or are about to do.” I think that his example (the BT of his WoW play) was an excellent example of walking through this process.