Archive for the ‘Technical’ Category

Both My GDC AI Summit Lectures on Utility Theory FREE on GDC Vault!

Wednesday, February 20th, 2013

It turns out that both of the GDC AI Summit lectures that I did with Kevin Dill are up on the GDC Vault for free now! Thanks to the folks at GDC for doing this — it’s a great honor to have my lectures selected as being among the free ones!

The links below will take you to the respective videos on the GDC Vault. If you are planning on attending the 2013 AI Summit at GDC, it will be helpful to watch these first. My lecture this year presents a new structure for building utility-based systems using the mathematical concepts expressed in these prior lectures.


From 2010:


Improving AI Decision Modeling Through Utility Theory
Dave Mark, Intrinsic Algorithm
Kevin Dill, Lockheed Martin

 

Overview:
The ‘if/then’ statement has been the workhorse of decision modeling since before digital computing existed. Unfortunately, the harsh transition from yes to no often expresses itself through behavior in ways that are just as harsh. Utility theory has roots in areas such as psychology, economics, sociology, and classical game theory. By applying the science of utility theory with algorithmic techniques such as response curves, population distributions, and weighted randoms, we can improve the modeling of the underlying brain of our agents, broaden the potential decision space, and even manage edge cases that other decision systems stumble over.


From 2012:

Embracing the Dark Art of Mathematical Modeling in AI
Dave Mark, Intrinsic Algorithm
Kevin Dill, Lockheed Martin

Overview:
Utility-based AI is a widely-used approach, particularly for games with deeper or more complex behavior. While new users may find utility functions complex and intimidating, experienced users see them as a natural and comfortable way to express behavior. In a follow-up to their 2010 lecture, Kevin Dill and Dave Mark will show how simple problems can be laid out quickly and easily using common design patterns. Additionally, they will show how complex situations can make use of utility functions to express more nuanced behavior. They will then walk through real-world examples, showing how they would be expressed in a utility-based architecture.

AI Architectures: A Culinary Guide (GDMag Article)

Thursday, November 1st, 2012

(This is a slightly edited copy of the feature article I wrote for Game Developer Magazine that appeared in their August, 2012 issue.)

A question that seems to come up regularly from students, designers, and even from veteran programmers is “what AI architecture should I use?” It’s a query that is sprinkled across internet message boards, comes up at GDC dinners, and I’m sure is a frequent topic in pre-production meetings at game studios – indie and AAA alike. As a game AI consultant, I certainly field it from my clients on a regular basis. More often than not, the less-than-satisfying answer to this question is a resounding, “It depends.” What follows resembles something more along the lines of an interrogation than an answer.

It puts me in mind of the days long ago when I was a waiter. (I suppose they call them “servers” now.) People would often ask me, “what do you recommend?” or the even vaguer, “what’s good here?” For those who haven’t had the opportunity to work in the restaurant business, you may not realize that this can be a horribly uncomfortable question to be presented with. After all, most people are well aware that tastes differ… so why would your server be able to ascertain what it is you have a proverbial hankerin’ for? The way out of this situation would be to ask return questions. “Well, how hungry are you? Are you in a hurry? What are you in the mood for? Steak? Chicken? Allergic to peanuts? Oh… you’re vegan? And you need gluten-free, eh? Well that certainly redirects things.” The result of this Q&A is not that I tell them what they want, but rather I help them discover for themselves what it is they really want.

Much the same falls out of the conversations surrounding AI architectures. After all, there isn’t necessarily one best way of doing things. Often, as I mentioned before, it simply depends. What is it you are trying to do? What are your technical limitations? What is your experience? What is your time frame? How much authorial control do your designers need? Really, it is something that you as the developer need to ascertain for yourself. As a waiter, I can only help you ask the right questions of yourself and point you in the right direction once we have the answers.

Unfortunately, many of the available books, articles, and web sites only tell you how the different architectures work. They often don’t tell you the pros and cons of each. That often leads to misguided enthusiasts proclaiming with exuberant confidence, “I will create my [incredible AI] with a [not even remotely appropriate technology]!!” Don’t get me wrong… those resources are often very good at telling you how to work on a technical level with the different methods. What is missing is why you would do so.

At the 2010 GDC AI Summit, we presented a panel-based session entitled “Deciding on an AI Architecture: Which Tool for the Job?” The premise of the panel pitted four different technologies against each other to see which of them was most appropriate to use in four different game AI situations. Each architecture was represented by a person whose job it was to argue for their own and against the others. (Spoiler alert: the 4 highly-contrived game situations were specifically chosen so that each architecture was shown to have a strong point.) Even with the pre-written outcomes designed to steer toward a “best” answer, the three “losing” panelists managed to make the point that their method could work but simply was not the best tool for that job.

But that’s part of the problem, isn’t it? Certainly most AI architectures could manage to muddle through most any AI situation. However, that often leads people to a false sense of security. AI is often complicated enough that some amount of hair-pulling is expected. This often obscures the fact that an inappropriate—or, shall we say, “less than optimal”—method is making things more difficult than it should be. All of this brings us back to our initial question—“what AI architecture should I use?” And, once again, I respond, “it depends.”

Same Stuff—Different Shapes

It’s the same stuff… just different shapes.

If I may tap once again into my food metaphor, selecting an AI architecture is often like selecting Mexican food—for most intents and purposes, it’s the same stuff… just different shapes. Allow me to expound. The contents of most Mexican food can be reduced to some combination of a fairly succinct list of possibilities: tomato, cheese, beans, lettuce, onions… you know the drill. Additionally, the non-vegan among us often elect to include some form of meat (real or faux). Think of these ingredients as the behavior content of our AI. This is the stuff that gives all the flavor and “substance” to the dish. But what about the outside? Often we order our Mexican food in terms of its shape or form—a taco… a burrito… etc. Why do we think in terms of the outside form when it is the stuff that is on the inside that is pretty much the point of the order in the first place?

The answer lies in the fact that the outside—that is, the shell or wrapper of some sort—is merely a content delivery mechanism. For the most part, it exists only as a way of keeping those internals together long enough for consumption. In this way, these “delivery mechanisms” compare to AI architectures. They only serve to package and deliver the tasty behavioral content that we want our players to experience. But why are there so many different forms for such a utilitarian function? And which form do I use? Well, it depends.

The same holds true when we talk about our AI systems. We often speak in terms of the mechanism rather than the content. We are writing a finite state machine, a behavior tree, or a planner, just like we are filling a taco, a burrito, or an enchilada. Once we have decided on that packaging, we can put any (or all!) of the tasty stuff into it that we want to. However… there are pros and cons to each.

The Tostada—Just a Pile of Stuff

Starting simple, let’s look at the tostada. It is about as simple a delivery platform as you can use for our Mexican food. Everything just sits on top of it right where you can see it. You can add and remove ingredients with ease. Of course, you are somewhat limited on how much you can put on since eventually it will start to fall off the sides. Although it is a little difficult to grab initially, if you pick it up correctly, everything stays put. However, it’s not a terribly stable platform. If you tip it in the slightest, you run the risk of sending things tumbling. What’s more, as soon as you start biting into it, you run the risk of having the whole thing break in unpredictable ways, at which point your entire pile of content falls apart. For all intents and purposes, the tostada really isn’t a container at all once you start using it—it’s just a hard flat object that you pile stuff on top of and hope it stays put.

You never know when the entire platform is going to simply fall apart.

From an AI standpoint, the equivalent isn’t even really an architecture. This would be similar to simply adding rules here or there around our code that change the direction of things in a fairly haphazard manner. Obviously, the problem with that is, like the tostada, you can only get so much content before things become unstable. It’s a bit unwieldy as well. Most importantly, every time you take a bite of your content, you never know when the entire platform is going to simply fall apart.

Adding a Little Structure—The State Machine Taco

Our tostada suffered from not having enough structure to hold the content stable before it started to fall off—or fall apart entirely. However, by just being a little more organized about how we arrange things, we can make sure our content is a lot more self-contained. In the Mexican food world, we can simply bend our tostada shell into a curve and, behold, it becomes a taco! The result is that we can not only hold a lot more content, but we can also pick it up, manipulate it, and move it around to where we can get some good use out of it.

Adding a little bit of structure to a bunch of otherwise disjointed rules maps over somewhat to the most basic of AI architectures—the finite state machine (FSM). The most basic part of a FSM is a state. That is, an AI agent is doing or being something at a given point in time. It is said to be “in” a state. Theoretically, an agent can only be in one state at a time. (This is only partially correct because more advanced agents can run multiple FSMs in parallel… never mind that for now. Suffice to say, each FSM can only be in one state.)

The reason this organizes the agent behavior better is because everything the agent needs to know about what it is doing is contained in the code for the state that it is in. The animations it needs to play to act out a certain state, for example, are listed in the body of that state. The other part of the state machine is the logic for what to do next. This may involve switching to another state or even simply continuing to stay in the current one.

The transition logic in any given state may be as simple or as complex as needed. For example, it may simply involve a countdown timer that says to switch to a new state after a designated amount of time. It may be a simple random chance that a new state should be entered. For example, State A might say that, every time we check, there is a 10% chance of transitioning to State B. We could even elect to make the new state that we will transition to a result of a random selection as well—say a 1/3 chance of State B and a 2/3 chance of State C.

More commonly, state machines employ elaborate trigger mechanisms that involve the game logic and situation. For instance our “guard” state may have the logic, “if [the player enters the room] and [is holding the Taco of Power] and [I have the Salsa of Smiting], then attack the player” at which point my state changes from “guard” to “attack”. Note the three individual criteria in the statement. We could certainly have a different statement that says, “if [the player enters the room] and [is holding the Taco of Power] and [I DO NOT have the Salsa of Smiting], then flee.” Obviously, the result of this is that I would transition from “guard” to “flee” instead.
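To make that concrete, here is a minimal sketch (in C++) of the transition logic that the “guard” state might own. The state names and the world-query functions are hypothetical stand-ins for whatever the game engine would actually expose:

// Minimal sketch of per-state transition logic in a finite state machine.
// The states and world queries are illustrative, not from any shipped game.
enum class State { Guard, Attack, Flee };

bool PlayerInRoom()         { return false; }   // placeholder world queries --
bool PlayerHasTacoOfPower() { return false; }   // a real game would ask the engine
bool IHaveSalsaOfSmiting()  { return false; }

// The Guard state owns its own logic for when, if, and what to do next.
State UpdateGuardState()
{
    if (PlayerInRoom() && PlayerHasTacoOfPower())
    {
        return IHaveSalsaOfSmiting() ? State::Attack   // armed with salsa: fight
                                     : State::Flee;    // outmatched: run
    }
    return State::Guard;                               // nothing changed: stay put
}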

So each state has the code for what to do while in that state and, more notably, when, if, and what to do next. While some of the criteria can access some of the same external checks, in the end each state has its own set of transition logic that is used solely for that state. Unfortunately, this comes with some drawbacks.

Figure 1 – As we add states to a finite state machine, the number of transitions grows rapidly.

First, as the number of states increases, the number of potential transitions increases as well—at an alarming rate. If you assume for the moment that any given state could potentially transition to any of the other states, the number of transitions increases fairly quickly. Specifically, the number of transitions would be the [number of states] × ([number of states] – 1). In Figure 1, there are 4 states each of which can transition to 3 others for a total of 12 transitions. If we were to add a 5th state, this would increase to 20 transitions. 6 states would merit 30, etc. When you consider that games could potentially have dozens of states transitioning back and forth, you begin to appreciate the complexity.

What really drives that issue home, however, is the realization of the workload that is involved in adding a new state to the mix. In order to have that state accessible, you have to go and touch every single other state that could potentially transition to it. Looking back at Figure 1, if we were to add a State E, we would have to edit states A-D to add the transitions to E. Editing a state’s logic invokes the same problem. You have to remember what other states may be involved and revisit each one.

And the bigger it gets, the more opportunity for disaster.

Additionally, any logic that would be involved in that transition must also be interworked into the other state-specific logic that may already be there. With the sheer numbers of states in which to put the transition logic and the possible complexity of the integration into each one, we realize that our FSM taco suffers from some of the same fragility of the ad hoc tostada we mentioned earlier. Sure, because of its shape, we can pile a little more on and even handle it a little better. One bite, however, could shatter the shell and drop everything into our lap. And the bigger it gets, the more opportunity for disaster.

A Softer Approach—The Behavior Tree

So the problem with the taco is that it can hold a fair bit of content (more than the tostada anyway), but is a bit brittle. It is not the shape of the taco that is at fault as much as it is a result of using the hard shell. If only we could hold the same content delivered in much the same manner, but have our container be less prone to shattering when we put some pressure on it. The answer, of course, is the soft taco. And the analogue in the AI architecture could very well be the behavior tree.

At this point, it is useful to point out the difference between an action and a decision. In the FSM above, our agents were in one state at a time—that is, they were “doing something” at any given moment (even if that something was “doing nothing”). Inside each state was decision logic that told them if they should change to something else and, in fact, what they should change to. That logic often has very little to do with the state that it is contained in and more to do with what is going on outside the state or even outside the agent itself. For example, if I hear a gunshot, it really doesn’t matter what I’m doing at the time—I’m going to flinch, duck for cover, wet myself, or any number of other appropriate responses. Therefore, why would I need to have the decision logic for “React to Gunshot” in each and every other state I could have been in at the time?

Figure 2 – In a behavior tree, the decision logic is separate from the actual state code.

This is the strength of the behavior tree. It separates the states from the decision logic. Both still exist in the AI code, but they are not arranged so that the decision logic is in the actual state code. Instead, the decision logic is removed to a stand-alone architecture (Figure 2). This allows it to run by itself—either continuously or as needed—where it selects what state the agent should be in. All the state code is responsible for is doing things that are specific to that state such as animating, changing values in the world, etc.

The main advantage to this is that all the decision logic is in a single place. We can make it as complicated as we need to without worrying about how to keep it all synchronized between different states. If we add a new behavior, we add the code to call it in one place rather than having to revisit all of the existing states. If we need to edit the transition logic for a particular behavior, we can edit it in one place rather than many.

Figure 3 — A simple behavior tree. At the moment the agent has decided to do a ranged attack.

Another advantage of behavior trees is that there is a far more formal method of building behaviors. Through a collection of tools, templates, and structures, very expressive behaviors can be written—even sequencing behaviors together that are meant to go together (Figure 3). This is one of the reasons that behavior trees have become one of the more “go-to” AI architectures in games, having been notably used in titles ranging from Halo 2 and 3 to Spore.

A detailed explanation of what makes behavior trees work, how they are organized, and how the code is written is beyond the scope of this article. Suffice to say, however, that they are far less prone to breaking their shell and spilling their contents all over your lap. Because the risk of breaking is far less, and the structure is so much more organized, you can also pack in a lot more behavioral content.
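As a bare-bones illustration of that separation (and emphatically not the implementation from any of the titles mentioned above), the two workhorse composite nodes of a behavior tree might be sketched in C++ like this; the leaf “state code” is simply whatever function an Action node wraps:

// Minimal behavior tree sketch: composite nodes hold the decision logic,
// leaf nodes do the actual work. Node and type names are illustrative only.
#include <functional>
#include <memory>
#include <vector>

enum class Status { Success, Failure };

struct Node { virtual ~Node() = default; virtual Status Tick() = 0; };

// Leaf: wraps a piece of "state code" (animate, attack, play a bark, etc.).
struct Action : Node {
    std::function<Status()> fn;
    explicit Action(std::function<Status()> f) : fn(std::move(f)) {}
    Status Tick() override { return fn(); }
};

// Selector: try children in order until one succeeds.
struct Selector : Node {
    std::vector<std::unique_ptr<Node>> children;
    Status Tick() override {
        for (auto& c : children)
            if (c->Tick() == Status::Success) return Status::Success;
        return Status::Failure;
    }
};

// Sequence: run children in order; fail as soon as one fails.
struct Sequence : Node {
    std::vector<std::unique_ptr<Node>> children;
    Status Tick() override {
        for (auto& c : children)
            if (c->Tick() == Status::Failure) return Status::Failure;
        return Status::Success;
    }
};

Something like Figure 3’s ranged attack would then just be a Sequence hanging off a root Selector, and adding a new behavior means adding one node in one place rather than touching every state.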

For an excellent primer on behavior trees, see Bjoern Knafla’s Introduction to Behavior Trees on #AltDevBlogADay

A Hybrid Taco—The Hierarchical Finite State Machine

Figure 4 – In a hierarchical finite state machine, some states contain other related states making the organization more manageable.

A brief note before we leave the land of tacos behind. One of the advantages of the behavior tree—namely the tree-like structure—is sometimes applied to the finite state machine. In the hierarchical finite state machine (HFSM), there are multiple levels of states. Higher-level states will only be concerned with transitioning to other states on the same level. On the other hand, lower-level states inside the parent state can only transition to each other. This tiered separation of responsibility helps to provide a little structural organization to a flat FSM and helps to keep some of the complexity under control.
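A quick sketch of that tiering, with purely hypothetical states: the top level only knows about Patrol and Combat, while the combat sub-states only ever transition among themselves.

// Sketch of a hierarchical FSM. Top-level transitions live in one update,
// the Combat sub-machine's transitions live in another.
enum class TopState  { Patrol, Combat };
enum class CombatSub { Aim, Shoot, Reload };

struct Agent {
    TopState  top = TopState::Patrol;
    CombatSub sub = CombatSub::Aim;
};

// Top level: only Patrol <-> Combat transitions are handled here.
void UpdateTop(Agent& a, bool enemyVisible)
{
    if (a.top == TopState::Patrol && enemyVisible)
    {
        a.top = TopState::Combat;
        a.sub = CombatSub::Aim;      // enter the sub-machine at its default state
    }
    else if (a.top == TopState::Combat && !enemyVisible)
    {
        a.top = TopState::Patrol;
    }
}

// Lower level: Aim, Shoot, and Reload only transition among each other.
void UpdateCombat(Agent& a, bool outOfAmmo)
{
    if (a.top != TopState::Combat) return;
    if (outOfAmmo)                       a.sub = CombatSub::Reload;
    else if (a.sub == CombatSub::Reload) a.sub = CombatSub::Aim;
    else if (a.sub == CombatSub::Aim)    a.sub = CombatSub::Shoot;
}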

If we were to place the HFSM into our Mexican metaphor, it would be similar to one of those nifty hard tacos wrapped in a soft taco shell. There’s still only so much you can pile into it before it gets unwieldy, but at least it doesn’t tend to shatter and make as big of a mess.

Building it from Scratch—Planning your Fajita

If you are anything like my wife, you want to be able to choose exactly what is in your Mexican dish—not just overall, but in each bite. That’s why she orders fajitas. While the fajita looks and acts a lot like a taco (specifically, a soft shell one), the method of construction is a little different. Rather than arriving as an already assembled construction, the typical method of serving it is to bring you the tortillas with the content in a couple of separate piles. You then construct your own on the spot like a personal mini buffet. You can choose what you want to put in the first one and then even change it up for the subsequent ones. It all depends on what you deem appropriate for your tastes at that moment.

The AI equivalent of this is the planner. While the end result of a planner is a state (just like the FSM and behavior tree above), how it gets to that state is significantly different.

Like a behavior tree, the reasoning architecture behind a planner is separate from the code that “does stuff”. A planner takes its situation—the state of the world at the moment—and compares it to a collection of individual atomic actions that it could do. It then assembles one or more of these tasks into a sequence (the “plan”) so that its current goal is met.

A planner actually works backwards from its goal.

Unlike other architectures that start at its current state and look forward, a planner actually works backwards from its goal (Figure 5). For example, if the goal is “kill player”, a planner might discover that one method of satisfying that goal is to “shoot player”. Of course, this requires having a gun. If the agent doesn’t have a gun, it would have to pick one up. If one is not nearby, it would have to move to one it knows exists. If it doesn’t know where one is, it may have to search for one. The result of searching backwards is a plan that can be executed forwards. Of course, if another method of satisfying the “kill player” goal is to throw a Taco of Power at it, and the agent already has one in hand, it would likely elect to take the shorter plan and just hurl said taco.

Figure 5 – The planner has found two different methods of achieving “kill player” and selected the shorter one.

The planner diverges from the FSM and BT in that it isn’t specifically hand-authored. Therein lies the difference in planners—they actually solve situations based on what is available to do and how those available actions can be chained together. One of the benefits of this sort of structure is that it can often come up with solutions to novel situations that the designer or programmer didn’t necessarily account for and handle directly in code.
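To show the backward-chaining idea (and nothing more), here is a toy C++ sketch. This is not Orkin’s GOAP or any shipped planner: world state is just a set of named facts, every action is equally cheap, and the search is a naive depth-limited recursion. The action names, though, mirror the “kill player” example above.

// Toy backward-chaining planner sketch (not any shipped GOAP implementation).
// World state is a set of true facts; each action has preconditions and effects.
#include <iostream>
#include <set>
#include <string>
#include <vector>

struct Action {
    std::string           name;
    std::set<std::string>  preconditions;  // facts that must hold to run it
    std::set<std::string>  effects;        // facts it makes true
};

// Satisfy 'goal': either it is already true, or find an action whose effects
// include it, plan for that action's preconditions first, then append it.
bool Plan(const std::set<std::string>& world, const std::vector<Action>& actions,
          const std::string& goal, std::vector<std::string>& plan, int depth = 8)
{
    if (world.count(goal)) return true;            // already satisfied
    if (depth == 0) return false;                  // keep the toy search finite
    for (const Action& a : actions) {
        if (!a.effects.count(goal)) continue;
        bool ok = true;
        for (const std::string& pre : a.preconditions)
            if (!Plan(world, actions, pre, plan, depth - 1)) { ok = false; break; }
        if (ok) { plan.push_back(a.name); return true; }
    }
    return false;
}

int main()
{
    std::vector<Action> actions = {
        {"SearchForGun", {},                 {"KnowWhereGunIs"}},
        {"MoveToGun",    {"KnowWhereGunIs"}, {"AtGun"}},
        {"PickUpGun",    {"AtGun"},          {"HaveGun"}},
        {"ShootPlayer",  {"HaveGun"},        {"PlayerDead"}},
    };
    std::set<std::string> world;                   // agent starts knowing nothing
    std::vector<std::string> plan;
    if (Plan(world, actions, "PlayerDead", plan))
        for (const auto& step : plan) std::cout << step << "\n";
}

Run as-is, it prints SearchForGun, MoveToGun, PickUpGun, ShootPlayer: a plan discovered backwards from the goal and executed forwards.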

From an implementation standpoint, a major plus of the planner is that a new action can be dropped into the game and the planner architecture will know how to use it. This speeds up development time markedly. All the author says is, “here are the potential things you could do… go forth and do things.”

Of course, a drawback of this is that authorial control is diminished. In a FSM or BT, creative, “outside the box” solutions were the exception within otherwise predictable, hand-authored systems. In a planner, the scripted, predictable moments are the exception; you must specifically override or trick the planning system to say, “no… I really want you to do this exact thing at this moment.”

While planner-based architectures are less common than behavior trees, there are notable titles that used some form of planners. Most famously, Jeff Orkin used them in Monolith’s creepy shooter, F.E.A.R. His variant was referred to as Goal-Oriented Action Planning or GOAP. For more information on GOAP, see Jeff’s page, http://web.media.mit.edu/~jorkin/goap.html

A more recent flavor of planner is the hierarchical task network (or HTN) planner such as was used to great effect in Guerilla’s Killzone 2. For more information on HTN planning in Killzone 2, visit http://aigamedev.com/open/coverage/htn-planning-discussion/

Putting It All in a Bowl—A Utility-Based Salad

Another architecture that is less structured than the FSM or behavior tree is what has been called in recent years simply the “utility-based” method. Much like the planner, a utility-based system doesn’t have a pre-determined arrangement of what to do when. Instead, potential actions are considered by weighing a variety of factors—what is good and bad about this?—and selecting the most appropriate thing to do. As you can see, this is similar to the planner in that the AI gets to choose what’s best at the time.

The action with the highest score wins.

Instead of assembling a plan like the fajita-like planner, however, the utility-based system simply selects the single next bite. This is why it is more comparable to a taco salad in a huge bowl. All the ingredients are in the mix and available at all times. However, you simply select what it is that you would like to poke at and eat. Do you want that piece of chicken in there? A tomato, perhaps? An olive? A big wad of lettuce? You can select it based on what you have a taste for or what is most accessible at the moment.

Figure 6 – A typical utility-based system rates all the potential actions on a variety of criteria and selects the best.

One of the more apparent examples of a utility-based AI system is in The Sims. In fact, the considerations are largely shown in the interface itself. The progression of AI architectures throughout The Sims franchise is well documented and I recommend reading up on it. The short version is that each potential action in the game is scored based on a combination of an agent’s current needs and the ability of that action or item to satisfy that need. The agent then uses an approach common in utility-based methods and constructs a weighted sum of the considerations to determine which action is “the best” at that moment. The action with the highest score wins (Figure 6).
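Here is a small C++ sketch of that weighted-sum scoring with made-up needs and actions; it illustrates the general pattern rather than the actual code of The Sims:

// Sketch of a weighted-sum utility scorer. Each action advertises how well it
// satisfies each need; the agent weights that by how badly it currently feels
// that need and picks the highest total. All names and numbers are invented.
#include <algorithm>
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct ActionOption {
    std::string name;
    std::map<std::string, float> satisfies;   // need -> how much this helps (0..1)
};

float Score(const ActionOption& a, const std::map<std::string, float>& needs)
{
    float total = 0.0f;
    for (const auto& [need, urgency] : needs) {
        auto it = a.satisfies.find(need);
        if (it != a.satisfies.end())
            total += urgency * it->second;     // weighted sum of considerations
    }
    return total;
}

int main()
{
    std::map<std::string, float> needs = {{"Hunger", 0.8f}, {"Energy", 0.3f}, {"Fun", 0.5f}};
    std::vector<ActionOption> options = {
        {"EatTaco",  {{"Hunger", 0.9f}, {"Fun", 0.2f}}},
        {"TakeNap",  {{"Energy", 0.8f}}},
        {"PlayGame", {{"Fun", 0.9f}, {"Energy", -0.1f}}},
    };
    auto best = std::max_element(options.begin(), options.end(),
        [&](const ActionOption& a, const ActionOption& b) { return Score(a, needs) < Score(b, needs); });
    std::cout << "Chosen action: " << best->name << "\n";   // highest score wins
}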

While utility-based systems can be used in many types of games, they are more appropriate in situations where there are a large number of potentially competing actions the AI can take—often with no obvious “right answer.” In those times, the mathematical approach that utility-based systems employ is necessary to ferret out what the most reasonable action to take is. Aside from The Sims, other common areas where utility-based systems are appropriate are in RPGs, RTS, and simulations.

Like behavior trees and planners, the utility-based AI code is a reasoner. Once an action is decided upon, the agent still must transition to a state. The utility system simply is selecting what state to go to next. In the same way, then, just like those other systems, the reasoning code is all in a single place. This makes building, editing, tuning and tweaking the system much more compartmentalized. Also, like a planner, adding actions to the system is fairly straightforward. By simply adding the action with the appropriate weights, the AI will automatically take it into account and begin using it in relevant situations. This is one of the reasons that games such as The Sims were as expandable as they were—the agents simply included any new object into their decision system without any changes to the underlying code.

The system is providing suggestions as to what might be a good idea.

On the other hand, one drawback of a utility system is that there isn’t always a good way to intuit what will happen in a given situation. In a behavior tree, for example, it is a relatively simple exercise to traverse the tree and find the branches and nodes that would be active in a particular situation. Because a utility system is inherently more fuzzy than binary, determining how the actions stack up is often more opaque. That’s not to say that a utility-based AI is not controllable or configurable. In fact, utility systems offer a deep level of control. The difference is that rather than telling the system exactly what to do in a situation, the system is providing suggestions as to what might be a good idea. In that respect, a utility system shares some of the adaptable aspects of planners—the AI simply looks at its available options and then decides what is most appropriate.

For more reading on utility-based systems, please check out my book, Behavioral Mathematics for Game AI or, if you have GDC Vault access, you can view my AI Summit lectures (with Kevin Dill) from 2010 and 2012 entitled, “Improving AI Decision Modeling through Utility Theory” and “Embracing the Dark Art of Mathematical Modeling” respectively.

Wrap it Up—A Neural Network Burrito

The last entry in my (extremely strained) metaphorical cornucopia is the burrito. In the other examples, all the content that was being delivered was open and available for inspection. In the case of the fajita, you (the AI users) were able to assemble what you wanted in each iteration. In the taco salad, the hard and soft tacos, and even the tostada you were able to, as the cliché says, “season to taste”. Perhaps a little extra cheese over the top? Maybe a few extra tomatoes? Even if you didn’t edit the content of your dish, you were at least able to see what you were getting into before you took a bite. There was no mystery about what made up your dinner; everything was open and available for your inspection.

The burrito is different in this respect. You are often told, in general terms, what the burrito is supposed to be packing but the details are often hidden from view. To paraphrase Winston Churchill, “it is a riddle, wrapped in a mystery, inside a soft flour shell.” While the burrito (and for that matter, the neural network) is extremely flexible, you have absolutely no idea what is inside or what you are going to get in the next bite. Don’t like sour cream? Olives? Tough. If it’s in there, you won’t know until you take that bite. At that point, it is too late. There is no editing without completely unwrapping the package and, for all intents and purposes, starting from scratch.

This is the caveat emptor of the NN-based AI solution. As a type of “learning” AI, neural nets need to be trained with test or live performance data. At some point you have to wrap up the training and say, “This is what I have”. If a designer wanders in, looks over your shoulder and says, “It looks pretty cool, but in [this situation] I would like it to do [this action] a little more often,” there’s really nothing you can do to change it. You’ve already closed your burrito up. About all you can do is try to retrain the NN and hope for the best.

So while the NN offers some advantages in being able to pile a lot of things into a huge concoction of possibilities, there are huge disadvantages in a lack of designer control after the fact. Unfortunately, this tends to disqualify NNs and other machine-learning solutions from consideration in the game AI environment where that level of control is not only valuable but often a requirement.

That said, there have been a few successful implementations of NNs in games—for example, Michael Robbins used NNs to improve the tactical AI of Supreme Commander 2 from Gas Powered Games. (For those with GDC Vault access, you can see more about his implementation of this in the AI Summit session, “Off the Beaten Path: Non-Traditional Uses of AI”.)

Browsing the Buffet

So we’ve covered a variety of architectures and, for what it’s worth, equated them with our Mexican delights. Going back to our premise, as far as the content goes, it’s all the same stuff. The difference is simply the delivery mechanism—that is, the shape of the wrapping—and the associated pros and cons of each. This has by no means been an exhaustive treatment of AI architectures. The purpose was simply to expose you to the options that are out there and why you may or may not want to select each for the particular tastes and needs of your project. To sum up, though, let’s go through the dishes once again…

You can certainly just throw the occasional rule into your code that controls behavior—but that’s not really an “architecture”. By organizing your AI into logical chunks, you can create a finite state machine. A FSM is relatively easy to construct and for non-programmers to understand. While this is good for simple agents with a limited number of behaviors, it gets brittle quickly as the number of states increases. By organizing the states into the logical tiers of a hierarchical finite state machine (HFSM), you can mitigate some of this complexity.

By removing the reasoning code from the states themselves, you can gain a lot more flexibility. By organizing behaviors into logically similar branches, you can construct a behavior tree. BTs are also fairly easy for designers and other non-programmers to understand. The main advantages, however, are not only the ease with which they can be constructed but how well they scale without the need for a lot of extra programming. Once the BT structure is in place, new branches and nodes can be added easily. However, despite how robust BT implementations can get, it is still a form of hand-authored scripting—“when X, do Y”.

Figure 7 — The type of architecture you select needs to be based on YOUR needs.

Like the behavior tree, a planner allows for very extensible creation. However, where the BT is more hand-authored by designers, a planner simply “solves” situations using whatever it feels is best for the situation. This can be powerful, but also leads to a very scary lack of control for designers.

Similarly, utility-based systems depart from the specific script approach and allow the characters to decide freely what to do and, as above, might unsettle some designers. They are incredibly expandable to large numbers of complex factors and possible decisions. However, they are slightly more difficult to intuit at times although tools can be easily built that aid that process.

The ultimate hands-off black box is the neural network. Even the programmers don’t know what’s going on inside their little neurons at times. The good news is that they can often be trained to “do what human players would do.” That aspect itself holds a lot of appeal in some genres. They are also a little easier to build since you only have to construct the training mechanism and then… well… train it.

There is no “one size fits all” solution to AI architectures.

The point is, there is no “one size fits all” solution to AI architectures. (You also don’t have to limit yourself to a single architecture in a game.) Depending on your needs, your team, and your prior experience, any of the above may be The Right Way for you to organize your AI. As with any technical decision, the secret is to research what you can, even try a few things out, and decide what you like. If I am your waiter (er, AI consultant), I can help you out… but the decision is ultimately going to be what you have a hankerin’ for.

Now let’s eat!

(The author would like to dedicate this article to all the GDC, Gamasutra, and GDMag staff who are still lamenting the departure of the beloved Mexican restaurant that used to be located in their building. I mourn with you, my friends.)

Getting More Behavior out of Numbers (GDMag Article)

Friday, December 2nd, 2011

(This column originally appeared in the January 2011 edition of Game Developer Magazine.)

We have long been used to numbers in games. Thousands of years ago, when people first started playing games, the simple act of keeping score was dealt with in terms of numbers. Even before games, when people had to scrape by for food, numbers were an integral part of life. From how many rabbits the hunter killed to how many sling stones he had left in his pouch, numbers have been a part of competitive activity for all of history.

“He’s only mostly dead.”

Coming back to the present, numbers are everywhere for the game player. Some are concrete values that have analogs in real life: How much ammo do we have? How many resources will this take to build? What is the range of that weapon? Some are a little more nebulous—if not contrived altogether: What level am I? What condition are my gloves in? What armor bonus does this magic ring afford? And, although the medical community might be a bit startled by the simplicity, games often parade a single number in front of us to tell us how much “life” we have. (“He’s only mostly dead.”) Suffice to say that we have educated our gaming clientele to forgive this intentional shredding of the coveted suspension of disbelief. Even though some games have attempted to obscure this relationship by removing some of the observable references to this numeric fixation, gamers still recognize that the numbers are still there, churning away behind the Wizard’s curtain.

Numbers in Games

As programmers, our fixation with numbers is not coincidental. After all, along with logic, mathematics is the language of our medium. Computers excel in their capacity to crunch all these numbers. They’re in their element, so to speak. This capacity is a primary reason that the pen-and-paper RPGs of the ’70s and ’80s so energetically made the transition to the computer RPGs of the ’80s and beyond. Resolving combat in Ultima (1980) and Wizardry (1981) was far swifter than shuffling through charts, tables, and scribbles on scratch paper in D&D.

So numbers in games aren’t going away any time soon—whether in overt use or under the hood. The interesting, and sometimes even disconcerting thing, is that they aren’t used more often. Even with all of the advances and slick architectures that artificial intelligence programmers use, we too often fall back on the most simple of logical constructs to make our decision code. The most obvious example is the venerable if/then statement. Testing for the existence of a criterion is one of the core building blocks of computational logic. “If I see the player, attack the player.” Certainly, this sort of binary simplicity has its place. Left to itself, however, it can fall woefully short of even being adequate.

The answer lies not in the existence of numbers in our world, but what those numbers represent and, ultimately, how we use them.

We could extend the above example by putting a number in the equation such as “If the player is < 30 [units of distance] away from me, attack him.” But really what have we gained? We are still testing for the existence of a criterion – albeit one that is defined elsewhere in the statement or in the program. After all, “see the player” and “player < 30” are simply functions. There is still a razor edge separating the two potential states of “idle” and “attack”. All subtlety is lost.

So how might we do things differently? The answer lies not in the existence of numbers in our world, but what those numbers represent and, ultimately, how we use them.

Looking Inward

Stop for a moment and do a self-inventory. Right now, as you sit reading this column, are you hungry? Sure, for the sake of simplicity, you may answer “yes” or “no”. However, there is usually more to it than that. When my daughter was younger, she tended to cease using “hungry” when she was no longer empty. (This usually meant that she ate two or three bites only to come back wanting more about 20 minutes later.) I, on the other hand, could easily see myself defining “hungry” as “no longer full”. My wife has the penchant for answering somewhat cryptically, “I could be.” (This is usually less convenient than it sounds.)

All of this makes sense to us on an intuitive level. “Hunger” is a continuum. We don’t just transition from “completely full” to “famished”—it is a gradient descent. What we do with that information may change, however, depending on where we are in that scale. For example, we can make judgment decisions such as “I’m not hungry enough to bother eating right now… I want to finish writing this column.” We can also make comparative judgments such as “I’m a little hungry, but not as much as I am tired.” We can even go so far as use this information to make estimates on our future state: “I’m only a little hungry now, but if I don’t eat before I get on this flight, my abdomen will implode somewhere over Wyoming. Maybe I better grab a snack while I can.”

The subtlety of the differences in value seems to be lost on game characters.

Compare this to how the AI for game characters is often written. The subtlety of the differences in value seems to be lost on them. Soldiers may only reload when they are completely out of ammunition in their gun despite being in the proverbial calm before the storm. Sidekicks may elect to waste a large, valuable health kit on something that amounts to a cosmetically unfortunate skin abrasion. The coded rules that would guide the behaviors above are easy for us to infer:

if (MyAmmo == 0)
{
    Reload();
}

if (MyHealth < 100)
{
    UseHealthKit();
}

Certainly we could have changed the threshold for reloading to MyAmmo <= 5, but that only kicks the can down the road a bit. We could just as easily have found our agent in a situation where he had 6 remaining bullets and, to co-opt a movie title, all was quiet on the western front. Dude… seriously—you’re not doing anything else right now, might as well shove some bullets into that gun. However, an agent built to only pay homage to the Single Guiding Rule of Reloading would stubbornly wait until he had 5 before reaching for his ammo belt.

Additionally, there are other times when a rule like the above could backfire (so to speak) with the agent reloading too soon. If you are faced with one final enemy who needs one final shot to be dispatched, you don’t automatically reach for your reload when you have 5 bullets. You finish off your aggressor so as to get some peace and quiet for a change.

Very rarely do we humans make a decision based solely on a single criterion.

Needless to say, these are extraordinarily simplistic examples, and yet most of us have seen behavior similar to this in games. The fault doesn’t rest in the lack of information—as we discussed, often the information that we need is already in the game engine. The problem is that AI developers don’t leverage this information in creative ways that are more indicative of the way real people make decisions. Very rarely do we humans make a decision based solely on a single criterion. As reasonable facsimiles of the hypothetical Homo economicus, we are wired to compare and contrast the inputs from our environment in a complex dance of multivariate assessments leading us to conclusions and, ultimately, to the decisions we make. The trick, then, is to endow our AI creations with the ability to make these comparisons of relative merit on their own.

Leveling the Field

So how do we do this? The first step is to homogenize our data in such a way as to make comparisons not only possible, but simple. Even when dealing with concrete numbers, it is difficult to align disparate scales.

Consider a sample agent that may have maximums of 138 health points and 40 shots in a fully-loaded gun. If at a particular moment he had 51 health and 23 bullets in the gun, we wouldn’t necessarily know at first glance which of the two conditions is more dire. Most of us would instinctively convert this information to a percentage—even at a simple level. E.g. “He has less than half health but more than half ammo.” Therein lies our first solution… normalization of data.

My gentle readers should be familiar with the term normalization in principle, if not the exact usage in this case. Put simply, it is restating data as a percent—a value from 0 to 1. In the case above, our agent’s health was 0.369 and his ammo status 0.575. Not only does viewing the data this way allow for more direct comparisons—e.g. 0.575 > 0.369—but it has the built-in flexibility to handle changing conditions. If our agent levels up, for example, and now has 147 health points, we do not have to take this change into consideration in our comparison formula. Our 51 health above is now 0.347 (51 ÷ 147) rather than 0.369. We have detached the comparison code from any tweaking we do with the actual construction of the values themselves.
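A trivial helper makes the point; the numbers in the comments are the ones from the example above:

// Restate a raw stat as a 0..1 value so disparate scales can be compared directly.
float Normalize(float value, float maxValue)
{
    return (maxValue > 0.0f) ? (value / maxValue) : 0.0f;
}

// Normalize(51.0f, 138.0f) -> ~0.369 (health)
// Normalize(23.0f,  40.0f) ->  0.575 (ammo)
// After leveling up: Normalize(51.0f, 147.0f) -> ~0.347, with no change to the
// comparison code that consumes these values.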

But What Does It Mean?

Value expresses a concrete number. Utility expresses a concept.

Normalization, however, only sets the stage for the actual fun stuff. Simply comparing percentages between statuses like “health” and “ammo” usually isn’t sufficient to determine the relative importance of their values. For example, I posit that being at 1% health is measurably more urgent than being down to 1% ammo. Enter the concept of utility.

Utility is generally a different measure than simply value. Value, such as our normalized examples above, expresses a concrete number. Utility, on the other hand, expresses a concept. In this case, the concept we are concerned with is “urgency”. While it is related to the concrete values of health and ammo, urgency is its own animal.

“What does this value mean to me?”

The easiest way of doing this is by creating a “response curve”. Think of passing the actual numbers through a filter of “what does this value mean to me?” That is what converting value to utility is like. This filter is usually some sort of formula that we use to massage the raw data. Unlike a lookup table of ranges (such as “ammo ≤ 5”), we have the benefit of continuous conversion of data. We will see how this benefits us later.

The selection of the formula needs to take into consideration specific contour of the translation from value to utility. There are innumerable functions that we can use, but they are all built out of a few simple building blocks. Each of these blocks can be stretched and squished in and of themselves, and combining them together results in myriad combinations.

The first filter that we can run our numbers through is simply a linear conversion. (For these examples, I will simply use the standard x and y axes. I’ll occasionally throw in an example of what they could represent.) Consider the formula:

y = 0.8x + 0.2

This results in a line running from our co-maximum values of (1.0, 1.0) down to (0, 0.2). (See Figure 1.) Put another way, our utility (y) descends steadily along with the actual value (x), but never quite reaches zero. We could have done something similar by changing the formula to:

y = 0.8x

At this point, the line extends from (1.0, 0.8) to (0, 0).

Figure 1

Obviously changing the slope of the line—in this case, 0.8—would change the rate that the utility changes along with the value (x). If we were to change it to 1.2, for example, the rate of descent would increase significantly. (See Figure 2.)

y = 1.2x − 0.2

It’s worth noting here that these formulas are best served by being combined with a clamping function that ensures that 0.0 ≤ y ≤ 1.0. When we take that into consideration, we have another feature to identify here: when x drops below roughly 0.17, y is always equal to 0.

On the other hand, consider the similar formula:

y = 1.2x

This exhibits the same effect with the exception that now the “no effect” zone is when x is above roughly 0.83. That is, the utility doesn’t start changing until our value drops below that point.

These effects are useful for expressing the situations where we simply do not care about changes in the utility at that point.

Figure 2

Enter the Exponent
The simple formulas above merely set the stage for more advanced manipulations. For example, imagine a scenario where the meaning of something starts out as “no big deal”, yet becomes important at an increasing rate. The state of the ammo in our gun that we illustrated above makes an excellent example. In this case, the value is simply the number of shots remaining whereas the utility value is our urgency to reload.

 

Analyzing this outside the realm of the math—that is, how would we behave—gives us clues as to how we should approach this. Imagine that our gun is full (let’s assume 40 shots for convenience)… and we fire a single shot. All other things being equal, we aren’t likely to get too twitchy about reloading. However, firing the last shot in our gun is pretty alarming. After all, even having 1 shot left was a heckuva lot better than having none at all. At this point, it is helpful to start from those two endpoints and move toward the center. How would we feel about having 35 shots compared to 5? 30 compared to 10? Eventually, we will start to see that we only really become concerned with reloading when our ammo gets down to around 20 shots—and below that, things get urgent very quickly!

In a simple manner, this can be represented by the following formula:

y = (x − 1)²

As we use up the ammo in our gun (x), there is still an increase in the utility of reloading, but the rate that the utility increases is accelerating. (See Figure 3.) This is even more apparent when we change the exponent to higher values such as 3 or 4. This serves to deepen the curve significantly. Note that a version of the formula with odd exponents would require an absolute value function so as to avoid negative values.

Figure 3
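Wrapped up as a helper, the curve might look like the sketch below, where x is the normalized ammo remaining and the exponent controls how deep the curve is (the absolute value covers the odd-exponent case just mentioned):

#include <algorithm>
#include <cmath>

// Response curve for reload urgency: low while the gun is mostly full,
// accelerating sharply as it approaches empty.
float ReloadUrgency(float ammoNormalized, float exponent = 2.0f)
{
    float y = std::pow(std::fabs(ammoNormalized - 1.0f), exponent);
    return std::clamp(y, 0.0f, 1.0f);   // keep utility in the 0..1 range
}

// ReloadUrgency(1.0f)  -> 0.0     full gun, no urgency
// ReloadUrgency(0.5f)  -> 0.25    half empty, mild urgency
// ReloadUrgency(0.25f) -> ~0.56   urgency accelerating
// ReloadUrgency(0.0f)  -> 1.0     empty gun, maximum urgency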

Another quick note about manipulating these formulas. We could turn the above curves “upside down” by arranging it as follows:

y = 1 − x²

Looking at the chart (Figure 4) shows that this version provides a significantly different behavior—an agent who has a very low tolerance for having an empty gun, for example!

Figure 4

By manipulating how the function is arranged, we can achieve many different arrangements to suit our needs. We can shift the function on either axis much as we did the linear equations above, for example. (See Figure 5.)

Figure 5

We can specify where we want the maximum utility to occur—it doesn’t have to be at either end of the scale. For example, we might want to express a utility for the optimal distance to be away from an enemy based on our weapon choice. (See Figure 6.)

y = 1 − |2(x − 0.3)|²

Figure 6

Soft Thresholds
While we can certainly get a lot of mileage out of simple linear and exponential equations, one final class of formulas is very useful. Sigmoid functions, particularly the logistic function, can be used to define “soft thresholds” between values. In fact, logistic functions are often used as activation functions in neural networks. Their use here, however, is much less esoteric.

The base logistic function is:

y = 1 / (1 + e^(−x))

While the base of the natural logarithm, e, is conspicuous in the denominator of the fraction, it is really optional. We can certainly use the approximation of 2.718 in that space, or 2, 3, or any other number. In fact, by changing the value we use for the base, we can achieve a variety of different slopes in the center portion of the resulting curve. As stated, however, the formula graphs out as shown in Figure 7.

Figure 7

Notice that, unfortunately, the graph doesn’t naturally fit the 0–1 setup of our other examples. The curve only asymptotically approaches y = 0 and y = 1, and its interesting transition is centered at x = 0 rather than inside our normalized range of x values. We can apply some shifting and scaling to get it to fit the 0–1 range, however, so that we can use it with normalized values of x. We can also change the area of the graph where the threshold occurs by changing what we subtract in the exponent.

y = 1 / (1 + e^(−(10x − 5)))
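As a reusable helper, that shifted logistic might look like the following sketch; a center of 0.5 and a steepness of 10 reproduce the formula above, and moving either parameter moves or sharpens the threshold:

#include <cmath>

// Soft threshold: output eases from ~0 to ~1 around 'center', with 'steepness'
// controlling how abrupt the transition is.
float SoftThreshold(float x, float center = 0.5f, float steepness = 10.0f)
{
    return 1.0f / (1.0f + std::exp(-(steepness * (x - center))));
}

// SoftThreshold(0.2f) -> ~0.05   well below the threshold
// SoftThreshold(0.5f) ->  0.5    right at the threshold
// SoftThreshold(0.8f) -> ~0.95   well above it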

Comparing and Contrasting

We can line up dozens — or even hundreds — of utilities for various feelings or potential actions.

The end result of all of this is that we can create very sophisticated response curves that translate our raw values into meaningful utility values. Also, because these end products are normalized, we can now easily compare and contrast them with other results. Going back to the examples I cited early on, we can decide how hungry we are in relation to other feelings such as tired (or too busy finishing a last-minute column for a magazine). In fact, we can line up dozens—or even hundreds—of utilities for various feelings or potential actions and select from among them using techniques as simple as “pick the highest” to seeding weight-based randoms.
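As a sketch of those selection schemes: “pick the highest” is just a max over the scores, while a weighted random treats each normalized utility as that option’s share of the total. The option names and numbers below are invented purely for illustration.

#include <iostream>
#include <random>
#include <string>
#include <vector>

struct Option { std::string name; float utility; };   // utility already normalized 0..1

// Weighted random: each option's chance is proportional to its utility.
std::string WeightedRandomPick(const std::vector<Option>& options, std::mt19937& rng)
{
    float total = 0.0f;
    for (const auto& o : options) total += o.utility;
    std::uniform_real_distribution<float> dist(0.0f, total);
    float roll = dist(rng);
    for (const auto& o : options) {
        if (roll < o.utility) return o.name;   // landed in this option's slice
        roll -= o.utility;
    }
    return options.back().name;                // guard against rounding at the end
}

int main()
{
    std::mt19937 rng{std::random_device{}()};
    std::vector<Option> options = {{"Eat", 0.62f}, {"Sleep", 0.31f}, {"Work", 0.84f}};
    std::cout << WeightedRandomPick(options, rng) << "\n";
}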

Compare this to what we would have to do were we not to use the normalized utility values. In our hungry/tired/busy example, we normally would have had to construct a multi-part condition to define each portion of our decision. For example:

If ( (Hungry > 5) && (Tired < 3) && (Busy < 7) ) then
{
Eat();
}

If ( (Hungry < 4) && (Tired > 6) && (Busy < 3) )then
{
Sleep();
}

If (…

Ya know what? Never mind…

Even if the above values were normalized (i.e. between 0 and 1), the complexity explosion in simply comparing the different possible ranges and mapping them to the appropriate outcome would get out of hand quickly. And that’s just with 3 inputs and 3 outputs! By converting from value to utility, we massage what the data “means to us” inside each response curve. We now can feel comfortable that a direct comparison of the utilities will yield which item is truly the most important to us.

The system is extensible to grand lengths as well. If we want to include a new piece of information or a new action to take into account, we simply need to add it to a list. Because all the potential actions are scored and sorted by their relative benefit, we will automatically take newcomers in stride without much (if any) adjustment to the existing items.

The Sims is an excellent example of how complex utility-based functions can be used.

If calculating and measuring all of these feelings and desires is starting to sound a lot like The Sims, it is not a coincidence. The Sims is an excellent (but not the only) example of how complex utility-based functions can be used to simulate fairly reasonable, context-dependent, decision processes in agents. Richard Evans has spoken numerous times at GDC on this very subject. I wholeheartedly recommend reading his papers and viewing his slides on the subject.

The uses of these methods aren’t limited to that genre, however. Strategy games, in particular, lend themselves to more nuanced calculation. Even in modern shooters and RPGs, agents are expected to make increasingly more believable decisions in environments that contain significantly more information. Our AI no longer has the luxury of simply leaning on “if I see the player, shoot him!” as its sole guideline, and building static rulesets that address all the possible permutations of world state gets brittle at an exponential pace.

However, as I’ve illustrated (ever so briefly) the inclusion of some very simple techniques lets us step away from these complicated, often contrived, and sometimes even contradictory rulesets. It also allows us, as AI designers, to think in familiar terms of “how much”—the same terms that we often use when we think of our own (human) states. The numbers we need are there already. The magic is in how we use them.

You can find all of the above and more in my book, Behavioral Mathematics for Game AI.

Lydia vs. the Gate: Will she ever learn?

Monday, November 21st, 2011

10 days in and I still have not played Skyrim. I’ve been too busy. However, that doesn’t stop me from seeing what other people have pointed out. Given that it is a PC-playable game, there is no shortage of YouTube videos out there showing many of the plusses and minuses of it. (There seem to be plenty of both.) If I took the time to analyze each one that I saw, I would never get anything done and would have even less time to play it myself. That said, some things are too easy to pass up.

This video came to my attention via someone on Twitter and I thought it was worth a mention.

 

This is something that is so easily fixed that it is spectacular that this even occurs.

Obviously, our poor Lydia is having a difficult time with this gate trap. The problem is, she really shouldn’t. While we can understand Lydia getting whacked the first time (after all, that’s what traps are all about, right?) why is it that she persists in trying to go through the same area? This is something that is so easily fixed — even with likely existing tech and logic — that it is spectacular that this even occurs.

The short version of why this is happening can likely be summed up as follows:

  1. The pathfinding engine in Skyrim is a waypoint graph rather than a navmesh. The edge that she is trying to follow runs right through the middle of that gate and, therefore, right over the trap trigger.
  2. Even when knocked off the graph, her top movement priority is to get back on the path at the nearest node. This is why she moves to the center of the hall instead of just moving along the left (her left, our right) wall towards the player.
  3. She has no recollection of getting hit by the gate. Therefore, nothing about her processing is different in each iteration.
  4. Even if she recalled that the gate is the problem and was able to understand that the trigger stone was the issue, on a waypoint graph she has no way to really steer around the stone anyway.
  5. When she is stuck behind the gate against the wall, she has no realization that she is stuck… therefore, she keeps “running”.

As you can tell, this could be remedied fairly simply. First, for the pathfinding issue, a navmesh would be very helpful here. (For an excellent treatment on waypoint graphs vs. navmeshes, see Paul Tozour’s post on Game/AI, Fixing Pathfinding Once and for All.) That way, the stone could be created as a separate mesh polygon and, upon discovery, marked as something to avoid.

Of course, the above is premised on the stone being "discovered" in the first place. Certainly, Lydia managed to "discover" the stone when she got whacked the first time. What she failed to do was make a mental note (in an e- sort of way) of its existence. It is at this point that the AI started to look stupid. Again, this is not really all that hard to handle. In fact, by simply doing what I suggested above (marking up the navmesh as being unusable), the memory is implied by her subsequent behavior of avoiding the stone. No animations, voice barks, etc. needed. She just doesn't do the same thing twice.
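As a rough illustration (this is not Skyrim's actual code, and every name here is invented), that "mental note" can be as small as poisoning the offending navmesh polygon so the next path request simply routes around it:

    AVOID_COST = 10000.0    # effectively "never walk here if any alternative exists"

    class NavMesh:
        def __init__(self):
            self.extra_cost = {}              # polygon id -> added traversal cost

        def mark_dangerous(self, poly_id):
            self.extra_cost[poly_id] = AVOID_COST

        def traversal_cost(self, poly_id, base_cost):
            return base_cost + self.extra_cost.get(poly_id, 0.0)

    class NPC:
        def request_repath(self):
            print("repathing around the marked polygon")

    def on_damaged_by_trap(npc, navmesh, poly_under_npc):
        # The one-line "mental note": Lydia never needs to know *what* hurt her,
        # only that this particular patch of floor did.
        navmesh.mark_dangerous(poly_under_npc)
        npc.request_repath()

    mesh = NavMesh()
    on_damaged_by_trap(NPC(), mesh, poly_under_npc=42)
    print(mesh.traversal_cost(42, base_cost=1.0))   # 10001.0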

Being stuck behind the gate is actually a separate problem entirely, and I won't bother to address the details here. Suffice it to say, however, that it is partially similar in that it stems from the notion that NPCs rarely have a sense of "futility".
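A sense of "futility" can be just as cheap. A purely illustrative sketch of the usual trick: if the agent has been trying to move but has barely changed position over a small time window, abandon or re-plan the current path.

    # Illustrative stuck detector: returns True when the agent has been trying
    # to move but has not covered min_progress meters within the time window.
    class StuckDetector:
        def __init__(self, min_progress=0.5, window=3.0):
            self.min_progress = min_progress
            self.window = window
            self.timer = 0.0
            self.last_pos = None

        def update(self, pos, dt, wants_to_move):
            if not wants_to_move or self.last_pos is None:
                self.timer, self.last_pos = 0.0, pos
                return False
            dx, dy = pos[0] - self.last_pos[0], pos[1] - self.last_pos[1]
            if (dx * dx + dy * dy) ** 0.5 >= self.min_progress:
                self.timer, self.last_pos = 0.0, pos
                return False
            self.timer += dt
            return self.timer >= self.window    # "this is futile -- do something else"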

Anyway, I just thought that this was worthy of note specifically because the solution is so easy to implement. It makes me wonder why major studios can advance so much in some aspects of their AI and yet have such glaring holes in other areas. I suppose that’s why we now have the AI Game Programmers Guild and the GDC AI Summit. We are trying to share enough information between us that we are lifting the floor of game AI as a whole.

Fritz Heckel’s Reactive Teaming

Tuesday, May 25th, 2010

Fritz Heckel, a PhD student in the Games + Learning Group at UNC Charlotte, posted a video (below) on the research he has been doing under the supervision of G. Michael Youngblood. He has been working on using subsumption architectures to create coordination among multiple game agents.

When the video first started, I was a bit confused in that he was simply explaining an FSM. However, when the first character shared a state with the second one, I was a little more interested. Still, this isn't necessarily the highlight of the video. As more characters were added, they split the goal of looking for a single item amongst themselves by dividing up the search space.

This behavior certainly could be used in games… for example, with guards searching for the player. However, this can be solved simply using other architectures. Even something as simple as influence mapping could handle it. In fact, Damián Isla's occupancy maps could be tweaked accordingly to allow for multiple agents in a very life-like way. I don't know what Fritz is using under the hood, but I have to wonder if it isn't more complicated.
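For the searching case specifically, something like the following toy occupancy-map sketch (not Fritz's code, and only loosely inspired by Isla's talk) already gets multiple searchers dividing up the space without any explicit negotiation:

    # Toy occupancy map: each cell holds the probability that the target is
    # there. Each searcher claims the best unclaimed cell; the coordination
    # falls out of the shared map rather than any explicit messaging.
    GRID_W, GRID_H = 10, 10
    occupancy = {(x, y): 1.0 / (GRID_W * GRID_H)
                 for x in range(GRID_W) for y in range(GRID_H)}
    claimed = set()

    def pick_search_cell(agent_pos):
        def score(cell):
            dist = abs(cell[0] - agent_pos[0]) + abs(cell[1] - agent_pos[1])
            return occupancy[cell] / (1.0 + dist)   # likely *and* nearby
        best = max((c for c in occupancy if c not in claimed), key=score)
        claimed.add(best)
        return best

    def mark_searched(cell):
        # Nothing there; renormalize so the remaining probability shifts elsewhere.
        occupancy[cell] = 0.0
        total = sum(occupancy.values())
        for c in occupancy:
            occupancy[c] = occupancy[c] / total if total else 0.0
        claimed.discard(cell)

    # Three guards starting in different corners split the map between them.
    for guard in [(0, 0), (9, 0), (5, 9)]:
        print(guard, "->", pick_search_cell(guard))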

Obviously, his searching example was just a simple one. He wasn't setting out to design something that allowed people to share a searching goal, per se. He was creating an architecture for cooperation. This, too, has been done in a variety of ways. Notably, Jeff Orkin's GOAP architecture from F.E.A.R. did a lot of squad coordination that was very robust. Many sports simulations do cooperation — but that tends to be more playbook-driven. Fritz seems to be doing it on the fly without any sort of pre-conceived plan or even pre-known methods by the eventual participants.

In a way, it seems that the goal itself is somewhat viral from one agent to the next. That is, one agent in effect explains what it is that he needs the others to do and then parcels it out accordingly. From a game standpoint, it seems that this is an unnecessary complication. Since most of the game agents would be built on the same codebase, they would already have the knowledge of how to do a task. At this point, it would simply be a matter of having one agent tell the other "I need this done," so that the appropriate behavior gets switched on. And now we're back to Orkin's cooperative GOAP system.

On the whole, a subsumption architecture is an odd choice. Alex Champandard of AIGameDev pointed out via Twitter:

@fwph Who uses subsumption for games these days though? Did anyone use it in the past for that matter?

That's an interesting point. I have to wonder if, as is the case at times with academic research, it is not a case of picking a tool first and then seeing if you can invent a problem to solve with it. To me, a subsumption architecture seems like it is simply the layered approach of an HFSM married with the modularity of a planner. In fact, there has been a lot of buzz in recent years about hierarchical planning anyway. What are the differences… or the similarities, for that matter?

Regardless, it is an interesting, if short, demo. If this is what he submitted to present at AIIDE this fall, I will be interested in seeing more of it.

Promised AI Count in Halo Reach

Sunday, February 14th, 2010

I was looking at my daily barrage of Google alerts on "game AI" (which tend to contain an annoying number of links to stories about Allen Iverson) and this article blurb from the Bitbag happened to catch my eye. It's a preview of Halo Reach and seems to be a fairly thorough treatment, covering a lot of the different gameplay elements and how they differ from prior games in the franchise. There was only a little bit of info about the AI, however. It said:

Bungie wants this game to feel a lot like Combat Evolved. They want Reach to be filled with open environments filled with enemies and allow you to figure out how you want to deal with the situation. There will be corridor battles like what we’ve seen in past Halos, but that will be balanced with the terrain of Reach. Reach will have a full weather system as well as Bungie saying they will have “40 AI and 20 vehicles” on screen at a time.

I thought that was kind of interesting simply because my first reaction was “is that all?” After a moment’s reflection, I realized that the number of AI on the screen in prior Halo games – and even in other shooters – is usually along the lines of a dozen… not 2 score.

On the other hand, in a game like Assassin's Creed (my Post-Play'em observations), there were plenty of AI on-screen. However, the majority of them were just the citizens who, for the most part, weren't doing much beyond animation and local steering (until you engaged them for some reason). The question about Bungie's promise above, then, is how much level-of-detail scaling will there be with those 40 on-screen AI characters?

Typical LOD scaling has a tiered system such as:
  • Directly engaged with the player
  • On-screen but not engaged
  • Off-screen but nearby
  • On-screen but distant
  • Off-screen and distant

Each of those levels (in order of decreasing AI processing demand) has a list of things that it must pay attention to or a different timer for how long to wait between updates. How much of this is Bungie tapping into with Reach, or are they all running at the same LOD?
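For illustration only (Bungie's actual tiers and timings are not public as far as I know), a scheme like that usually boils down to nothing more exotic than a per-tier update interval:

    # Hypothetical LOD tiers mapped to how often each AI gets a full "think".
    # The tier names mirror the list above; the intervals are made up.
    UPDATE_INTERVAL = {
        "engaged": 0.1,            # seconds between full sensing/decision passes
        "onscreen_idle": 0.3,
        "offscreen_near": 0.5,
        "onscreen_distant": 1.0,
        "offscreen_distant": 3.0,
    }

    class AgentLOD:
        def __init__(self, tier):
            self.tier = tier
            self.cooldown = 0.0

        def tick(self, dt, think):
            self.cooldown -= dt
            if self.cooldown <= 0.0:
                think()                                   # the expensive part
                self.cooldown = UPDATE_INTERVAL[self.tier]

    # An engaged enemy thinks every few frames; a distant one barely at all.
    agents = [AgentLOD("engaged"), AgentLOD("offscreen_distant")]
    for frame in range(90):                               # ~3 seconds at 30 fps
        for agent in agents:
            agent.tick(1.0 / 30.0, think=lambda: None)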
I know that the AI guys at Bungie are pretty sharp and go out of their way to pull off nifty stuff. In fact, ex-Bungie AI lead Damián Isla just did an interview with AIGameDev on blackboard architectures (my writeup) where he explained how a lot of the Halo 2 and 3 AI was streamlined to allow for better processing of many characters. I'm quite sure that much of that architecture survives in Halo Reach.

Anyway, I just thought it was interesting to see the promised numbers. It will be even more interesting to see how the marketing blurb actually plays out on the screen in the final product.

Random Map Generation for Warzone 2100

Friday, February 12th, 2010

I ended up on this page via a tweet as I was ingesting my morning caffeine (meaning it was a sort of idle, random click). What I found, however, was strangely compelling. The game, Warzone 2100, seems to be a small indie project of some sort having to do with finding oil wells as resources in a post-apocalyptic world. The page I was linked straight into, however, was about their random map generation tool. Specifically, I read the "about" page on a piece called "Diorama" and how it works. Interesting stuff.

The page starts out with this blurb:

This article is part technical documentation, part feature list and also part FAQ. It intends to explain why Diorama was written the way it was, why `it takes so long’ and what it can do that would be extremely hard for other random map generators to achieve.

I had to wonder if that was meant to be pretentious or not. After all, random map generation has been around for quite some time – and with excellent results in some cases (e.g. I was never disappointed with a random map in Empire Earth).

Anyway, it steps through the procedures that they use for generating the random terrain, complete with cliffs, etc., adding roads, textures, and interesting features.
The most important thing that they emphasize (IMHO) is how they transition from the blocky "first pass" into a more natural-looking layout. For both cliffs and roads they used the word "jitter" fairly often to explain how they fuzzied things up. The before-and-after shots show how much of a difference this makes.
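The "jitter" step itself is conceptually tiny. A sketch of the general idea (not Diorama's actual implementation): repeatedly subdivide each straight segment and nudge the new midpoints, so blocky cliff and road outlines end up looking hand-drawn.

    import random

    # Midpoint-displacement style jitter: split each segment and nudge the new
    # midpoint, with smaller nudges each pass as the segments get shorter.
    def jitter_polyline(points, passes=3, amount=2.0):
        for _ in range(passes):
            out = [points[0]]
            for (x0, y0), (x1, y1) in zip(points, points[1:]):
                mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
                out.append((mx + random.uniform(-amount, amount),
                            my + random.uniform(-amount, amount)))
                out.append((x1, y1))
            points = out
            amount *= 0.5
        return points

    blocky_road = [(0, 0), (40, 0), (40, 30)]
    print(len(jitter_polyline(blocky_road)), "points after jittering")   # 17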
I think one of the more compelling mentions on this page, however, was that they are attempting to use answer set programming (ASP) to address the initial set-up of the starting locations for players and the oil wells. From their description:
ASP is a declarative approach to solving search problems, so you write a description of the problem (in a logic like language) and then give this description to a solver (kind of like a theorem prover) which will come up with valid model (solution) of the problem. But not just any solution, we will arrive at the optimal solution, and we can prove that the solution is optimal. The down side is that generating this can take exponential time (this is why requesting very large maps can take a while) but it allows both local (“a base must have a given number of entrances”) and global (“every base must be able to reach every other base”) conditions on the map to be expressed simply, cleanly and efficiently. Critically we don’t have to change any of the algorithms when the conditions on the map change, we just change the description that gets input into the solver.

I don't know if that is necessarily the best approach for this. Does it work? Probably. Is it overkill? Maybe. They provide other descriptions of methods that could be used (they even mention genetic algorithms) prior to offering ASP as a solution. I think, however, that there were some serious gaps in that list. I don't want to get too deep into how I would tackle the problem… after all, I still don't have enough caffeine in me. That's not the point of this anyway… I just want to give kudos to them for looking into novel solutions.

Anyway, check it out. It’s an interesting read.

AIIDE 2009 – AI Challenges in Sims 3 – Richard Evans

Thursday, October 29th, 2009

This is the rough dump of my notes from Richard Evans' AIIDE 2009 invited talk on the AI challenges they faced in developing The Sims 3. Some of it was familiar to me as being exactly what he presented as part of our joint lecture at the GDC AI Summit in 2009. Other portions of it were new.

Specifically, I enjoyed seeing more about how they handled some of the LOD options. For example, rather than parsing all the available actions, a sim would decide what lot to go to, then what sim to interact with, and then how to interact. Therefore, the branching factor was significantly more manageable.

Another way they dealt with LOD was in the non-played Sims. Rather than modeling exactly what they were doing when (while off-screen), they made some general assumptions about their need for food, rest, taking a leak, etc. These were modeled as “auto-satisfy” functions. For example, if you met a sim close to dinner time, he would likely be hungry. If you met him a little later, he would present as being full.
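A minimal sketch of that auto-satisfy idea (the curve shape and the numbers are mine, not Maxis'): rather than simulating the off-screen Sim's meals, derive his hunger from the clock at the moment you actually meet him.

    # Hypothetical auto-satisfy curve: hunger as a function of time of day,
    # assuming the off-screen Sim "ate" at the usual meal times.
    MEAL_TIMES = [7.0, 12.5, 19.0]      # breakfast, lunch, dinner (hours)

    def assumed_hunger(hour_of_day):
        # 0.0 = just ate, 1.0 = fully hungry about six hours after a meal
        hours_since_meal = min((hour_of_day - m) % 24.0 for m in MEAL_TIMES)
        return min(1.0, hours_since_meal / 6.0)

    print(assumed_hunger(18.5))   # just before dinner -> 1.0 (hungry)
    print(assumed_hunger(19.5))   # just after dinner  -> ~0.08 (full)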
Additionally, as you will see below, the entire town had underlying simulation mechanics that balanced how many people were dying and being born, what gender they were (on average), and even where they were moving to and from. They modeled much of this with a very simple geometric interface early on so that they could test their mathematical models. Same with the simple behaviors. He showed video demos of these models in action. This also allowed them to speed up time to ridiculous levels and let the sim run overnight to test for situations that would tip the sim out of balance. Lots of fun!
He also mentioned how the behavior selection was done. This was important to me in that he showed how they used some of the same techniques that I talk about in my book. Specifically, he uses a utility-based method and selects from the behaviors using weighted randoms of the top n selections. Excellent work, sir!
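That selection step is easy to sketch (the behaviors and scores below are invented): score every candidate, keep the top n, and pick among those with probability proportional to score.

    import random

    # Utility-based selection: take the n highest-scoring behaviors and choose
    # one at random, weighted by score.
    def choose_behavior(scored_behaviors, n=3):
        top = sorted(scored_behaviors, key=lambda item: item[1], reverse=True)[:n]
        behaviors, scores = zip(*top)
        return random.choices(behaviors, weights=scores, k=1)[0]

    candidates = [("eat", 0.8), ("sleep", 0.6), ("watch_tv", 0.5), ("scrub_toilet", 0.1)]
    print(choose_behavior(candidates))    # usually "eat", occasionally the others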
The following are my raw notes.
AI Challenges in Sims 3
Richard Evans
He mentioned the website dedicated to Alice and Kev. The author simply sat back and watched the Sims do their autonomous behavior and wrote about it.
1. Hierarchical Planning
2. Commodity-Interaction Maps
3. Auto-satisfy curves
4. Story progression
Instead of nesting decisions about which act to perform on which person in which lot, you choose a lot first, then choose a person, then choose an action.
O(P + Q + N) instead of O(P * Q * N)
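(Illustrative numbers, not Maxis': with 20 lots, 10 Sims per lot, and 30 possible interactions, that is roughly 20 + 10 + 30 = 60 evaluations per decision instead of 20 × 10 × 30 = 6,000.)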
Data-driven approach so that the venues populate appropriately (e.g. restaurants)
Optimization
If you are full, don’t even consider eating as a possible selection of what to do.
Auto-satisfy curves for LOD. That way you don’t have to simulate the off-screen Sims. Assume that they have eaten at the right times, etc.
Other Sims need to progress through life the same way that your Sim does. Age, marriage, children, career, move, etc. Long-term life-actions are simulated at LOD.

The town has various meta-level desires. (Gender balance so that we don't have all male or all female.) (Employment rate for the entire town. Some people will be unemployed… peaks at ~80-90%.)
High-level prototype showing the major life actions (not smaller actions). Simulating the town without simulating the people.
Making Sims looking after themselves

Utility modeling
Highest-scoring
Randomly from the n highest-scoring actions
Randomly using the score distribution as the probability distribution (weighted randoms)
Personality and Traits and Motives
Same that he talked about at AI Summit
Traits -> Actions = massively data-driven system to minimize hard-coded systems.
Kant’s categorical imperatives?!?
Emily Short: “The conversation is an end in itself.”
Take Home Actionable items
Data-drive everything!
Take the time to make good in-game viz. tools
Prove out all simulation ideas using prototypes as soon as possible.
Richard shows excellent 2-D prototype that runs sims without the realistic world!

AIIDE 2009 – The Photoshop of AI Debate – Michael Mateas

Monday, October 19th, 2009

The following are my rough notes from Michael Mateas’ invited talk. He continued the “Photoshop of AI” debate that was started by Chris Hecker at GDC 2008 and continued at a panel at the 2009 AI Summit at GDC. To sum up what he presented, he basically said it was a non-issue because it was based on a number of false premises. Here are my notes:

Recaps Chris Hecker’s 2008 talk on the subject. Talks about what the panel at the 2009 AI Summit said on the subject.

Photoshop of AI is a mirage:
Grossly underestimates the size of the AI problem.
Overestimates the intuitiveness of visual art production.
Doesn’t take seriously the property of conditional execution.
DOES highlight the authoring problem in AI.
False premise that graphics and AI are set up as equals.
The problem of representing 3D space is the same from game to game. Quake to Prince of Persia to Bioshock.
For AI NPCs, e.g. Thief compared to Madden: NOT the same problem. Many examples of how different games pose different problems.

Representing graphics via the texture mapped triangle is infinitesimally small compared to representing AI via code.
Graphics represents the problem of renaissance perspective. Invents a specific style of representation.
AI represents almost everything else. AI is NP-Hard.
Book: Noah Wardrip-Fruin – Expressive Processing
Hecker's desiderata for style DOFs:
Intuitive
Expressive
Frugal
Blendable
Efficient
Photoshop is not really “intuitive”. 17,133 results on Amazon for books about how to use Photoshop… 1602 since January 2009. So much for intuitive.
Photoshop builds on millennia of practice in the visual arts.
Even if Photoshop is hard to learn, it may still be “intuitive” for trained visual artists.
AI Designer is a new type of artist. We have no pre-existing practice for us to be “intuitive” relative to. “A new kind of artist working in a fully computational medium, focused on the aesthetics of behavior.”
Essence of AI is that it is a conditional process executing over time… not static. The style of such a process lies in its conditional decision making. Different in kind from the static nature of the triangle/texture decomposition.
Spore Galactic Adventures… people could create creatures but not quests. Creation of the creatures gave immediate feedback, but creation of quests… the only way you can test one is to play it over and over because it is procedural.
Static artifacts support immediate global feedback. What would that look like for AI? You can’t get the static visualization feedback from AI.
Birth of AI was declaration that computer is a general symbol manipulation machine (not just calculator).
Re: Façade
Is it data-driven? No
It must be scripted. No.
WTF? (Fairy dust!)
People think either in terms of floats or in imperative code. Instead, Façade is based on blendable symbolic behaviors.
Instead of data-driven AI, think in terms of knowledge-driven AI. Classic AI approach is to create a knowledge-representation language that solves a specific problem you have, and then write the reasoning system (interpreter) for that language. Starts to sound like a structure vs. style decomposition.
Related to scripting, but far more general. Every scripting language that he has seen has been imperative – like C.
Agree with Chris that game AI must be authorable… BUT, there is no:
Universal solution
Magic numbers-and-flags only representation
Pre-existing artistic tradition we can leverage
How do you know if what you are doing is any good?
Authorial Leverage…
Leverage = (Quality * Variability) / Effort
Sims 3 = rule-based language with custom predicates and custom actions for designers to write the rules for world effects and state changes. E.g. “if this social practice is carried out in this situation, what would happen?” Custom KR model (knowledge-driven AI). Not the Photoshop of AI in Chris’ sense because it is not universal. It is domain-specific.

AIIDE 2009 – Game Design: An AI Perspective – Paul Tozour

Sunday, October 18th, 2009

My rough notes from Paul Tozour’s AIIDE 2009 presentation, Game Design: An AI Perspective

Game design from the perspective of AI – the opposite of Will Wright’s AIIDE lecture.

How can AI contribute to the advancement of game design? Using game AI as formal modeling and analysis tools. Not as modeling NPCs.

You stop playing a good game when you stop using enough of your mind to be engaging.

Grinder = repetitive = lack of intellectual diversity.

Zen koan = “Shock the listener out of their established thought patterns.” Makes them think about things in new ways.

“We are building structures inside the player’s mind.”

40 years of game design and we still have no universally accepted standards for game design. No template. No vernacular.

Use AI for profiling and testing game mechanics.

Computational Equivalence – what are the ways to show some sort of similarity between how the human mind functions and how game AI functions.

When we play a game at a high skill level, we use discrete mental structures for each type of task. In our brains, we combine planners, HFSMs, BTs, etc.

Example: Constructing a BT out of player’s play patterns.

  1. Take a high level game player.
  2. Record the game play
  3. Analyze the game play to identify patterns
  4. Find appropriate structure that models the patterns

What does doing this get you? What can we identify?

  • Task has too many children. Giving the player too many options? Lots of overlap between similar things.
  • Task has too few children. Haven’t given the player the tools to do a certain thing. Also, might get boring if you only have one option to deal with a particular situation
  • Orphaned actions. Why isn’t the player using it? Perhaps refactor it to make it more useful?
  • Action is overrepresented. Perhaps refactor it to make it less useful.
  • Insufficiently differentiated actions (similar stuff)
  • Build challenges that ensure that every part of the tree is exercised. Set game up to make sure they have to use all abilities.
  • Identify disjoint branches. Something that doesn’t have anything to do with the rest of the tree. Is this branch really part of the game? Why doesn’t it share anything with the rest of the tree/game?
  • Define skill progression sequence. What can you add/upgrade to different parts of the tree?
  • Context for defining and differentiating character classes and archetypes
  • Identify where to break the player’s decision-making structure. Give exceptions so that the gameplay stays fresh.

How do we engage more of the player’s mind?

Player must do something similar to pathfinding:

Cognitive challenge of navigating the environment. Make the environment move (platformer), parallel worlds, dimensionality, reverse the flow of time (Braid), change the topology of the world (Portal).

Planning in a dynamic world in tactics/strategy. Player must do something similar to influence maps. Change the topology of the world to change the way the player must approach the influence map. Star systems are a network topology rather than a 2D grid.

Classifiers

Using inputs to make decisions.

Players don't use a specific decision tree structure but rather a list of rules (i.e. a rule-based system or expert system). ID3 and C4.5 can take raw data and construct a decision tree.

"Select Target" is a great example of a classifier. What are all the inputs that we would use to determine which target we would attack when we switch from multiple targets to a single target in the WoW BT example above?
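A minimal sketch of that pipeline (the features, data, and labels are invented; scikit-learn's CART tree with an entropy criterion stands in here for ID3/C4.5):

    from sklearn.tree import DecisionTreeClassifier, export_text

    FEATURES = ["target_health", "target_distance", "target_is_healer", "threat_to_me"]

    # Each row is one logged "select target" decision by a (hypothetical) player.
    X = [
        [0.9, 5.0, 0, 0.2],
        [0.3, 8.0, 0, 0.7],
        [0.8, 3.0, 1, 0.1],
        [0.2, 2.0, 0, 0.9],
    ]
    y = [0, 1, 1, 1]    # 0 = ignored that target, 1 = attacked it

    tree = DecisionTreeClassifier(criterion="entropy", max_depth=3).fit(X, y)
    print(export_text(tree, feature_names=FEATURES))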

Emergent Gameplay

"Emergent gameplay refers to complex situations in a video game that emerge from the interaction of relatively simple game mechanics." – Wikipedia

  • Give players lots of “verbs”
  • Present obstacles to the players
  • Let players be creative to solve the obstacles

Emergent possibilities can be overwhelming for the designers, who must test all the possible uses of the verbs.

“If you haven’t tested it, you have no way of knowing whether it will cause fun or frustration.”

So why is emergent gameplay fun? Because it is all about planning – which is a cognitive challenge that works our mind. It is just like the various forms of path planning – just not through the geometry of the world, but rather through the possibility space.

Use a planner to simulate all the possible ways of chaining verbs together. What can the player possibly discover? By chaining the verbs together in a planner sort of way, we can let the AI simulate the solving of puzzles.
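A toy sketch of that planner idea (the domain and verbs are invented): treat each verb as a state transition and breadth-first search over chains of verbs to see which game states a player could actually reach.

    from collections import deque

    # Each verb: (facts it requires, facts it adds). BFS over verb chains
    # enumerates what a player could plausibly discover.
    VERBS = {
        "pick_up_plank": (frozenset(), frozenset({"has_plank"})),
        "bridge_gap": (frozenset({"has_plank"}), frozenset({"gap_bridged"})),
        "cross_gap": (frozenset({"gap_bridged"}), frozenset({"across"})),
    }

    def reachable_plans(start_facts, goal_fact, max_depth=5):
        frontier = deque([(frozenset(start_facts), [])])
        seen = {frozenset(start_facts)}
        plans = []
        while frontier:
            state, plan = frontier.popleft()
            if goal_fact in state:
                plans.append(plan)
                continue
            if len(plan) >= max_depth:
                continue
            for verb, (needs, adds) in VERBS.items():
                if needs <= state and (state | adds) not in seen:
                    seen.add(state | adds)
                    frontier.append((state | adds, plan + [verb]))
        return plans

    print(reachable_plans(set(), "across"))
    # [['pick_up_plank', 'bridge_gap', 'cross_gap']]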

Just like game tree, but drawn as a graph because there are multiple paths to any given game state.

Don’t enumerate the entire game tree. Do it by sections first.

Competitive planning (e.g. TF2). Each team is doing their own plan. The cognitive challenge is not just coming up with your own (team’s) plan but rather matching your plan to that of the opposing team as well. What is the zone of control between the two plans? It’s similar to the star system topology. Building an influence map over the state space topology.

The engagement of TF2 is based on the complexity of the two teams struggling against each other in that massive, dynamic state space.

Conclusion

If we can build structures in the player’s mind, we can use our AI structures to analyze them.

What kind of cognitive challenges can we create? AI gives us answers around which we can design our experiment.

Think about how you make decisions. What is the structure? What are the weights? How would you design the AI for what you do in your everyday life? What parts of your mind are NOT being used?

Post-Conference Reflection

Paul’s presentation was interesting in that it was using our knowledge and understanding of classic game AI algorithms and techniques to expose pros and cons of game design. Rather than talking about how to construct behaviors of game characters, he was using game AI to construct an analogue of the behaviors of a game player.

While the knowledge that this exposes is important, I enjoyed his talk for another reason. I felt that the exercise of putting one’s own actions into the framework of AI is something that more designers and programmers need to do. In fact, it was my excitement in this area that led Paul to add his closing entreaty to the audience… “analyze your own decisions – how would you design an AI algorithm for what you just did or are about to do.” I think that his example (the BT of his WoW play) was an excellent example of walking through this process.
