Posts Tagged ‘agent based AI’

Flanking and Cover and Flee! Oh my!

Monday, June 13th, 2011

I was browsing through my Google alerts for “Game AI” and this jumped out at me. It was a review of the upcoming Ghost Recon: Future Soldier on Digital Trends (who, TBH, I hadn’t heard of until this). The only bit about the AI that I saw was the following paragraph:

The cover system is similar to many other games, but you can’t use it for long. The environments are partly destructible, and hiding behind a concrete wall will only be a good idea for a few moments. The enemy AI will also do everything it can to flank you, and barring that, they will fall back to better cover.

There is a sort of meta-reason I find this exciting. First, from a gameplay standpoint, having enemies that use realistic tactics makes for a more immersive experience. Second, on the meta level, this breaks from a meme that has been plaguing the industry for a while now. Every time someone suggested that enemies could, and actually should, flank the player, there was a rousing chorus of “but our players don’t want to be flanked! It’s not fun!”

“but our players don’t want to be flanked!”

This mentality had developed a sort of institutional momentum that seemed unstoppable for a while. Individuals, when asked, thought it was a good idea. Players, when blogging, used it as an example of how the AI was stupid. However, there seemed to be a faceless, nebulous design authority that people cited… “it’s not how we are supposed to do it!”

What are we supposed to do?

One of the sillier arguments I heard against having the enemy flank the player and pop him in the head is that “it makes the player mad”. I’m not arguing against the notion that the player should be mad at this… I’m arguing against the premise that “making the player mad” is altogether unacceptable.

“…it makes the player mad.”

In my lecture at GDC Austin in 2009, “Cover Me! Promoting MMO Player Interaction Through Advanced AI” (pdf 1.6MB), I pointed out that one of the reasons that people prefer to play online games against other people is the dynamic, fluid nature of the combat. There is a constant ebb and flow to the encounter with a relatively tight feedback loop. The enemy does something we don’t expect and we must react to it. We do something in response that they don’t expect and now they are reacting to us. There are choices in play at all times… not just yours, but the enemy’s as well. And yes, flanking is a part of it.

It builds tension in my body that is somewhat characteristic of combat.

In online games, if I get flanked by an enemy (and popped in the head), I get mad as well… and then I go back for more. The next time through, I am a little warier of the situation. I have learned from my prior mistake and am now more careful. It builds a tension in my body that, though I have never been in combat, I have to assume is somewhat characteristic of it. Not knowing where the next enemy is coming from is part of the experience. Why not embrace it?

Something to fall back on…

One would assume some level of self-preservation in the mind of the enemy…

The “fall-back” mechanic is something that is well-documented through Damián Isla’s lectures on Halo 3. It gives a more realistic measure of “we’re winning” than simply mowing through a field of endless enemies. Especially in human-on-human combat, where one would assume some level of self-preservation in the mind of the enemy, having them fall back instead of dying mindlessly is a perfect balance between the two often contradictory goals of “survival” and “achieving the goal”. It is this balance that makes the enemy feel more “alive” and even “human”.
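As a rough illustration of how small that balance can be in code, here is a minimal sketch of an agent weighing “keep fighting” against “fall back.” The inputs, weights, and function names are invented for the example; they are not drawn from Halo or any shipped game.

    def choose_stance(health, allies_nearby, enemy_pressure, has_fallback_cover):
        # Desire to press the attack grows with health and nearby support.
        fight_score = 0.6 * health + 0.4 * min(allies_nearby / 3.0, 1.0)

        # Desire to fall back grows with incoming pressure and dwindling health,
        # but only matters if there is somewhere better to go.
        flee_score = 0.7 * enemy_pressure + 0.5 * (1.0 - health)
        if not has_fallback_cover:
            flee_score *= 0.25  # nowhere to run; grit the teeth and hold

        return "fall_back" if flee_score > fight_score else "hold_position"

    # A wounded enemy under fire, with better cover behind him, chooses to live:
    print(choose_stance(health=0.3, allies_nearby=1, enemy_pressure=0.8,
                        has_fallback_cover=True))  # -> "fall_back"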

If enemies simply fight to the death, the implication is that “they wanted to die“.

Often, if enemies simply fight to the death, the implication is that “they wanted to die”. Beating them, at that point, is like winning against your dad when you were a kid. You just knew that he was letting you win. The victory didn’t feel as good for it. In fact, many of us probably whined to our fathers, “Dad! Stop letting me win! I wanna win for real!” Believe it or not, on a subconscious level, this is making the player “mad” as well.

They want to win but you are making them choose to live instead!

By giving our enemies that small implication that they are trying to survive, we send the player the message that “you are powerful… they want to win but you are making them choose to live instead!”

Here’s hoping that we can actually move beyond this odd artificial limitation on our AI.

Fritz Heckel’s Reactive Teaming

Tuesday, May 25th, 2010

Fritz Heckel, a PhD student in the Games + Learning Group at UNC Charlotte, posted a video (below) on the research he has been doing under the supervision of G. Michael Youngblood. He has been working on using subsumption architectures to create coordination among multiple game agents.

When the video first started, I was a bit confused in that he was simply explaining an FSM. However, when the first character shared a state with the second one, I was a little more interested. Still, this isn’t necessarily the highlight of the video. As more characters were added, they split the goal of looking for a single item amongst themselves by partitioning the search space.

This behavior certainly could be used in games… for example, with guards searching for the player. However, this is easily solved using other architectures. Even something as simple as influence mapping could handle it. In fact, Damián Isla’s occupancy maps could be tweaked accordingly to allow for multiple agents in a very life-like way. I don’t know what Fritz is using under the hood, but I have to wonder if it isn’t more complicated.
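For what it’s worth, here is a quick sketch of how a shared occupancy map could split a search among several agents with no negotiation at all. The cell values are the estimated probability that the target is in that cell; all of the names and numbers are illustrative rather than taken from Isla’s or Heckel’s work.

    def pick_search_targets(occupancy, searchers):
        """occupancy: dict of cell -> probability; searchers: list of agent ids.
        Each searcher simply claims the best cell nobody else has claimed yet."""
        claimed = {}
        for agent in searchers:
            best = max((c for c in occupancy if c not in claimed.values()),
                       key=lambda c: occupancy[c], default=None)
            claimed[agent] = best
        return claimed

    def mark_searched(occupancy, cell):
        # Once a cell is cleared, its probability drops to zero and the
        # remaining belief is spread back across the unsearched cells.
        occupancy[cell] = 0.0
        total = sum(occupancy.values())
        if total > 0:
            for c in occupancy:
                occupancy[c] /= total

    rooms = {"lobby": 0.5, "office": 0.3, "vault": 0.2}
    print(pick_search_targets(rooms, ["guard_1", "guard_2"]))
    # -> {'guard_1': 'lobby', 'guard_2': 'office'}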

Obviously, his searching example was only a simple one. He wasn’t setting out to design something that allowed people to share a searching goal, per se. He was creating an architecture for cooperation. This, too, has been done in a variety of ways. Notably, Jeff Orkin’s GOAP architecture from F.E.A.R. did a lot of very robust squad coordination. Many sports simulations do cooperation as well, but that tends to be more playbook-driven. Fritz seems to be doing it on the fly without any sort of pre-conceived plan or even pre-known methods by the eventual participants.

From a game standpoint, it seems that this is an unnecessary complication.

In a way, it seems that the goal itself is somewhat viral from one agent to the next. That is, one agent in effect explains what it is that he needs the others to do and then parses it out accordingly. From a game standpoint, it seems that this is an unnecessary complication. Since most of the game agents would be built on the same codebase, they would already have the knowledge of how to do a task. At this point, it would simply be a matter of having one agent tell the other “I need this done,” so that the appropriate behavior gets switched on. And now we’re back to Orkin’s cooperative GOAP system.
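A minimal sketch of that “I need this done” hand-off might look like the following. Everything here (the Task type, the agent fields, the selection rule) is hypothetical and only meant to show that delegation can be a message plus a behavior switch rather than a shared plan.

    from dataclasses import dataclass

    @dataclass
    class Task:
        kind: str        # e.g. "search_room", "cover_door"
        location: tuple

    class Agent:
        def __init__(self, name, position, known_behaviors):
            self.name = name
            self.position = position
            self.known_behaviors = known_behaviors  # same codebase, same skills
            self.current_task = None

        def can_do(self, task):
            return task.kind in self.known_behaviors and self.current_task is None

    def distance(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    def delegate(task, squad):
        # Pick the closest idle agent that already knows how to do the task.
        candidates = [a for a in squad if a.can_do(task)]
        if not candidates:
            return None
        chosen = min(candidates, key=lambda a: distance(a.position, task.location))
        chosen.current_task = task  # switch on a behavior the agent already has
        return chosen

    squad = [Agent("alpha", (0, 0), {"cover_door"}),
             Agent("bravo", (5, 5), {"search_room"})]
    print(delegate(Task("search_room", (6, 6)), squad).name)  # -> "bravo"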

On the whole, a subsumption architecture is an odd choice. Alex Champandard of AIGameDev pointed out via Twitter:

@fwph Who uses subsumption for games these days though? Did anyone use it in the past for that matter?

That’s an interesting point. I have to wonder if, as is the case at times with academic research, it is not a case of picking a tool first and then seeing if you can invent a problem to solve with it. To me, a subsumption architecture seems like it is simply the layered approach of an HFSM married with the modularity of a planner. In fact, there has been a lot of buzz in recent years about hierarchical planning anyway. What are the differences… or the similarities, for that matter?
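To make the comparison concrete, this is roughly all that “subsumption” amounts to at its simplest: a fixed priority stack of small behaviors where a higher layer, whenever it has something to say, overrides everything below it. The behaviors and world fields below are made up for the example.

    def avoid_danger(world):
        return "take_cover" if world.get("under_fire") else None

    def pursue_goal(world):
        return "move_to_objective" if world.get("objective") else None

    def wander(world):
        return "wander"  # the lowest layer always has an answer

    LAYERS = [avoid_danger, pursue_goal, wander]  # highest priority first

    def subsumption_step(world):
        # The first layer that produces an action suppresses the rest.
        for layer in LAYERS:
            action = layer(world)
            if action is not None:
                return action

    print(subsumption_step({"under_fire": True, "objective": (10, 4)}))  # take_cover

Seen that way, it does read a lot like the layering of an HFSM with hand-written modular behaviors, which is why the question of what it buys over hierarchical planning is a fair one.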

Regardless, it is an interesting, if short, demo. If this is what he submitted to present at AIIDE this fall, I will be interested in seeing more of it.

Choices: Not Just for Players Anymore

Friday, April 10th, 2009

In a recent opinion article by James Portnow entitled The Problem Of Choice, the idea was posited that there are two types of decisions that a player can be faced with in a game: “problems” and “choices”. The former is something that involves a “right answer” such as a mathematically optimal solution. Therefore, theoretically it can be solved. We are all familiar with such challenges in games — especially when designers make them all too transparent.

The other type of decision is the “choice”. These are a little more amorphous in that there is no “right answer”. Games such as Bioshock (i.e. the Little Sisters) have these elements, but others such as Black & White and the two Fable games are rife with them. In fact, the entire game mechanic is built upon the idea of “make a choice and change the whole experience.”

While I agree with the excellent points that James made, I believe that this same mentality can be extended to the realm of AI as well. In fact, I made this point in my lecture, Breaking the Cookie-Cutter: Modeling Individual Personality, Mood, and Emotion in Characters, at the AI Summit at GDC a few weeks ago. Specifically, I suggested that the incorporation of differences between characters can enable game design choices for us as developers which, in turn, enables gameplay choices for our audience. However, it is not simply the incorporation of personality, mood, and emotion that does this. It is often even simpler than that.

As programmers, we deal in a world of algorithms. Algorithms are, by definition, a series of steps designed to solve a particular problem. Even the ubiquitous yet humble A* pathfinding algorithm is sold as guaranteeing to “return the shortest path to a goal if a path exists.” The emphasis is mine. It returns the shortest path — the best decision. Now that we are using A* for other purposes, such as sifting through planning algorithms to decide on non-path-related actions, we are subscribing to the same approach. What is the best action I can take at this time? Unfortunately, that leads our AI agents along the same path as the player… “how can I solve this game?” The simple fact that our agents are looking for the one solution necessarily limits the variety and depth that they are capable of exhibiting.

The irony involved here is that, in designing things this way, we cause our agents to approach something that should be a choice (as defined by Portnow) and turn it into a problem (i.e. something that can be solved). Whether there is any “best” decision or not, our agents believe that there is… “belief” in this case coming in the form of whatever decision algorithm we happened to design into their little brains.

The solution to this is not necessarily technical. It is more of a willingness by designers and AI programmers to allow our agents to either

a) not make the “best” decision all the time, or

b) include decisions in the design to which there is no “best” solution at all.

Unfortunately, we have established a sort of industry meme that “we can’t allow our agents to do something that is not entirely predictable”. We are afraid of losing control. Here’s a startling tip, however… if we can predict what our agents are going to do, so can our players! And I nominate predictability as one of the worst air leaks in the tires of replayability.
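One simple way to loosen that grip, sketched below with invented scores, is to keep scoring the options as usual but make a weighted random pick from the top few instead of a straight argmax. The agent still behaves reasonably; it just stops being a solved equation.

    import random

    def choose_action(scored_options, top_n=3):
        """scored_options: list of (action, score) pairs. Only the top_n
        candidates are eligible, so the pick stays sensible but not predictable."""
        ranked = sorted(scored_options, key=lambda o: o[1], reverse=True)[:top_n]
        actions = [a for a, _ in ranked]
        weights = [s for _, s in ranked]
        return random.choices(actions, weights=weights, k=1)[0]

    options = [("flank_left", 0.72), ("flank_right", 0.70),
               ("hold_cover", 0.55), ("charge", 0.20)]
    print(choose_action(options))  # usually a flank, occasionally hold_cover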

One of the quotes that I used in that lecture and in my book on behavioral AI is from Sid Meier, who suggested that a game is “a series of interesting choices”. It is a natural corollary that in order for the player to make interesting choices, he needs interesting options (not math problems). One of the ways that we can present those interesting options is to allow our agents to make interesting choices (not solve problems) as well.

Gamasutra Article on Intelligent Mistakes

Wednesday, March 18th, 2009

There’s a nice, if incomplete, article on Gamasutra today by Neversoft’s Mick West titled Intelligent Mistakes: How to Incorporate Stupidity Into Your AI Code. It’s not a new subject, certainly, but what caught my eye was the fact that he used the game of Poker as one of his examples.

In the first chapter of my book, “Behavioral Mathematics for Game AI,” I actually use Poker as a sort of “jumping off point” for a discussion on why decision-making in AI is an important part of the game design. I compare it to games such as Tic-Tac-Toe, where the only decision is “if you want to not lose, move here,” Rock-Paper-Scissors, where (for most people) the decision is essentially random, and Blackjack, where your opponent (the dealer) has specific rules that he has to play by (i.e. hit below 17, stand on 17 or above).

Poker, on the other hand (<- that's a joke), is interesting because your intelligent decisions have to incorporate the simple fact that the other player(s) is making intelligent decisions as well. It's not enough to look at your own hand and your own odds and make a decision from that. You have to take into account what the other player is doing and attempt to ferret out what he might be thinking. Conversely, your opponent must have the same perceptions of you and what it is you might be thinking and, therefore, holding in your hand.

Thankfully, there is no “perfect solution” that a Poker player can follow, due to the imperfect information. However, in other games, there are “best options” that can be selected. If our agents always select the best options, not only do we run the risk of making them too difficult (“Boom! Head shot!”), but they also all tend to look exactly the same. After all, put 100 people in the exact same complex situation and there will be plenty of different reactions. In fact, very few of them will choose what might be “the best” option. (I cover this extensively in Chapter 6 of my book, “Rational vs. Irrational Behavior.”)

Our computerized AI algorithms, however, excel at determining “the best” something, whether it be an angle for a snooker shot (the author’s other example), a head shot, or the “shortest path” as determined by A*. After all, that is A*’s selling point… “it is guaranteed to return the shortest path if one exists.” Congratulations… you are inhumanly perfect. Therefore, as the author points out in his article, generating intelligent-looking mistakes is a necessary challenge. Thankfully, in later chapters of “Behavioral Mathematics…” I propose a number of solutions to this problem that can be easily implemented.
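As a toy example of building a believable error into one of those “perfect” answers, here is a sketch of an aimed shot where the error shrinks as skill rises. The skill model and numbers are purely illustrative, not anything taken from Mick West’s article.

    import math
    import random

    def aimed_angle(perfect_angle_deg, skill):
        """skill in [0, 1]: 1.0 reproduces the inhumanly perfect shot,
        lower values add increasingly human-looking error."""
        max_error_deg = 10.0 * (1.0 - skill)
        return perfect_angle_deg + random.gauss(0.0, max_error_deg / 3.0)

    perfect = math.degrees(math.atan2(3.0, 20.0))  # target offset 3 m at 20 m range
    print(aimed_angle(perfect, skill=0.6))  # close, but not a head shot every time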

Anyway, I find it an interesting quandary that we have to approach behavior from both sides. That is, how do we make our AI more intelligent and, at the same time, how do we make our AI less accurate? Kind of an odd position to be in, isn’t it?

Behavioral Mathematics for Game AI description

Tuesday, February 10th, 2009


Some of you have asked me questions about my new book, “Behavioral Mathematics for Game AI.” The information that is on Amazon and other retailers is not up to date. For one, there is no cover picture. There’s a good reason for that. The cover is still being laid out and will be uploaded to the resellers soon. In the meantime, here’s the cover art that I developed (click for a nicely detailed larger version).

The description text that you see on the resellers’ sites is based on what I originally submitted to my publisher when I began the writing process. Things have changed a little bit since then. The following is likely what we are putting on the back of the book and will eventually be updated on Amazon, etc.

Human behavior is never an exact science. As a result, the design and programming of artificial intelligence that seeks to replicate human behavior is already an uphill battle. Usually, the answers cannot be found in the sterile algorithms that are often the focus of artificial intelligence programming. However, by analyzing why humans (and other sentient beings) behave the way we do, we can break the process down into increasingly smaller components. We can model many of those individual components in the language of logic and mathematics and then reassemble them into larger, more involved decision-making processes.

Drawing from classical game theory, this book covers both the psychological underpinnings of human decisions and the mathematical modeling techniques that AI designers and programmers can use to replicate them. With examples from both “real life” and game situations, the author explores topics such as the fallacy of “rational behavior,” utility, and the inconsistencies and contradictions that human behavior often exhibits. Readers are shown various ways of using statistics, formulas, and algorithms to create believable simulations and to model these dynamic, realistic, and interesting behaviors in video games.

Additionally, the book introduces a number of tools that the reader can use in conjunction with standard AI algorithms to make it easier to utilize the mathematical models. Lastly, the programming examples and mathematical models shown in the book are downloadable, allowing the reader to explore the possibilities in their own creations.

If you have any other questions about the book, feel free to ask! (And yes, it should be available at the Charles River Media booth at the Game Developers Conference next month. I’ll be there quite a bit… flag me down!)

Behavioral Mathematics for Game AI

Wednesday, January 28th, 2009

For those of you that have been wondering why this blog has been a little quiet in the past 6 months or why I haven’t progressed on Airline Traffic Manager, I present to you my excuse…

My new book, Behavioral Mathematics for Game AI, is now available for pre-order on Amazon.com (and dozens of other resellers). Some of the details on the site are incorrect since the book is being finished up. As my publisher (Charles River Media/Cengage/Course PTR) updates their information (including art for a cover that we haven’t even designed yet), the information at the resellers will be updated as well.

As it is scheduled now, the book will be available in mid-March. Thankfully, that means that it will also be on the shelf in time for GDC. Considering my role in the inaugural AI Summit, that will make for an exciting week for me.

Rather than the typical AI book, which deals with architectures such as state machines and pathfinding, I focus on behavioral AI. That is, what goes into a decision, and how can we make those decisions more believable?

I begin by covering a lot of classical game and decision theory and explain how those examples relate to the video game world. I cover utility theory and marginal utility, how they manifest in the real world, and how we can represent them in our AI agents. There’s a healthy dose of technical material as well. There are chapters on constructing mathematical formulas and algorithms to help replicate behavioral patterns. Lastly, I offer solutions to prevent some of the problems that hamstring our AI agents. Throughout the book, I tried to make it an easy and even fun read. How often have you read a technical book that would keep you awake chuckling rather than put you to sleep with a headache? After all, we do program games, right?

All in all, it has been an enjoyable project that forced me to re-analyze things that we all often take for granted as AI programmers. I hope that you enjoy it as well.

AIGameDev Column: The Art of AI Sidekicks

Tuesday, June 3rd, 2008

Batman and Robin make an appearance in my Developer Discussion column at AIGameDev.com this week. In The Art of AI Sidekicks: Making Sure Robin Doesn’t Suck, I touch on the recent shift towards providing consistent, engaging sidekicks for the player. Certainly there are unique challenges in an AI agent that is so ever-present. To lift a segment from the column:

If we are going to have an AI that’s tagging along behind us for hours on end, wouldn’t it be better for us to love him/her/it? Let’s face it, if you are playing 10 or 20 hours of game content, any form of repetitive AI may have you digging through the manual or scouring cheat codes online in order to find the “slap your sidekick upside the head” control. You can’t simply get away with seven seconds… or even 5 minutes of believable behavior. Beyond that, the sidekick needs to be more than just something you are entertained and amused by. You need to be able to depend on it… as if it were your lifelong partner.

The ensuing discussion spurred many great comments. Take a gander at it and chime in with your opinion (or a solution?).

An FPS AI Call To Action

Wednesday, April 23rd, 2008

I was pointed to an interesting little observation/rant/commentary on the AI of FPS games on Blogo Profundo. Just a snippet heavily trimmed down to his main points:

I’m no expert here, so maybe one of my forty thousand readers can chime in, but these are just some problems I’ve noticed.

… Over the years, FPS’s have definitely evolved both in graphics and mission complexity. … One thing that has always been subpar however is the computer characters’ AI.

… First criticism is that they tend to act as solitary agents, and don’t usually form up into squads and act tactically like a rifle-team. There’s no covering fire, there’s no flanking, there’s no suppression and calling in air-strikes.

… Second criticism is that even when they are acting as solitary agents, they’re dumb as hell. They usually don’t hear you, they don’t make basic predictions about what you’re likely to do, and they don’t use the terrain to their advantage.

There’s more to it than that. In fact, he asks some excellent questions of the development community. Certainly, the suggested solutions are a bit simplistic but it does show what I believe is the consensus of our audience… that they are grumpy about our lack of progress and/or dedication to realistic AI.

We hear you, man… we’re working on it!

Writing AI is Like Parenting

Sunday, April 6th, 2008

Ted Vessenes wrote a nifty little post on his blog where he compared designing and programming AI to being a parent. Here’s the opening paragraph:

“Writing artificial intelligence is a lot like being a parent. It requires an unbelievable amount of work. There are utterly frustrating times where your children (or bots) do completely stupid things and you just can’t figure out what they were thinking. And there are other times they act brilliantly, and all the effort feels satisfying and well spent.”

I have to agree with a lot of the points he makes in his post. I would like to take the analogy one step farther.

I’ve occasionally made the point about both parenting and AI that your job is not to define what your progeny should do but to convey an understanding of why. If, as a parent, you tell your child not to run in the street, they will hopefully carry that lesson into the future. However, they may not apply that same edict to driveways, parking lots, or any other places where they could get plowed over by a car. This is analogous to the scripted AI methodology. However, if you explain the why of the situation – i.e. “be careful anywhere that cars are moving because the driver may not see you in time to stop and you could get badly hurt” – then the simple rule can be applied to any situation where there are cars (or even car-like objects). This, of course, maps over to rule-based systems or even planning systems.

However, going back to Ted’s point, it is an interesting similarity to put all those rules into place and hope that your little bots realize the appropriate situations in which to use them. I actually wrote about this scary process in my weekly column over at AIGameDev.

Anyway, if you are an AI developer, I hope that you are blessed with many children who all grow up to be accomplished in their chosen lives (or deaths).

F.E.A.R. sequel promises "visual density"

Wednesday, January 30th, 2008

I noticed this GamePro blurb about the upcoming sequel to F.E.A.R. Here’s an excerpt…

“The most obvious difference that will hit the player right away is in the visual density of the world,” said Mulkey. “F.E.A.R. looked really great, but where F.E.A.R. would have a dozen props in a room to convey the space, Project Origin will have five times that much detail.

“Of course, this will only serve to further ratchet up that ‘chaos of combat’ to all new levels with more breakables, more debris, more stuff to fly through the air in destructive slow motion beauty.”

OK… I can dig that. One thing I noticed as I played through F.E.A.R. is that things were kinda sparse. (I really got tired of seeing the same potted cactus, too.)

The part that I am curious about, however, is this:

… Mulkey says improved enemy behavior is at the top of the list.

“We are teaching the enemies more about the environment and new ways to leverage it, adding new enemy types with new combat tactics, ramping up the tactical impact of our weapons, introducing more open environments, and giving the player the ability to create cover in the environment the way the enemies do,” he says.

Now that is the cool part. When the enemies in the original moved the couches, tables, bookshelves, etc. it was cool… but rather infrequent. I was always expecting them to do more with it. If they are both adding objects to the environment and then “teaching” the agents to actually use those objects, we may see a level of environment interactivity that we’ve never experienced before.

The cool thing about their planning AI structure is that there isn’t a completely ridiculous ramp-up in the complexity of the design. All one need do is tag an object as usable in a certain way and it gets included in the mix. On the other hand, having more objects to use and hide behind does increase the potential decision space quite a bit. It’s like how the decision tree in chess is far greater than that of Tic-tac-toe because there are so many more options. The good news is that the emergent behavior level will go through the roof. The bad news is that it will hit your processor pretty hard. Expect the game to be a beast to run on a PC.
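My guess at what “tag an object and it gets included in the mix” could look like in practice is sketched below: props carry affordance tags, and the action-generation step simply queries for anything nearby with the right tag. This is an illustration of the idea, not how Monolith’s planner is actually wired.

    PROPS = [
        {"name": "bookshelf",  "pos": (4, 2),  "tags": {"cover", "toppleable"}},
        {"name": "table",      "pos": (9, 7),  "tags": {"cover", "flippable"}},
        {"name": "cactus_pot", "pos": (1, 12), "tags": set()},  # just set dressing
    ]

    def usable_props(tag, agent_pos, max_dist=8.0):
        def dist(a, b):
            return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
        return [p for p in PROPS
                if tag in p["tags"] and dist(p["pos"], agent_pos) <= max_dist]

    # An enemy looking for something to flip into cover only considers tagged props:
    print([p["name"] for p in usable_props("flippable", agent_pos=(8, 8))])  # ['table']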

I certainly am looking forward to mucking about with this game!
