IA on AI


What Real Wolves Can Teach Us about Our AI

October 27th, 2011

An article was recently brought to my attention: “Wolves May Not Need to be Smart to Hunt in Packs” from Discover Magazine, which, it seems, was originally reported by New Scientist. Both versions cite a couple of other papers via links in their respective articles, but you can get the gist of what they are talking about from the text of the article itself.

The point is, they have discovered that the complex(-looking) pack hunting behaviors of wolves are not as complex and coordinated as we thought. With just a few very simple autonomous rules, they have duplicated this style of attack behavior in simulations. Specifically,

Using a computer model, researchers had each virtual “wolf” follow two rules: (1) move towards the prey until a certain distance is reached, and (2) when other wolves are close to the prey, move away from them. These rules cause the pack members to behave in a way that resembles real wolves, circling up around the animal, and when the prey tries to make a break for it, one wolf sometimes circles around and sets up an ambush, no communication required.
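Just to show how little machinery those two rules require, here is a minimal sketch of one wolf’s update step. The radii, speed, and 2D tuple representation are my own illustrative assumptions, not values from the paper.

```python
import math

ATTACK_RADIUS = 2.0   # assumed: how close a wolf tries to get to the prey
CROWD_RADIUS  = 3.0   # assumed: a packmate this near the prey counts as "close"

def step_wolf(wolf, prey, pack, speed=0.5):
    """One update of a single wolf using only the two rules from the article."""
    dx, dy = prey[0] - wolf[0], prey[1] - wolf[1]
    dist = math.hypot(dx, dy) or 1e-6

    # Rule 1: move toward the prey until a certain distance is reached.
    move_x, move_y = (dx / dist, dy / dist) if dist > ATTACK_RADIUS else (0.0, 0.0)

    # Rule 2: when other wolves are close to the prey, move away from them.
    for other in pack:
        if other is wolf:
            continue
        if math.hypot(other[0] - prey[0], other[1] - prey[1]) < CROWD_RADIUS:
            ox, oy = wolf[0] - other[0], wolf[1] - other[1]
            odist = math.hypot(ox, oy) or 1e-6
            move_x += ox / odist
            move_y += oy / odist

    length = math.hypot(move_x, move_y) or 1e-6
    return (wolf[0] + speed * move_x / length, wolf[1] + speed * move_y / length)
```

Run that over a list of wolf positions each tick and the pack spreads out and encircles the prey with no communication at all — the same emergent result described above.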

The comment that brought it to my attention was that biologists have “discovered” something that AI programmers have known for decades — the idea of flocking. Going back to Craig Reynolds’ seminal Boids research (from the 1980s), we as AI programmers have known that simple rules can not only generate the look of complex behavior, but that much of the complex behavior that exists in the world is actually the result of the same “simple rule” model. Even down to the cellular level in the human body — namely the human immune system — autonomous cellular behavior is driven by this mentality.

The key takeaway from this “revelation” about the wolves is not so much that wolves are not as clever as we thought, but rather that there is now legitimacy to using simpler AI techniques to generate emergent behavior. We aren’t “cheating” or cutting corners by using a simple rule-based flocking-like system to do our animal AI… we are, indeed, actually replicating what those animals are doing in the first place.

We could likely get far more mileage out of these techniques in the game space were it not for one major block — the trepidation that many developers feel about emergent behavior. For designers in particular, emergent behavior stemming from autonomous agents means giving up a level of authorial control. While authorial control is necessary and desired in some aspects of game design, there are plenty of places where it is not. By swearing off emergent AI techniques, we may be unnecessarily limiting ourselves and denying our characters — and, indeed, our game world — a level of organic depth.

Incidentally, emergent AI is not limited to the simple flocking-style rule-based systems that we are familiar with and that are discussed with regard to the wolves. Full-blown utility-based systems such as those that I talk about in my book are just an extension of this. The point being, we aren’t specifically scripting the behavior but rather defining meanings and relationships. The behavior naturally falls out of those rules. The Sims franchise is known for this. As a result, many people are simply fascinated to sit back and watch things unfold without intervention. The characters not only look like they are “doing their own thing” but also look like they are operating together in a community… just like the wolves are acting on their own and working as part of a team.

So take heart, my AI programmer friends and colleagues. Academic biologists may only now be getting the idea — but we’ve been heading down the right track for quite some time now. We just need to feel better about doing it!


AI Game Programmers Guild

October 15th, 2011

About a month or two ago, I launched something that I did not bother to mention here. Mostly, that is because I have been fairly busy with other things. Also, I have found that most of the small announcements I have to make, I have been making on Twitter. However, realizing that there are people who subscribe to this blog, or will visit in the future, who may not follow me on Twitter, I figured it was worth a mention here.

Over three years ago, Steve Rabin and I formed the AI Game Programmers Guild. However, it was only a few months ago that we decided that we needed to put up a web site. Actually, to be more accurate, we decided to put up a web site over a year ago. We just never got around to it. Thankfully, our good friend Steve Woodcock decided that he was no longer going to be able to keep up his website. He also had a really nifty domain name: GameAI.com. We initiated a transfer of the domain so that I could start taking care of it. I then set about setting up the new web site for the guild.

The guild web site has many links to all sorts of information. We are hoping to continue to expand that over the years. In particular, links to people’s presentations at places such as GDC will be hosted there, along with links to papers on artificial intelligence, especially in the game industry. There is a news feed on that site as well where I will post announcements as we add more information over time. When appropriate, I will try to remember to cross-post on this blog as well.


Flanking and Cover and Flee! Oh my!

June 13th, 2011

I was browsing through my Google alerts for “Game AI” and this jumped out at me. It was a review of the upcoming Ghost Recon: Future Soldier on Digital Trends (which, to be honest, I hadn’t heard of until this). The only bit about the AI that I saw was the following paragraph:

The cover system is similar to many other games, but you can’t use it for long. The environments are partly destructible, and hiding behind a concrete wall will only be a good idea for a few moments. The enemy AI will also do everything it can to flank you, and barring that, they will fall back to better cover.

There are a couple of reasons I find this exciting. First, from a gameplay standpoint, having enemies that use realistic tactics makes for a more immersive experience. Second, on the meta level, this breaks from a meme that has been plaguing the industry for a while now. Every time someone suggested that enemies could—and actually should—flank the player, there was a rousing chorus of “but our players don’t want to be flanked! It’s not fun!”

“but our players don’t want to be flanked!”

This mentality had developed a sort of institutional momentum that seemed unstoppable for a while. Individual developers, when asked, thought flanking was a good idea. Players, when blogging, cited the lack of it as an example of how the AI was stupid. However, there seemed to be a faceless, nebulous design authority that people cited… “it’s not how we are supposed to do it!”

What are we supposed to do?

One of the sillier arguments I heard against having the enemy flank the player and pop him in the head is that “it makes the player mad”. I’m not arguing against the notion that the player should be mad at this… I’m arguing against the premise that “making the player mad” is altogether unacceptable.

“…it makes the player mad.”

In my lecture at GDC Austin in 2009 (“Cover Me! Promoting MMO Player Interaction Through Advanced AI”, pdf 1.6MB), I pointed out that one of the reasons that people prefer to play online games against other people is the dynamic, fluid nature of the combat. There is a constant ebb and flow to the encounter with a relatively tight feedback loop. The enemy does something we don’t expect and we must react to it. We do something in response that they don’t expect and now they are reacting to us. There are choices in play at all times… not just yours, but the enemy’s as well. And yes, flanking is a part of it.

It builds tension in my body that is somewhat characteristic of combat.

In online games, if I get flanked by an enemy (and popped in the head), I get mad as well… and then I go back for more. The next time through, I am a little warier of the situation. I have learned from my prior mistake and am now more careful. It builds a tension in my body that, never having been in combat myself, I have to assume is somewhat characteristic of the real thing. Not knowing where the next enemy is coming from is a part of the experience. Why not embrace it?

Something to fall back on…

One would assume some level of self-preservation in the mind of the enemy…

The “fall-back” mechanic is something that is well-documented through Damián Isla’s lectures on Halo 3. It gives a more realistic measure of “we’re winning” than simply mowing through a field of endless enemies. Especially in human-on-human combat where one would assume some level of self-preservation in the mind of the enemy, having them fall back instead of dying mindlessly is a perfect balance between the two often contradictory goals of “survival” and “achieving the goal”. It is this balance that makes the enemy feel more “alive” and even “human”.
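To make that balance concrete, here is a tiny sketch of how an agent might weigh “achieving the goal” against “survival” when deciding whether to hold or fall back. The morale formula and the thresholds are purely illustrative assumptions, not anything from Halo 3.

```python
def choose_position(health_fraction, allies_near, enemies_near,
                    hold_position, fallback_position):
    """Crude 'survival vs. objective' balance: hold ground while the local
    fight looks winnable, retreat to a defensible point when it doesn't.
    All thresholds are illustrative assumptions."""
    local_odds = allies_near / max(1, enemies_near)   # rough sense of "we're winning"
    morale = health_fraction * local_odds

    if morale > 0.5:
        return hold_position        # pressing the objective still looks viable
    return fallback_position        # preserve the unit instead of dying mindlessly

# Example: wounded and outnumbered, so the agent gives ground.
print(choose_position(0.4, allies_near=1, enemies_near=3,
                      hold_position="courtyard", fallback_position="gatehouse"))
```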

If enemies simply fight to the death, the implication is that “they wanted to die“.

Often, if enemies simply fight to the death, the implication is that “they wanted to die”. Beating them, at that point, is like winning against your dad when you were a kid. You just knew that he was letting you win. The victory didn’t feel as good for it. In fact, many of us probably whined to our fathers, “Dad! Stop letting me win! I wanna win for real!” Believe it or not, on a subconscious level, this is making the player “mad” as well.

They want to win but you are making them choose to live instead!

By giving our enemies that small implication that they are trying to survive, the player is given the message that “you are powerful… they want to win but you are making them choose to live instead!”

Here’s hoping that we can actually move beyond this odd artificial limitation on our AI.


Pondering the Purpose of E3

June 7th, 2011
“can I see the gameplay, please?”

I’m about to leave my hotel for my daily drive into the LA Convention Center. I’ve been watching the prodigious flow of information on Twitter (and, by proxy, elsewhere on the internet) the past 36 hours now — the press conferences, announcements, etc. — and I’ve noticed the yearly trend occurring once again. Juxtaposed alongside the slobbering exclamations of “that looks so cool” are the more cynical comments along the lines of “gee thanks for showing us a cinematic trailer… can I see the gameplay, please?” These comments are well-deserved. After all, we are selling games, right?

For movies, doing a trailer is a touchy thing… you want people to understand the premise of the movie but you can’t venture into the territory of spoilers. The longer you make the trailer video, the closer you get to recreating the entire movie. One could claim that for some bad movies (esp. certain action flicks or romantic comedies), all the highlights are in the trailer and the rest is simply filler.

For games, however, people don’t simply want to know the premise. They want to know how they will interact with that premise. Some things have become so standard that the description of the genre is all we need. For example, is it going to be 1st person, 3rd person, shooter, RPG, turn-based, real-time, etc.? Those are terms we all know and can somewhat infer other aspects from. Still, we would like to see something other than the equivalent of a movie preview. What am I going to be doing? What does it look like?

“I really don’t like playing game demos.”

So… off we go to the expo floor where we can actually do the “hands-on” play, right? But here’s where I really start to diverge from the masses that descend upon the LACC every year. I really don’t like playing game demos. I get very little out of them. There… I said it. So sue me.

Perhaps some of it is due to the fact that the types of games I like involve components that can’t really be experienced in a short time. I like to explore game mechanics. Button mashing doesn’t do it. I like more cerebrally engaging tactics and strategy. You can’t do that in 15 minutes on the expo floor. I like character development. At E3, I can learn less about a character than I learned about the person in my hotel elevator on the way down to breakfast just now. And, given my line of work, I like detailed, realistic character AI. At least this much I can gather from watching over people’s shoulders — which is usually how I elect to experience the E3 show floor.

“You can’t do that in 15 minutes on the expo floor.”

This is obviously something that is simply endemic to the delivery medium. Much like the movie trailers, the only way to “get” some games would be to play them entirely. The more that people play the game demo, the less of the actual game remains. At some point you have to put down the controller and go on faith. But how long is long enough? And given the number and variety of titles on the show floor, how long to dedicate to each one? I suppose that’s why many people who come to E3 have only a select few things they want to see and play. Unfortunately, to me, that seems to be missing out on some of the point of E3.

Unfortunately, one parallel between movie trailers and game demos such as those found at E3 is that they both cater better to the shallower fare. The shallower a movie is, the better “feel” you can get for the whole thing from a trailer. For more depth on whether or not it is good, you have to read reviews, synopses, etc. The same goes for a game. If all you are after is stunts and explosions (in either medium), then a trailer/demo will let you know what you’re in for.

“It becomes a big psychological game for me.”

In my strategy, I try to look at a little of everything. I want to take the pulse of the industry… not just the giant titles from the mammoth houses but the small off-brand products as well. What are the little games that people are making? And why? What are the niches that are out there? Can I mentally predict who is going to fail? Do some of the titles strike me as being completely and utterly useless? And is that a function of the title or that I simply don’t like it? If it is me that is out of sync, can I step outside myself and determine what kind of player this title would be for? It becomes a big psychological game for me.

My kids have asked me over the years, “dad, wouldn’t it be cool to have your game at E3?” To explain, they mean my current labor of love, Airline Traffic Manager. Their blind pride for their father aside, this is simply not going to happen. Even if I made the best airline management simulation on the planet (which I aim to do), it is simply not something that is even possible for people to “get” in the hors d’œuvre-sized morsels that E3 is meant to serve.

Or maybe it will be on the floor some day… and someone will walk by and ask, “I wonder what kind of market an airline management sim would appeal to?”


2011 GDC AI Summit Sessions Posted

January 12th, 2011

All 12 of the sessions for the 2011 GDC AI Summit have been posted on the Game Developers Conference site. Once again, the AI Summit features a staggering array of content from over 20 AI developers.

The full list of sessions can be found here. The schedule will be arranged in the next week or so.

This year, GDC is February 28th through March 4th. Make sure you get registered soon!


Sun Tzu as a Design Feature?

June 5th, 2010

Total War creator, The Creative Assembly, has announced the development of the latest in the line of acclaimed RTS games, Shogun 2. While the Total War franchise has a 10-year history and is fairly well-known for its AI,  this blurb from their web site has spread through the web like an overturned ink well:

Featuring a brand new AI system inspired by the scriptures that influenced Japanese warfare, the millennia old Chinese “Art of War”, the Creative Assembly brings the wisdom of Master Sun Tsu to Shogun 2: Total War. Analysing this ancient text enabled the Creative Assembly to implement easy to understand yet deep strategical gameplay.

Sun Tzu‘s “The Art of War” has been a staple reference tome since he penned it (or brushed it… or whatever) in the 6th century B.C. It’s hard to find many legends that have made it for over 20 centuries. Its applications have been adapted in various ways to go beyond war to arenas such as business and politics. Suffice to say that “The Art of War” lives on as “things that just make sense”.

The problem I have here is that this seems to be more of a marketing gimmick than anything. After all, most of what Sun Tzu wrote should, in various forms, already be in game AI anyway.  To say Sun Tzu’s ideas are unique to him and would never have been considered without his wisdom is similar to saying that no one thought that killing was a bad idea until Moses wandered down the hill with “Thou Shalt Not Kill” on a big ol’ rock. No one stood around saying, “Gee… ya think?” Likewise, Sun Tzu’s advice about “knowing your enemy” is hardly an earth-shattering revelation.

Certainly, there is plenty of game AI out there that could have benefited from a quick read of a summary of Art of War.

Certainly, there is plenty of game AI out there that could have benefited from a quick read of a summary of Art of War. Things like “staying in cover and waiting for the enemy to attack you” come to mind. Of course, in the game world, we call that “camping” (as an individual) or “turtling” (as a group). I can imagine a spirited argument as to whether a camping/turtling AI is necessarily What Our Players Want™, however. It certainly beats the old “Doom model” of “walk straight towards the enemy”.

And what about the Sun Tzu concept of letting your two enemies beat the snot out of each other before you jump in? (I believe there are translations that yielded “dog shit” rather than “snot” but the meaning is still clear.) If you are in an RTS and one enemy just sits and waits for the other one to whack you around a little bit, it’s going to look broken. On the other hand, I admit to doing that in free-for-all Starcraft matches… because it is a brutal tactic!

The problem I have with their claim is that we already do use many of his concepts in game AI.

The problem I have with their claim, however, is that there are many concepts in the Art of War that we already do use in game AI. By looking at Sun Tzu’s chapter headings (or whatever he called them) we can see some of his general ideas:

For ease of reference, I pillage the following list from Wikipedia:

  1. Laying Plans/The Calculations
  2. Waging War/The Challenge
  3. Attack by Stratagem/The Plan of Attack
  4. Tactical Dispositions/Positioning
  5. Energy/Directing
  6. Weak Points & Strong/Illusion and Reality
  7. Maneuvering/Engaging The Force
  8. Variation in Tactics/The Nine Variations
  9. The Army on the March/Moving The Force
  10. The Attack by Fire/Fiery Attack
  11. The Use of Spies/The Use of Intelligence

Going into more detail on each of them, we can find many analogues to existing AI practices:

Laying Plans/The Calculations explores the five fundamental factors (and seven elements) that define a successful outcome (the Way, seasons, terrain, leadership, and management). By thinking, assessing and comparing these points you can calculate a victory, deviation from them will ensure failure. Remember that war is a very grave matter of state.

It almost seems too easy to cite planning techniques here because “plans” is in the title. I’ll go a step further, then, and point out that by collecting information and assessing the relative merits of each option, you can determine potential outcomes and select correct paths of action. This is a common technique in AI decision-making calculations. Even the lowly min/max procedure is, in essence, simply comparing various potential paths through the state space.
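For reference, here is the textbook min/max recursion in its barest form — compare the value of every path through the state space and keep the best one. The `moves`, `apply_move`, and `evaluate` callbacks stand in for whatever the particular game supplies.

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Return the best score reachable from `state` by comparing the values
    of all paths through the state space down to `depth` plies."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)          # leaf: score the position itself

    child_scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                            moves, apply_move, evaluate) for m in options)
    return max(child_scores) if maximizing else min(child_scores)
```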

Waging War/The Challenge explains how to understand the economy of war and how success requires making the winning play, which in turn, requires limiting the cost of competition and conflict.

This one speaks even more to the min/max approach. The phrase “limiting the cost of competition and conflict” expresses the inherent economic calculations that min/max is based on. That is, I need to get the most bang for my buck.

Attack by Stratagem/The Plan of Attack defines the source of strength as unity, not size, and the five ingredients that you need to succeed in any war. In order of importance attack: Strategy, Alliances, Army, lastly Cities.

Any coordinating aspect of the AI forces falls under this category. For example, the hierarchical structure of units into squads and ultimately armies is part of that “unity” aspect. Very few RTS games send units into battle as soon as they are created. They also don’t go off and do their own thing. If you have 100 units going to 100 places, you aren’t going to have the strength of 100 units working as a collection. This has been a staple of RTS games since their inception.

Tactical Dispositions/Positioning explains the importance of defending existing positions until you can advance them and how you must recognize opportunities, not try to create them.

Even simply including cover points in a shooter game can be thought of as “defending existing positions”.

Even simply including cover points in a shooter game can be thought of as “defending existing positions”. More importantly, individual or squad tactics that do leapfrogging, cover-to-cover movement have been addressed in various ways for a number of years. We see this not only in FPS games (e.g. F.E.A.R.), but also in some of the work that Chris Jurney did originally in Company of Heroes. Simply telling a squad to advance to a point didn’t mean they would continue on heedless of their peril. Even while not under fire, they would do a general cover-to-cover movement. When engaged in combat, however, there was a very obvious and concerted effort to move up only when the opportunity presented itself.

This point can be worked in reverse as well. The enemies in Halo 3, as explained by Damián Isla in his various lectures on the subject, defend a point until they can no longer reasonably do so and then fall back to the next defensible point. This is a similar concept to the “advance” model above.

Suffice to say, whether it be advancing opportunistically or retreating prudently, this is something that game AI is already doing.

Energy/Directing explains the use of creativity and timing in building your momentum.

This one is a little more vague simply because of the brevity of the summary on Wikipedia. However, we are all well aware of how some games have diverged from the simple and stale “aggro” models that were the norm 10-15 years ago.

Weak Points & Strong/Illusion and Reality explains how your opportunities come from the openings in the environment caused by the relative weakness of your enemy in a given area.

Identifying the disposition of the enemy screams of influence mapping…

Identifying the disposition of the enemy screams of influence mapping—something that we have been using in RTS games for quite some time. Even some FPS and RPG titles have begun using it. Influence maps have been around for a long time and their construction and usage are well documented in books and papers. Not only do they use the disposition of forces as suggested above, but many of them have been constructed to incorporate environmental features as Mr. Tzu (Mr. Sun?) entreats us to do.
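For anyone who hasn’t run into them, a bare-bones influence map is only a few lines of code. Here is an illustrative sketch: each unit projects its strength onto nearby grid cells with a linear falloff, enemies count as negative, and the cells where the totals hover near zero mark the contested frontier — the “weak points” worth probing. The grid size, radius, and tuple format are my own assumptions.

```python
def build_influence_map(width, height, units, max_radius=5):
    """Sum each unit's influence over nearby cells with linear falloff.
    `units` is a list of (x, y, strength, is_friendly) tuples (assumed format);
    positive cells are friendly-controlled, negative cells enemy-controlled."""
    grid = [[0.0] * width for _ in range(height)]
    for ux, uy, strength, is_friendly in units:
        sign = 1.0 if is_friendly else -1.0
        for y in range(max(0, uy - max_radius), min(height, uy + max_radius + 1)):
            for x in range(max(0, ux - max_radius), min(width, ux + max_radius + 1)):
                dist = abs(x - ux) + abs(y - uy)          # Manhattan distance
                if dist <= max_radius:
                    grid[y][x] += sign * strength * (1.0 - dist / (max_radius + 1))
    return grid

# One friendly squad at (2, 2) facing a stronger enemy force at (7, 2):
# the sign flips partway across the map, which is where the frontier lies.
imap = build_influence_map(10, 5, [(2, 2, 10.0, True), (7, 2, 15.0, False)])
```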

Maneuvering/Engaging The Force explains the dangers of direct conflict and how to win those confrontations when they are forced upon you.

Again, this one is a bit vague. Not sure where to go there.

Variation in Tactics/The Nine Variations focuses on the need for flexibility in your responses. It explains how to respond to shifting circumstances successfully.

This is an issue that game AI has not dealt with well in the past. If you managed to disrupt a build order for an RTS opponent, for example, it might get confused. Also, the AI was not always terribly adaptive to changing circumstances. To put it in simple rock-paper-scissors terms, if you kept playing rock over and over, the AI wouldn’t catch on and play paper exclusively. In fact, it might still occasionally play scissors despite the guaranteed loss to your rock.

Lately, however, game AI has been far more adaptive to situations. The use of planners, behavior trees, and robust rule-based systems, for example, has allowed for far more flexibility than the more brittle FSMs of old. It is much harder to paint an AI into a corner from which it doesn’t know how to extricate itself. (Often, with the FSM architecture, the AI wouldn’t even realize it was painted into a corner at all and would continue on blissfully unaware.)
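Sticking with the rock-paper-scissors framing above, even a few lines of bookkeeping are enough to make an agent stop walking into the same counter forever. This is just an illustrative sketch of frequency-based adaptation, not anything from an actual planner or behavior-tree implementation.

```python
import random
from collections import Counter

COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class AdaptiveRPS:
    """Tracks the opponent's move frequencies and plays the counter to their
    most common move, with a little randomness so it isn't fully predictable."""
    def __init__(self, explore=0.2):
        self.history = Counter()
        self.explore = explore          # illustrative exploration rate

    def choose(self):
        if not self.history or random.random() < self.explore:
            return random.choice(list(COUNTER))
        most_common = self.history.most_common(1)[0][0]
        return COUNTER[most_common]     # punish the opponent's favorite move

    def observe(self, opponent_move):
        self.history[opponent_move] += 1

# An opponent who spams rock quickly starts seeing mostly paper in response.
ai = AdaptiveRPS()
for _ in range(10):
    ai.observe("rock")
print(ai.choose())
```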

The Army on the March/Moving The Force describes the different situations inf them.

[editorial comment on the above bullet point: WTF?]

I’m not sure to what the above refers, but there has been a long history of movement-based algorithms. Whether it be solo pathfinding, group movement, group formations, or local steering rules, this is an area that is constantly being polished.

The Attack by Fire/Fiery Attack explains the use of weapons generally and the use of the environment as a weapon specifically. It examines the five targets for attack, the five types of environmental attack, and the appropriate responses to such attack.

For all intents and purposes, fire was the only “special attack” that they had in 600 BC. It was their BFG, I suppose. Extrapolated out, this is merely a way of describing when and how to go beyond the typical melee and missile attacks. While not perfect, actions like spell-casting decisions in an RPG are not terribly complicated to make. Also, by tagging environmental objects, we can allow the AI to reason about their uses. One excellent example is how the agents in F.E.A.R. would toss over a couch to create a cover point. That’s using the environment to your advantage through a special (not typical) action.

The Use of Spies/The Use of Intelligence focuses on the importance of developing good information sources, specifically the five types of sources and how to manage them.

The interesting point here is that, given that our AI already has the game world at its e-fingertips, we haven’t had to accurately simulate the gathering of intelligence information. That has changed in recent years as the technology has allowed us to burn more resources on the problem. We now regularly simulate the AI piercing the Fog of War through scouts, etc. It is only a matter of time and tech before we get even more detailed in this area. Additionally, we will soon be able to model the AI’s belief of what we, the player, know of its disposition. This allows for intentional misdirection and subterfuge on the part of the AI. Now that will be fun!

Claiming to use Sun Tzu’s “Art of War” makes for good “back of the box” reading…

Anyway, the point of all of this is that, while claiming to use Sun Tzu’s “Art of War” makes for good “back of the box” reading, much of what he wrote about, we as game AI programmers do already. Is there merit in reading his work to garner a new appreciation of how to think? Sure. Is it the miraculous godsend that it seems to be? Not likely.

In the mean time, marketing fluff aside, I look forward to seeing how it all plays out (so to speak) in the latest Total War installment. (Looks like I might get a peek at E3 next week anyway.)


Rubber-banding as a Design Requirement

May 31st, 2010

I’ve written about rubber-banding before over on Post-Play’em where I talked about my observations of how it is used in Mario Kart: Double Dash. Rubber-banding is hardly new. It is often a subtle mechanism designed to keep games interesting and competitive. It is especially prevalent in racing games.

For those that aren’t familiar, the concept is simple. If you are doing well, the competition starts doing well. If you are sucking badly, so do they. That way, you always have a race on your hands—regardless of whether you’re in first, the middle of the pack, or in back.
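As a simple illustration of the basic mechanism, here is a sketch of the classic approach: nudge an AI racer’s target speed up when it trails the player and down when it leads. The gain and clamp values are invented tuning numbers, not from any particular game.

```python
def rubber_band_speed(base_speed, my_progress, player_progress,
                      gain=0.05, max_adjust=0.25):
    """Scale an AI racer's speed toward the player: faster when trailing,
    slower when leading. `gain` and `max_adjust` are illustrative values."""
    gap = player_progress - my_progress            # positive = player is ahead
    adjust = max(-max_adjust, min(max_adjust, gain * gap))
    return base_speed * (1.0 + adjust)

# Trailing the player by 4 track units: run 20% faster than the base speed.
print(rubber_band_speed(100.0, my_progress=10.0, player_progress=14.0))   # 120.0
```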

Sometimes, it can be more apparent than others. If competitors are teleporting to keep up, that’s a bit egregious. If you come to a dead stop in last place, and so do the other racers, that’s way too obvious. The interesting thing is that sometimes it can be about more than just fairness and running a good race. Sometimes it can be used because it is inherently tied to a design mechanic other than racing.

I saw this review on Thunderbolt of the new game Split/Second that explains this phenomenon. The game is, on the surface, a racing game. However, in a mechanism borrowed from the aforementioned Mario Kart, the gameplay heavily revolves around “power plays”. These involve triggering things like exploding cars alongside the road, crumbling buildings, and helicopters dispensing explosives. You can even trigger massive changes for your foes like changing the route entirely. Needless to say, that has the effect of annoying the piss outta your opponents. The problem comes when those opponents are not human ones, but AI.

As the Thunderbolt reviewer puts it,

Split/Second isn’t too difficult until some of the latter stages of the career, but unfair AI is a common problem throughout. It’s testament to the game’s focus on power plays that this unfair AI often occurs, since being in the lead isn’t a particularly fun experience when you can’t trigger the game’s main selling point. As a result, you’ll often find the following pack extremely close behind, often catching up six second gaps within two. Even when you know your car is much faster and you’re driving the race of your life, the AI finds a way to pass you with relative ease, performing impossibly good drifts and respawning from wrecks in the blink of an eye. Dropping from first place to fifth is such a common occurrence it would actually be quite comical if it weren’t for the frustration involved. That’s not to say Split/Second is a hard game – it’s usually pretty easy to wreck opponents with a decent power play, and you’ll normally be given ample opportunities to pass them – but the rubber band AI does cause some unwieldy races where the AI will pull ahead rather than keeping at a more realistic, surmountable distance.

As you can see above (emphasis mine), the rubber-banding is about more than keeping the pack close behind you if you are doing well. In order for the power plays to be relevant, you can’t be in the lead. You need to be behind them to use them. This is analogous to the “blue shell” in Mario Kart that would streak from wherever you were all the way up to the front of the pack and tumble the first-place kart. You simply can’t use the blue shell if you are in first place. In fact, the game won’t give you one if you are in first.

In Split/Second, the whole point of the game is blowing crap up and screwing with the other drivers. In fact, most of the fun of that is actually seeing it happen. While, in Mario Kart, you can use red or green shells, bananas, and fake blocks to mess with people behind you (and this is a perfectly normal tactic), the result of that isn’t the visually stunning and exciting experience that the power play in Split/Second is. Therefore, the designers of Split/Second had to make sure that you were able to use the power plays and see them in action.

In Split/Second the entire point of the rubber-banding is to make sure you aren’t in first—at least not for very long.

The difference between these two approaches is subtle. The rubber-banding in Mario Kart makes sure that, if you are in first, you can’t make a mistake without having people pass you. In Split/Second the entire point of the rubber-banding is to make sure you aren’t in first—at least not for very long.

You have to wonder how this mechanism would translate over to a shooter game, however. If rubber-banding in a shooter is meant to make sure you are challenged to a good fight… but one in which you eventually triumph, a change to one where the AI is supposed to ensure that you don’t triumph would be a bit awkward. In fact, that would be negating the game’s purpose of having you see the rest of the content down the road (so to speak). AI in shooters is really cut from the same template as the rubber-banding in Mario Kart, then: “Do well, and then lose.” It was just interesting to see a different take on this mechanism that generates a different outcome for the perfectly viable reason of making the game better.


Random Shooting Galleries are No Longer Fun

May 29th, 2010

Just saw another observation in a game review that really isn’t all that specific to the one game they were reviewing. In this case, it was a CNet review of the game, Alpha Protocol, by Obsidian Entertainment. A few of the observations about the AI were corroborated by other reviews elsewhere, but this one is the most detailed. (Most of the meat about the AI is on page 2 of the review.) The author doesn’t dance around the issue, either:

The AI is pretty dreadful. Security agents and mercenaries run about the levels in haphazard ways, may start climbing ladders as you fill them with lead, will kneel on top of exploding barrels, or might stare directly at you but fail to react unless you take a shot or give them a good punch. There’s a weird sense of randomness to your enemies’ behavior that diminishes the impact firefights may have had.

This is a common series of complaints about modern game AI. 10-20 years ago, this was mostly what we expected. Actually, to correct myself, what we usually got was enemies that turned to face us and then moved toward us in a line until we mowed them down. In the days of Doom and Quake, that was OK. Today? Not so much. Running around randomly provides a sense of activity and motion but immediately begins to trigger our sense of wrongness about the situation. This is especially noticeable as the dichotomy between excellent graphics and poor AI widens. In any enemy that we can remotely anthropomorphize, this effect is even worse because we have an image in our head of what a human-like character should be doing in any given situation.

Where I would like to diverge from the author of the review is in the word “random”. We need to be careful with that word. Both in my book, Behavioral Mathematics for Game AI and in a number of the conference lectures I have given on the subject, I speak about the benefits of using randomness to provide variation in behavior selection. Games of all types are beginning to use these techniques as well.

For example, Richard Evans (Maxis) has spoken numerous times on the decision model for the Sims 3. The Sims go through a process of rating all the available behaviors and scoring them as to how well they match what the Sim wants or needs at the time. Then, they select from the top choices using what we call a “weighted random”. All of the actions are “in play” proportionate to their score, but the better the selection, the more likely it will be chosen. Is there randomness there? Yes, to provide variation. Does it look random? Not really. The reason is that each of the potential selections is actually fairly reasonable at the time — the result of the scoring system. To us as observers, we don’t view this as “random” — just “interesting”.
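For anyone who hasn’t implemented one, a “weighted random” selection is just a roulette wheel over the utility scores. A minimal sketch, with made-up behaviors and scores for illustration:

```python
import random

def weighted_random_choice(scored_behaviors):
    """Pick a behavior with probability proportional to its utility score.
    `scored_behaviors` is a list of (behavior, score) pairs with scores >= 0."""
    total = sum(score for _, score in scored_behaviors)
    pick = random.uniform(0, total)
    running = 0.0
    for behavior, score in scored_behaviors:
        running += score
        if pick <= running:
            return behavior
    return scored_behaviors[-1][0]      # guard against floating-point rounding

# The best-scoring option wins most often, but never exclusively — variation
# without ever choosing something unreasonable (assuming sensible scores).
print(weighted_random_choice([("eat", 0.7), ("nap", 0.2), ("dance", 0.1)]))
```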

On the other hand, it seems that the behavior selection in Alpha Protocol looks random to the observer because either:

  1. The behavior selection is, indeed, random, or
  2. The behavior scoring algorithm is so poor that it doesn’t properly give the advantage to reasonable-looking actions.
Often times, behaviors will be chosen without consulting the current world state.

Either way, something is amiss. There is a 3rd option here as well. Often times, behaviors will be chosen without consulting the current world state. The “climbing ladders while you’re filling them with lead” bit might be an example of that but the observation where the enemy is seemingly unaware of you is a much better example. The bottom line is that the hardest part of designing AI is providing the adequate knowledge representation of the world state so that we can reason on it and make contextually appropriate decisions.
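To make that third option concrete, here is a toy contrast with the complaints in the review (ladder climbing under fire, kneeling on barrels, ignoring a visible player): score each candidate behavior against a handful of world-state facts before choosing. Every field and number here is invented purely for illustration.

```python
def pick_behavior(world):
    """Choose the best-scoring behavior given the current world state.
    The behaviors, state fields, and weights are illustrative assumptions."""
    scores = {
        # Climbing a ladder is only sensible when nobody is shooting at us.
        "climb_ladder":  0.4 if not world["under_fire"] else 0.0,
        # Cover matters more the moment we start taking fire.
        "take_cover":    0.9 if world["under_fire"] else 0.1,
        # Never idle next to an explosive barrel.
        "idle":          0.0 if world["near_explosive"] else 0.3,
        # React to a visible player instead of staring blankly.
        "engage_player": 0.8 if world["player_visible"] else 0.0,
    }
    return max(scores, key=scores.get)

world = {"under_fire": True, "player_visible": True, "near_explosive": False}
print(pick_behavior(world))   # -> "take_cover"
```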

From a gameplay standpoint, the behaviors above can lead to some serious disappointment. To go back to the article:

Yet Alpha Protocol is no more a proper stealth game than it is a shooter. As with the shooting, the inconsistent AI provides a major hindrance… [snip]… Sneaking up on an enemy and taking him down with a minimum of fuss is mildly rewarding, as it tends to be in most games. But the actions you take leading up to that point involve activating certain skills and scurrying around in your silly crouched position–not outsmarting sharp AI or using the environment in clever ways.

With a generally random-acting AI, we aren’t outwitting anything.

We have had enough stealth games under our belt as an industry that we have primed the consumer with certain expectations. Games ranging from Thief to Splinter Cell have shown us that the best part of stealth games is not just surprising an enemy… but outwitting him when he is actively trying to prevent you from surprising him. With a generally random-acting AI, we aren’t outwitting anything. The “surprise” aspect of it comes from merely staying out of his view. For all intents and purposes, that gameplay mechanic goes back to arcade shooters from the early 80’s. Haven’t we grown out of that yet?

Unfortunately, in a throw-back to my column earlier this week, the review goes on to compliment some of the other production values (though not nearly in as glowing terms as the review of Lost Planet 2):

Alpha Protocol is not ugly, however; it’s just behind the times and artistically uninspired. Nevertheless, the safe houses Mike operates from between missions have some nice views, and some of the outdoor missions throw in some welcome flashes of color. Similarly, the sound design gets the job done, though without much style. The voice acting is at least solid, and the generic action-movie soundtrack ramps up at the right moments but otherwise stays out of the way.

Congratulations, I suppose. The problem is, our consumer public wants more than pretty pictures and nice sound. They are getting used to all of that and are now complaining about things being dumb. Again, this goes well beyond Alpha Protocol. This review can be copy-and-pasted to other games in as much of a templated fashion as trashy romance novels. There are way too many games that are missing the boat on this. Granted, as I mentioned in the Lost Planet 2 column, good AI developers are increasingly hard to find. That may be the case with Obsidian. I don’t know. (What I do know is that it’s time for me to give a ring to their HR guy.)


Everything is Excellent… except the AI

May 28th, 2010

This is something that I have been seeing for a long time now. I’m sure we all have… and for good reason: A lot of times it is true. Despite what I said in my 7 minutes at the AI Devs Rant at the 2010 GDC about how reviewers like to bitch about bad AI, unfortunately too often it is justified. The juxtaposition of an otherwise good game in other areas with poorly executed AI development is a bit more tragic, however. That doesn’t point to a case of a smaller budget game. It’s an example of a well-funded project with either a priority or a talent problem.

Take this review of “Lost Planet 2” in the Detroit News by Mike Neimoyer. Let me first give you the commentary about the rest of the aspects of the game:

The storyline is thinly tied together and barely cohesive.

OK… admittedly he gripes about the storyline. A lot of times that’s simply because you are doing a sequel of an established IP. Moving on… (emphasis mine):

The graphics are beautiful, and the environments are varied. Lush jungle or swamp areas appear suddenly in the midst of the glacial ice fields, with waterfalls and towering trees, and then there’s parched desertscapes or weather-battered coastal regions.

Not only is the landscape new and varied, but the Akrid, the natural inhabitants of the planet E.D.N. III, are back with new shapes, new designs and a whole new set of (usually grumpy) attitudes. The scale of the largest of the creatures, the “Cat-G type” is impressive to say the least. Like, God of War III-scale impressive.

The voice acting, for the most part, is well-done and up to the task. The music itself, however, is excellent. Swelling orchestral pieces accentuate the action sequences, and give the game an epic feel that would have been missing if they had used some “generic rock-style track #2” soundtrack. Well done, Capcom.

Ok… so we have this love fest on the environment, the character modeling, the voice acting, and the soundtrack (he even mentions later that he would love to have a CD of the soundtrack). So what can possibly be wrong? Here’s a montage (again, emphasis mine):

The game is designed from the ground up to take advantage of four-player cooperative play. And heaven help you if you don’t have friends to play the game with. As 1UP.com states, “Brain-dead, unhelpful, and unresponsive, the computer-controlled team members are a liability rather than a resource.”

You truly, truly need a human companion or three to completely appreciate what Lost Planet 2 has to offer. For example, during one big boss battle, there are four separate tasks that need to be completed simultaneously. With four humans working together, this bit of teamwork wouldn’t be too difficult. Unfortunately, if you’re playing solo with AI teammates, you’re pretty much left to a snarled tangle of frustration and trial-and-error.

The level design is adequate, but I think that too much emphasis was put on the multiplayer portion, and not enough consideration for the solo player who will be reduced to using the criminally stupid AI companions.

Damn… so we have a Left 4 Dead-style game that is based around the idea that you have to cooperate with your teammates in order to not only survive but to actually complete mandatory parts of the campaign… and yet they don’t provide you with the companions that can do so.

Early on we were all letting our enemies die quickly because we lacked the capability to make them smarter.

In the era of single-player, shooting gallery-style games, having sub-par AI wasn’t too bad. After all, our fallback mantra was “the enemy won’t live long enough to show off his AI anyway.” I knew that was just a bad crutch when we were all saying it. The truth is, early on we were all letting our enemies die quickly because we lacked the capability to make them smarter! We were actually relieved that our characters were dying quickly. That managed to fit well with our other AI mantra: “Don’t let the AI do anything stupid.” Unfortunately, the chance of the AI doing something stupid rose exponentially with the amount of time that it was visible (and alive).

We are spending all this money on graphics, animation, voice actors, musicians… and leaving our AI to fester like an open sore.

Now that we are expecting AI teammates or squadmates or companions to come along for the ride, we have a much harder challenge. (Back in 2008, I wrote about this in my (at the time) regular column on AIGameDev in an article titled “The Art of AI Sidekicks: Making Sure Robin Doesn’t Suck“.) The problem is, we are spending all this money on graphics, animation, voice actors, musicians… and leaving our AI to fester like an open sore. Certainly, it takes more money and time to develop really good AI than it does to do a soundtrack, (I can speak to this, by the way… I was a professional musician a long time ago and am perfectly comfortable with anything from writing, arranging, and recording multi-track electronic grooves to penning entire sweeping orchestral scores. But that was in a previous life.) but it seems like a little effort might be called for. After all, the necessity of multi-player was built into the game design from the start… the necessity of a lush soundtrack was not.

To the defense of game companies, however, I’m very aware that good AI people are exceedingly and increasingly hard to find. The focus of the industry has changed in the past few years so that companies are trying to do better. However, that often calls for a lot more AI-dedicated manpower than they have. With many companies trying to find AI people all over the place, the demand has really outstripped the supply. Some companies have had ads up for AI programmers for 6 to 9 months! They just aren’t out there.

Perhaps this is a time to pitch Intrinsic Algorithm’s AI consulting services?

And now back to our program…

So it isn’t always that the company doesn’t care or won’t spend the money on it. It’s often just the fact that AI is a very difficult problem that calls for a very deep skill set. Unfortunately, most of the game development programs that exist really don’t even address game AI beyond “this is a state machine”. Academic AI programs are good for “real world AI” but don’t address the challenges that the game industry faces. Unfortunately, many academic AI institutions and their students don’t know this until they are rebuffed for suggesting very academia-steeped techniques that will fall flat in practice. (And no, a neural network would not have saved the AI in Lost Planet 2.)

So… in the mean time, here’s the suggestion: If your people don’t have the chops to make the required minimum AI, don’t design a game mechanic that needs that AI.


Dating Sims: A New Frontier for RPG AI?

May 27th, 2010

It’s amazing what pops up in my daily Google Alerts for “game AI” (although I’m getting tired of Allen Iverson news). This one caught my eye, though. On a blog called “Win My Ex Back” (no thank you, by the way), there was a post titled Online Dating Sim Game. There wasn’t a lot of detail about a specific game other than to mention that “dating simulation games are among the new genres of online gaming that depicts romance.” To give an idea of what the author (Andy Jill) has in mind, I quote his summary:

It’s a simulation game where the main character that you’ll play (commonly fictional characters) has to achieve specific goals. The most typical one is to date numerous and different women and to have high level of relationship and among them within specified time limit. Generally, the game character must have enough funds by either securing jobs or other income-generating activities such as business.

In the same manner, attributes of the character is important in the game. Such attributes can be improved by doing different task and accomplishing it within the time limit. Most of these tasks take time to be accomplished and games of this type have real-time to them.

The author goes on to describe what apparently was the first online dating Sim, “the Dokyusei or Classmates” from 1992. Again to quote:

In this classic dating sim game, you will be controlling a male avatar that is surrounded by various female game characters. The game play will involve conversations with a variety of artificial intelligence (AI)-controlled girls, in which you will attempt to increase your “internal love meter” by means of right choices of dialogues. The game usually last for a limited game time like a month or a year.

Once the game is finished, your character may lose the game if it failed to win the hearts of any girl. However, you may “finish” one or more girls, usually by having sex with them or by attaining eternal love.

18 years later and we are still relegated to dialog trees.

To me, this sounds like the Sims titles… or for that matter, how some people try to play Mass Effect 2. Anyway, the point is not the gameplay mechanism (which is somewhat standard). Even the personality and mood meter facets are not all that uncommon. Again, think Sims 3. What I find almost joltingly alarming is that this game from 1992 was based on a dialog tree. Sure, that’s fine. We’ve had dialog trees for a while. The problem is… that’s how we would still have to do things! 18 years later and we are still relegated to dialog trees.

The only reason that the Sims doesn’t have dialog trees is that there isn’t real “dialog”. At least not the intelligible sort. Sure, there are behavior selection trees for choosing what to do next, but there is a subtle difference. When you select an action in the Sims, the game designer hasn’t necessarily hand-crafted what the response (or potential responses) will be. In a dialog tree, you are always in a specific place in that tree until you exit it. In the Sims, all that happens when you select an action is that you vaguely change the internal state of the character you are interacting with. That character’s actual response is calculated from a ridiculously expansive set of state values and formulas, with a stochastic factor tossed in for good measure. You aren’t really ever sure exactly what you are going to get… although you may have a good idea what it might be.

On the user input level, it is still reminiscent of Zork or the early Ultima games.

I have to assume that this dating sim — and all others like it — would rely on a representation of actual dialog, however. And that brings us back to the dialog tree. Natural language processing isn’t going to cut it. Emily Short does a good job of it in her interactive fiction but the root of it all is still keyword-based input. As amazing as her work is, on the user input level, it is still reminiscent of Zork or the early Ultima games. Translated into a dating sim, the user’s free-form input could very well be reduced to “ask job”, “tell age”, and “compliment boobs”. In effect, it wouldn’t be all that different than the chat room shorthand of “a/s/l” for “age/sex/location”. Even if the game then gave elaborate (yet pre-scripted) answers to your questions, it will still be annoying to have it reply “I don’t understand what you mean” when you don’t guess the right keywords to use. Additionally, using that sort of shorthand isn’t going to ever feel really “romantic” is it?

I’m halfway through Noah Wardrip-Fruin’s book, Expressive Processing, where he talks extensively about the history and state of interactive dialog and drama. Even with all the work that has gone into this field for over 40 years, I’m sorry to say that we are nowhere near being able to replicate something other than a stilted parody of a romantic courtship.

That being said, it doesn’t matter how deep the AI is behind the scenes. Until we can solve the input/output problem our AI is trapped inside itself. Animation has gotten a lot better of late—especially facial and speech animation. I know there are adult-themed games out there and I assume that they are taking full advantage of photo-realism (not to mention realistic physics). However, all that nifty facial animation and subtle gesturing would still have to be tied to canned, pre-written dialog. And that is our bottleneck, isn’t it?

I don’t know how to solve it, really. Noah’s book is my first foray into even thinking about this interactive dialog and fiction stuff. (On the other hand, I would be tickled to do the personality, mood, and emotion modeling. That is in my wheelhouse!) I just find it sad that we are still stuck in this world where we can’t really interact with our game characters on a meaningful, natural-feeling level. I do know, however, that when we find the answer, it will be one of the final cusps we need to cross in games. At that point, there’s not a lot stopping us.






Content 2002-2015 by Intrinsic Algorithm L.L.C.
