Posts Tagged ‘F.E.A.R.’

Sun Tzu as a Design Feature?

Saturday, June 5th, 2010

The Creative Assembly, creator of the Total War series, has announced the development of Shogun 2, the latest in its line of acclaimed RTS games. While the Total War franchise has a 10-year history and is fairly well known for its AI, this blurb from their web site has spread through the web like an overturned inkwell:

Featuring a brand new AI system inspired by the scriptures that influenced Japanese warfare, the millennia old Chinese “Art of War”, the Creative Assembly brings the wisdom of Master Sun Tsu to Shogun 2: Total War. Analysing this ancient text enabled the Creative Assembly to implement easy to understand yet deep strategical gameplay.

Sun Tzu‘s “The Art of War” has been a staple reference tome since he penned it (or brushed it… or whatever) in the 6th century B.C. It’s hard to find many legends that have made it for over 20 centuries. Its applications have been adapted in various ways to go beyond war to arenas such as business and politics. Suffice to say that “The Art of War” lives on as “things that just make sense”.

The problem I have here is that this seems to be more of a marketing gimmick than anything. After all, most of what Sun Tzu wrote should, in various forms, already be in game AI anyway.  To say Sun Tzu’s ideas are unique to him and would never have been considered without his wisdom is similar to saying that no one thought that killing was a bad idea until Moses wandered down the hill with “Thou Shalt Not Kill” on a big ol’ rock. No one stood around saying, “Gee… ya think?” Likewise, Sun Tzu’s advice about “knowing your enemy” is hardly an earth-shattering revelation.

Certainly, there is plenty of game AI out there that could have benefited from a quick read of a summary of Art of War. Things like “staying in cover and waiting for the enemy to attack you” come to mind. Of course, in the game world, we call that “camping” (as an individual) or “turtling” (as a group). I can imagine a spirited argument as to whether a camping/turtling AI is necessarily What Our Players Want™, however. It certainly beats the old “Doom model” of “walk straight towards the enemy”.

And what about the Sun Tzu concept of letting your two enemies beat the snot out of each other before you jump in? (I believe there are translations that yielded “dog shit” rather than “snot” but the meaning is still clear.) If you are in an RTS and one enemy just sits and waits for the other one to whack you around a little bit, it’s going to look broken. On the other hand, I admit to doing that in free-for-all Starcraft matches… because it is a brutal tactic!

The problem I have with their claim, however, is that there are many concepts in the Art of War that we already do use in game AI. By looking at Sun Tzu’s chapter headings (or whatever he called them) we can see some of his general ideas:

For ease of reference, I pillage the following list from Wikipedia:

  1. Laying Plans/The Calculations
  2. Waging War/The Challenge
  3. Attack by Stratagem/The Plan of Attack
  4. Tactical Dispositions/Positioning
  5. Energy/Directing
  6. Weak Points & Strong/Illusion and Reality
  7. Maneuvering/Engaging The Force
  8. Variation in Tactics/The Nine Variations
  9. The Army on the March/Moving The Force
  10. The Attack by Fire/Fiery Attack
  11. The Use of Spies/The Use of Intelligence

Going into more detail on each of them, we can find many analogues to existing AI practices:

Laying Plans/The Calculations explores the five fundamental factors (and seven elements) that define a successful outcome (the Way, seasons, terrain, leadership, and management). By thinking, assessing and comparing these points you can calculate a victory, deviation from them will ensure failure. Remember that war is a very grave matter of state.

It almost seems too easy to cite planning techniques here because “plans” is in the title. I’ll go a step further, then, and point out that by collecting information and assessing the relative merits of each option, you can determine potential outcomes and select the correct course of action. This is a common technique in AI decision-making calculations. Even the lowly min/max procedure is, in essence, simply comparing various potential paths through the state space.
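
To make that concrete, here is a minimal sketch in Python (purely illustrative, and certainly not The Creative Assembly’s system) of “calculating a victory” by weighing a handful of factors for each candidate plan and picking the best one. The factor names and weights are invented for the example.

    # Illustrative only: score each candidate plan by weighing a handful of
    # hand-authored factors, then pick the best.  Factor names and weights
    # are invented for the example.

    def score_plan(plan, weights):
        """Weighted sum of the plan's factor values."""
        return sum(weights[factor] * value
                   for factor, value in plan["factors"].items())

    def choose_plan(plans, weights):
        """Compare every candidate course of action and return the best one."""
        return max(plans, key=lambda plan: score_plan(plan, weights))

    weights = {"terrain": 0.3, "leadership": 0.2, "supply": 0.25, "morale": 0.25}

    plans = [
        {"name": "frontal assault",
         "factors": {"terrain": 0.2, "leadership": 0.9, "supply": 0.4, "morale": 0.7}},
        {"name": "flank and hold",
         "factors": {"terrain": 0.8, "leadership": 0.6, "supply": 0.7, "morale": 0.6}},
    ]

    print(choose_plan(plans, weights)["name"])   # -> flank and hold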

Waging War/The Challenge explains how to understand the economy of war and how success requires making the winning play, which in turn, requires limiting the cost of competition and conflict.

This one speaks even more to the min/max approach. The phrase “limiting the cost of competition and conflict” expresses the inherent economic calculations that min/max is based on. That is, I need to get the most bang for my buck.

Attack by Stratagem/The Plan of Attack defines the source of strength as unity, not size, and the five ingredients that you need to succeed in any war. In order of importance attack: Strategy, Alliances, Army, lastly Cities.

Any coordination among the AI forces falls under this category. For example, the hierarchical structure of units into squads and ultimately armies is part of that “unity” aspect. Very few RTS games send units into battle as soon as they are created. They also don’t go off and do their own thing. If you have 100 units going to 100 places, you aren’t going to have the strength of 100 units working as a collection. This has been a staple of RTS games since their inception.
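
As a trivial sketch of that “unity” idea (the class names are just for illustration), here is a bare-bones unit/squad/army hierarchy: an order is issued once at the top and propagates down, so the whole force acts as a collection.

    # Illustrative only: a bare-bones unit/squad/army hierarchy.  An order is
    # issued once at the top and propagates down, so 100 units act as one
    # force instead of 100 loners.

    class Unit:
        def __init__(self, name):
            self.name = name

        def receive_order(self, order):
            print(f"{self.name}: executing '{order}'")

    class Squad:
        def __init__(self, units):
            self.units = units

        def receive_order(self, order):
            for unit in self.units:
                unit.receive_order(order)

    class Army:
        def __init__(self, squads):
            self.squads = squads

        def issue_order(self, order):
            for squad in self.squads:
                squad.receive_order(order)

    army = Army([Squad([Unit("spearman 1"), Unit("spearman 2")]),
                 Squad([Unit("archer 1"), Unit("archer 2")])])
    army.issue_order("advance to the river crossing")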

Tactical Dispositions/Positioning explains the importance of defending existing positions until you can advance them and how you must recognize opportunities, not try to create them.

Even simply including cover points in a shooter game can be thought of as “defending existing positions”. More importantly, individual and squad tactics that do leapfrogging, cover-to-cover movement have been addressed in various ways for a number of years. We see this not only in FPS games (e.g. F.E.A.R.) but also in some of the work that Chris Jurney did originally in Company of Heroes. Simply telling a squad to advance to a point didn’t mean they would continue on heedless of their peril. Even while not under fire, they would do a general cover-to-cover movement. When engaged in combat, however, there was a very obvious and concerted effort to move up only when the opportunity presented itself.
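
For illustration only, here is a hedged sketch of that bounding, cover-to-cover idea, with no claim that this is how Company of Heroes or F.E.A.R. actually implemented it: one fire team moves to the next cover point only while the other team is set and providing overwatch.

    # Hedged sketch of bounding, cover-to-cover movement; not the actual
    # Company of Heroes or F.E.A.R. code.

    class FireTeam:
        def __init__(self, name):
            self.name = name
            self.position = "start"
            self.is_set = True   # in cover and able to provide overwatch

        def bound_to(self, cover):
            print(f"{self.name} bounds to {cover}")
            self.position = cover
            self.is_set = True

    def advance_by_bounds(team_a, team_b, cover_points):
        """Alternate which team moves; the stationary team always covers."""
        mover, watcher = team_a, team_b
        for cover in cover_points:
            if watcher.is_set:
                mover.bound_to(cover)
                mover, watcher = watcher, mover   # swap roles for the next bound

    advance_by_bounds(FireTeam("Team A"), FireTeam("Team B"),
                      ["low wall", "burnt-out car", "doorway"])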

This point can be worked in reverse as well. The enemies in Halo 3, as explained by Damián Isla in his various lectures on the subject, defend a point until they can no longer reasonably do so and then fall back to the next defensible point. This is a similar concept to the “advance” model above.

Suffice to say, whether it be advancing opportunistically or retreating prudently, this is something that game AI is already doing.

Energy/Directing explains the use of creativity and timing in building your momentum.

This one is a little more vague simply because of the brevity of the summary on Wikipedia. However, we are all well aware of how some games have diverged from the simple and stale “aggro” models that were the norm 10-15 years ago.

Weak Points & Strong/Illusion and Reality explains how your opportunities come from the openings in the environment caused by the relative weakness of your enemy in a given area.

Identifying the disposition of the enemy screams of influence mapping—something that we have been using in RTS games for quite some time. Even some FPS and RPG titles have begun using it. Influence maps have been around for a long time and their construction and usage are well documented in books and papers. Not only do they use the disposition of forces as suggested above, but many of them have been constructed to incorporate environmental features as Mr. Tzu (Mr. Sun?) entreats us to do.
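
As a reminder of how simple the core of the technique is, here is a minimal influence-map sketch on a small grid (illustrative only, not drawn from any particular title): each enemy unit projects influence that falls off with distance, and the AI attacks where that influence is weakest.

    # Minimal influence-map sketch (illustrative only): each enemy unit
    # projects influence that falls off with distance, and the AI attacks
    # where that influence is weakest.

    def build_influence(width, height, units, falloff=1.0):
        """Sum each unit's strength, reduced by Manhattan distance, into a grid."""
        grid = [[0.0] * width for _ in range(height)]
        for (ux, uy, strength) in units:
            for y in range(height):
                for x in range(width):
                    distance = abs(x - ux) + abs(y - uy)
                    grid[y][x] += max(0.0, strength - falloff * distance)
        return grid

    enemy_units = [(1, 1, 5.0), (2, 1, 4.0)]      # (x, y, strength)
    enemy_map = build_influence(6, 4, enemy_units)

    # The weakest enemy cell is the opening Sun Tzu tells us to exploit.
    weakest = min(((x, y) for y in range(4) for x in range(6)),
                  key=lambda cell: enemy_map[cell[1]][cell[0]])
    print("Attack toward", weakest)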

Maneuvering/Engaging The Force explains the dangers of direct conflict and how to win those confrontations when they are forced upon you.

Again, this one is a bit vague; I’m not sure where to go with it.

Variation in Tactics/The Nine Variations focuses on the need for flexibility in your responses. It explains how to respond to shifting circumstances successfully.

This is an issue that game AI has not dealt with well in the past. If you managed to disrupt an RTS opponent’s build order, for example, it might get confused. Also, AI was not always terribly adaptive to changing circumstances. To put it in simple rock-paper-scissors terms, if you kept playing rock over and over, the AI wouldn’t catch on and play paper exclusively. In fact, it might still occasionally play scissors despite the guaranteed loss to your rock.
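
To make the rock-paper-scissors point concrete, here is a toy sketch of what “catching on” would look like: track the opponent’s move frequencies and play the counter to their most common choice. The point is only that older game AI often didn’t do even this much.

    # Toy illustration of "catching on": track the opponent's move frequencies
    # and play the counter to their most common choice.

    from collections import Counter

    COUNTER_MOVE = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

    class AdaptiveRPS:
        def __init__(self):
            self.history = Counter()

        def observe(self, opponent_move):
            self.history[opponent_move] += 1

        def choose(self):
            if not self.history:
                return "rock"                        # arbitrary opening move
            favorite, _count = self.history.most_common(1)[0]
            return COUNTER_MOVE[favorite]            # counter their favorite

    ai = AdaptiveRPS()
    for _ in range(5):
        ai.observe("rock")
    print(ai.choose())   # -> paper: the AI has caught on to the all-rock player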

Lately, however, game AI has been far more adaptive to situations. The use of planners, behavior trees, and robust rule-based systems, for example, allows for far more flexibility than the more brittle FSMs ever did. It is much harder to paint an AI into a corner from which it doesn’t know how to extricate itself. (Often, with the FSM architecture, the AI wouldn’t even realize it was painted into a corner and would continue on blissfully unaware.)
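
As one example of why those architectures are harder to paint into a corner, here is a stripped-down behavior-tree selector (a sketch, not any particular engine’s API): children are tried in priority order, so when one option is blocked the agent falls through to the next instead of getting stuck.

    # Stripped-down behavior-tree selector (a sketch, not any engine's API):
    # children are tried in priority order, so a blocked option simply falls
    # through to the next one instead of leaving the agent stuck.

    def selector(children):
        """Succeed on the first child that succeeds."""
        def run(agent):
            return any(child(agent) for child in children)
        return run

    def attack_from_cover(agent):
        return agent.get("has_cover", False) and agent.get("has_ammo", False)

    def fall_back(agent):
        print("falling back to the next defensible point")
        return True

    combat = selector([attack_from_cover, fall_back])
    combat({"has_cover": False, "has_ammo": True})   # cover gone -> falls back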

The Army on the March/Moving The Force describes the different situations inf them.

[editorial comment on the above bullet point: WTF?]

I’m not sure to what the above refers, but there has been a long history of movement-based algorithms. Whether it be solo pathfinding, group movement, group formations, or local steering rules, this is an area that is constantly being polished.
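
By way of example, here is a tiny local-steering sketch (illustrative only, with no real pathfinder attached): each agent seeks its goal while being pushed away from neighbors that crowd too close.

    # Tiny local-steering sketch: seek the goal while being pushed away from
    # neighbors that crowd too close.

    import math

    def normalize(v):
        length = math.hypot(v[0], v[1])
        return (v[0] / length, v[1] / length) if length > 0 else (0.0, 0.0)

    def steer(position, goal, neighbors, separation_radius=2.0):
        # Seek: unit vector toward the goal.
        seek = normalize((goal[0] - position[0], goal[1] - position[1]))
        # Separation: push away from any neighbor inside the radius.
        push = [0.0, 0.0]
        for n in neighbors:
            offset = (position[0] - n[0], position[1] - n[1])
            if 0 < math.hypot(*offset) < separation_radius:
                away = normalize(offset)
                push[0] += away[0]
                push[1] += away[1]
        return normalize((seek[0] + push[0], seek[1] + push[1]))

    print(steer(position=(0, 0), goal=(10, 0), neighbors=[(1, 1)]))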

The Attack by Fire/Fiery Attack explains the use of weapons generally and the use of the environment as a weapon specifically. It examines the five targets for attack, the five types of environmental attack, and the appropriate responses to such attack.

For all intents and purposes, fire was the only “special attack” that they had in 600 BC. It was their BFG, I suppose. Extrapolated out, this is merely a way of describing when and how to go beyond the typical melee and missile attacks. While not perfect, actions like spell-casting decisions in an RPG are not terribly complicated to make. Also, by tagging environmental objects, we can allow the AI to reason about their uses. One excellent example is how the agents in F.E.A.R. would toss over a couch to create a cover point. That’s using the environment to your advantage through a special (not typical) action.
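
A hedged sketch of that tagging idea follows; it is not Monolith’s actual data format, just an illustration of how affordance tags let an agent query the environment for objects it can reason about.

    # Sketch of environment tagging (not Monolith's actual data format): level
    # objects carry affordance tags, and an agent under fire simply queries
    # for anything it can use as cover.

    LEVEL_OBJECTS = [
        {"name": "couch",         "tags": {"flippable", "cover"}},
        {"name": "potted cactus", "tags": {"throwable"}},
        {"name": "oil drum",      "tags": {"cover", "explosive"}},
    ]

    def objects_with_tag(objects, tag):
        """Return every object the agent can reason about for the given use."""
        return [obj for obj in objects if tag in obj["tags"]]

    for obj in objects_with_tag(LEVEL_OBJECTS, "cover"):
        print("could use as cover:", obj["name"])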

The Use of Spies/The Use of Intelligence focuses on the importance of developing good information sources, specifically the five types of sources and how to manage them.

The interesting point here is that, given that our AI already has the game world at its e-fingertips, we haven’t had to accurately simulate the gathering of intelligence information. That has changed in recent years as the technology has allowed us to burn more resources on the problem. We now regularly simulate the AI piercing the Fog of War through scouts, etc. It is only a matter of time and tech before we get even more detailed in this area. Additionally, we will soon be able to model the AI’s belief about what we, the players, know of its disposition. That will allow for intentional misdirection and subterfuge on the part of the AI. Now that will be fun!
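
Here is one hedged sketch of what simulating that intelligence-gathering can look like: rather than reading the player’s true position straight from the game state, the AI only refreshes its belief when a scout actually gets a sighting, and that belief decays as the report grows stale. The class name and numbers are invented for the example.

    # Hedged sketch of simulated intelligence-gathering: the AI only refreshes
    # its belief when a scout actually gets a sighting, and the belief decays
    # as the report grows stale.

    class EnemyBelief:
        def __init__(self, decay=0.1):
            self.last_known_position = None
            self.confidence = 0.0
            self.decay = decay

        def on_scout_report(self, position):
            """A scout with line of sight refreshes the belief."""
            self.last_known_position = position
            self.confidence = 1.0

        def tick(self):
            """No new sighting this update, so the old report grows stale."""
            self.confidence = max(0.0, self.confidence - self.decay)

    belief = EnemyBelief()
    belief.on_scout_report((42, 17))
    for _ in range(5):
        belief.tick()
    print(belief.last_known_position, round(belief.confidence, 1))   # (42, 17) 0.5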

Anyway, the point of all of this is that, while claiming to use Sun Tzu’s “Art of War” makes for good “back of the box” reading, much of what he wrote is what we as game AI programmers do already. Is there merit in reading his work to garner a new appreciation of how to think? Sure. Is it the miraculous godsend that it seems to be? Not likely.

In the meantime, marketing fluff aside, I look forward to seeing how it all plays out (so to speak) in the latest Total War installment. (Looks like I might get a peek at E3 next week anyway.)

Fritz Heckel’s Reactive Teaming

Tuesday, May 25th, 2010

Fritz Heckel, a PhD student in the Games + Learning Group at UNC Charlotte, posted a video (below) on the research he has been doing under the supervision of G. Michael Youngblood. He has been working on using subsumption architectures to create coordination among multiple game agents.

When the video first started, I was a bit confused in that he was simply explaining an FSM. However, when the first character shared a state with the second one, I was a little more interested. Still, this isn’t necessarily the highlight of the video. As more characters were added, they split the goal of looking for a single item amongst themselves by dividing up the search space.

This behavior certainly could be used in games… for example, with guards searching for the player. However, this is simply solved using other architectures. Even something as simple as influence mapping could handle this. In fact, Damián Isla’s occupancy maps could be tweaked accordingly to allow for multiple agents in a very life-like way. I don’t know what Fritz is using under the hood, but I have to wonder if it isn’t more complicated.
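
As a point of comparison, here is a toy version of splitting one search goal across several agents without any special architecture at all; this is not Heckel’s subsumption implementation, just the “simpler solution” being alluded to. The unsearched rooms are handed out round-robin so no two guards duplicate effort.

    # Toy version of splitting one search goal across several agents
    # (not Heckel's subsumption implementation).

    def divide_search(agents, rooms):
        """Assign each unsearched room to exactly one agent, round-robin."""
        assignments = {agent: [] for agent in agents}
        for index, room in enumerate(rooms):
            assignments[agents[index % len(agents)]].append(room)
        return assignments

    guards = ["guard_1", "guard_2", "guard_3"]
    rooms = ["lobby", "armory", "lab", "server room", "stairwell"]
    print(divide_search(guards, rooms))
    # {'guard_1': ['lobby', 'server room'], 'guard_2': ['armory', 'stairwell'],
    #  'guard_3': ['lab']}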

Obviously, his searching example was just a simple one. He wasn’t setting out to design something that allowed agents to share a searching goal, per se. He was creating an architecture for cooperation. This, too, has been done in a variety of ways. Notably, Jeff Orkin’s GOAP architecture from F.E.A.R. did a lot of squad coordination that was very robust. Many sports simulations do cooperation — but that tends to be more playbook-driven. Fritz seems to be doing it on the fly without any sort of pre-conceived plan or even pre-known methods by the eventual participants.

In a way, it seems that the goal itself is somewhat viral from one agent to the next. That is, one agent in effect explains what it is that he needs the others to do and then parcels the work out accordingly. From a game standpoint, it seems that this is an unnecessary complication. Since most of the game agents would be built on the same codebase, they would already have the knowledge of how to do a task. At this point, it would simply be a matter of having one agent tell the other “I need this done,” so that the appropriate behavior gets switched on. And now we’re back to Orkin’s cooperative GOAP system.
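
A quick sketch of that simpler alternative (invented names, no relation to Orkin’s actual code): every agent already has the behaviors in its shared codebase, so coordination reduces to one agent broadcasting “I need this done” and the first capable agent switching the matching behavior on.

    # Sketch of the simpler alternative described above (invented names, not
    # Orkin's actual code).

    class Agent:
        def __init__(self, name, behaviors):
            self.name = name
            self.behaviors = behaviors   # shared codebase: everyone knows these

        def handle_request(self, task):
            if task in self.behaviors:
                print(f"{self.name} starts '{task}'")
                return True
            return False

    def request_help(task, others):
        """Ask down the line until someone takes the task."""
        return any(agent.handle_request(task) for agent in others)

    squad = [Agent("alpha", {"search", "guard"}),
             Agent("bravo", {"search", "flank"})]
    request_help("flank", squad)   # bravo switches on its flanking behavior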

On the whole, a subsumption architecture is an odd choice. Alex Champandard of AIGameDev pointed out via Twitter:

@fwph Who uses subsumption for games these days though? Did anyone use it in the past for that matter?

That’s an interesting point. I have to wonder if, as is sometimes the case with academic research, it isn’t a matter of picking a tool first and then seeing if you can invent a problem to solve with it. To me, a subsumption architecture seems like it is simply the layered approach of an HFSM married with the modularity of a planner. In fact, there has been a lot of buzz in recent years about hierarchical planning anyway. What are the differences… or the similarities, for that matter?

Regardless, it is an interesting, if short, demo. If this is what he submitted to present at AIIDE this fall, I will be interested in seeing more of it.

F.E.A.R. sequel promises "visual density"

Wednesday, January 30th, 2008

I noticed this GamePro blurb about the upcoming sequel to F.E.A.R. Here’s an excerpt…

“The most obvious difference that will hit the player right away is in the visual density of the world,” said Mulkey. “F.E.A.R. looked really great, but where F.E.A.R. would have a dozen props in a room to convey the space, Project Origin will have five times that much detail.

“Of course, this will only serve to further ratchet up that ‘chaos of combat’ to all new levels with more breakables, more debris, more stuff to fly through the air in destructive slow motion beauty.”

OK… I can dig that. One thing I noticed as I played through F.E.A.R. is that things were kinda sparse. (I really got tired of seeing the same potted cactus, too.)

The part that I am curious about, however, is this:

… Mulkey says improved enemy behavior is at the top of the list.

“We are teaching the enemies more about the environment and new ways to leverage it, adding new enemy types with new combat tactics, ramping up the tactical impact of our weapons, introducing more open environments, and giving the player the ability to create cover in the environment the way the enemies do,” he says.

Now that is the cool part. When the enemies in the original moved the couches, tables, bookshelves, etc. it was cool… but rather infrequent. I was always expecting them to do more with it. If they are both adding objects to the environment and then “teaching” the agents to actually use those objects, we may see a level of environment interactivity that we’ve never experienced before.

The cool thing about their planning AI structure is that there isn’t a completely ridiculous ramp-up in the complexity of the design. All one need do is tag an object to indicate that it can be used in a certain way and it gets included in the mix. On the other hand, having more objects to use and hide behind does increase the potential decision space quite a bit. It’s like how the decision tree in chess is far larger than that of Tic-tac-toe because there are so many more options. The good news is that the amount of emergent behavior will go through the roof. The bad news is that it will hit your processor pretty hard. Expect the game to be a beast to run on a PC.

I certainly am looking forward to mucking about with this game!

The AI of F.E.A.R.

Thursday, November 15th, 2007


Once again, Alex Champandard at AIGameDev.com has posted another entry in his Reviews section. This review is on the very novel and groundbreaking AI of the FPS game, F.E.A.R.

As he typically does in his technical reviews, he specifically identifies 29 tricks to use in games. As with “The Sims”, a major component is that of “smart objects”, and how the AI agents in the game use those objects is very impressive. I’ve only played the downloadable demo of F.E.A.R., but I was quickly impressed by how the agents appeared to work together and with their environment. I would like to play the game further and then re-read this technical review.

As always, make a point of looking at the references section of his post. That is where the meat is.
