IA on AI
Posts Tagged ‘influence maps’

Sun Tzu as a Design Feature?

Saturday, June 5th, 2010

The Creative Assembly, creator of the Total War series, has announced the development of Shogun 2, the latest in its line of acclaimed RTS games. While the Total War franchise has a 10-year history and is fairly well known for its AI, this blurb from their web site has spread through the web like an overturned ink well:

Featuring a brand new AI system inspired by the scriptures that influenced Japanese warfare, the millennia old Chinese “Art of War”, the Creative Assembly brings the wisdom of Master Sun Tsu to Shogun 2: Total War. Analysing this ancient text enabled the Creative Assembly to implement easy to understand yet deep strategical gameplay.

Sun Tzu‘s “The Art of War” has been a staple reference tome since he penned it (or brushed it… or whatever) in the 6th century B.C. It’s hard to find many legends that have made it for over 20 centuries. Its applications have been adapted in various ways to go beyond war to arenas such as business and politics. Suffice to say that “The Art of War” lives on as “things that just make sense”.

The problem I have here is that this seems to be more of a marketing gimmick than anything. After all, most of what Sun Tzu wrote should, in various forms, already be in game AI anyway.  To say Sun Tzu’s ideas are unique to him and would never have been considered without his wisdom is similar to saying that no one thought that killing was a bad idea until Moses wandered down the hill with “Thou Shalt Not Kill” on a big ol’ rock. No one stood around saying, “Gee… ya think?” Likewise, Sun Tzu’s advice about “knowing your enemy” is hardly an earth-shattering revelation.

Certainly, there is plenty of game AI out there that could have benefited from a quick read of a summary of Art of War. Things like “staying in cover and waiting for the enemy to attack you” come to mind. Of course, in the game world, we call that “camping” (as an individual) or “turtling” (as a group). I can imagine a spirited argument as to whether a camping/turtling AI is necessarily What Our Players Want™, however. It certainly beats the old “Doom model” of “walk straight towards the enemy”.

And what about the Sun Tzu concept of letting your two enemies beat the snot out of each other before you jump in? (I believe there are translations that yielded “dog shit” rather than “snot” but the meaning is still clear.) If you are in an RTS and one enemy just sits and waits for the other one to whack you around a little bit, it’s going to look broken. On the other hand, I admit to doing that in free-for-all Starcraft matches… because it is a brutal tactic!

The problem I have with their claim, however, is that there are many concepts in the Art of War that we already do use in game AI. By looking at Sun Tzu’s chapter headings (or whatever he called them) we can see some of his general ideas:

For ease of reference, I pillage the following list from Wikipedia:

  1. Laying Plans/The Calculations
  2. Waging War/The Challenge
  3. Attack by Stratagem/The Plan of Attack
  4. Tactical Dispositions/Positioning
  5. Energy/Directing
  6. Weak Points & Strong/Illusion and Reality
  7. Maneuvering/Engaging The Force
  8. Variation in Tactics/The Nine Variations
  9. The Army on the March/Moving The Force
  10. The Attack by Fire/Fiery Attack
  11. The Use of Spies/The Use of Intelligence

Going into more detail on each of them, we can find many analogues to existing AI practices:

Laying Plans/The Calculations explores the five fundamental factors (and seven elements) that define a successful outcome (the Way, seasons, terrain, leadership, and management). By thinking, assessing and comparing these points you can calculate a victory, deviation from them will ensure failure. Remember that war is a very grave matter of state.

It almost seems too easy to cite planning techniques here because “plans” is in the title. I’ll go a step further, then, and point out that by collecting information and assessing the relative merits of each option, you can determine potential outcomes or select the correct course of action. This is a common technique in AI decision-making calculations. Even the lowly min/max procedure is, in essence, simply comparing various potential paths through the state space.
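To make the comparison concrete, here is a minimal min/max sketch that compares paths through the state space exactly as described above. All of the function names and the toy game in the usage note are illustrative, not drawn from any particular engine.

```python
# Minimal minimax sketch: score each leaf of the state space and pick
# the move that maximizes our worst-case outcome. The game itself is
# supplied via callbacks, so this is generic and purely illustrative.

def minimax(state, depth, maximizing, evaluate, get_moves, apply_move):
    moves = get_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                      evaluate, get_moves, apply_move) for m in moves)
    return max(scores) if maximizing else min(scores)
```

As a usage example, in a toy game where the state is a number, each move adds 1 or 2, and the evaluation is the number itself, a depth-2 search from 0 yields 3: the maximizer’s best move anticipating the minimizer’s reply.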

Waging War/The Challenge explains how to understand the economy of war and how success requires making the winning play, which in turn, requires limiting the cost of competition and conflict.

This one speaks even more to the min/max approach. The phrase “limiting the cost of competition and conflict” expresses the inherent economic calculations that min/max is based on. That is, I need to get the most bang for my buck.

Attack by Stratagem/The Plan of Attack defines the source of strength as unity, not size, and the five ingredients that you need to succeed in any war. In order of importance attack: Strategy, Alliances, Army, lastly Cities.

Any coordinating aspects of the AI forces fall under this category. For example, the hierarchical structure of units into squads and ultimately armies is part of that “unity” aspect. Very few RTS games send units into battle as soon as they are created. They also don’t go off and do their own thing. If you have 100 units going to 100 places, you aren’t going to have the strength of 100 units working as a collection. This has been a staple of RTS games since their inception.

Tactical Dispositions/Positioning explains the importance of defending existing positions until you can advance them and how you must recognize opportunities, not try to create them.

Even simply including cover points in a shooter game can be thought of as “defending existing positions”. More importantly, individual or squad tactics that do leapfrogging, cover-to-cover movement are something that has been addressed in various ways for a number of years. We see this not only in FPS games (e.g. F.E.A.R.), but even in some of the work that Chris Jurney did originally in Company of Heroes. Simply telling a squad to advance to a point didn’t mean they would continue on heedless of their peril. Even while not under fire, they would do a general cover-to-cover movement. When engaged in combat, however, there was a very obvious and concerted effort to move up only when the opportunity presented itself.

This point can be worked in reverse as well. The enemies in Halo 3, as explained by Damián Isla in his various lectures on the subject, defend a point until they can no longer reasonably do so and then fall back to the next defensible point. This is a similar concept to the “advance” model above.

Suffice to say, whether it be advancing opportunistically or retreating prudently, this is something that game AI is already doing.
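As a toy illustration of that defend-until-untenable, then-fall-back behavior (not Bungie’s actual implementation; the threshold and structure here are assumptions), the core decision can be as small as:

```python
# Sketch of "hold this point until it is no longer reasonable, then
# retreat to the next defensible point". The threshold and the ordered
# list of fallback points are illustrative assumptions.

FALL_BACK_THRESHOLD = 0.3  # fraction of squad strength remaining

def choose_position(fallback_points, current_index, squad_strength):
    """Return the index of the point the squad should hold."""
    if squad_strength < FALL_BACK_THRESHOLD and current_index + 1 < len(fallback_points):
        return current_index + 1  # retreat to the next defensible point
    return current_index          # keep holding this one
```

Advancing opportunistically is the same test run in the other direction: move up a position when the local situation (strength, suppression, whatever the game measures) crosses a favorable threshold.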

Energy/Directing explains the use of creativity and timing in building your momentum.

This one is a little more vague simply because of the brevity of the summary on Wikipedia. However, we are all well aware of how some games have diverged from the simple and stale “aggro” models that were the norm 10-15 years ago.

Weak Points & Strong/Illusion and Reality explains how your opportunities come from the openings in the environment caused by the relative weakness of your enemy in a given area.

Identifying the disposition of the enemy screams of influence mapping—something that we have been using in RTS games for quite some time. Even some FPS and RPG titles have begun using it. Influence maps have been around for a long time and their construction and usage are well documented in books and papers. Not only do they use the disposition of forces as suggested above, but many of them have been constructed to incorporate environmental features as Mr. Tzu (Mr. Sun?) entreats us to do.
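For the curious, a bare-bones influence map can be built in a handful of lines. This is a generic sketch with illustrative parameters (falloff, radius), not any particular title’s implementation:

```python
# Minimal influence-map sketch on a grid: each unit deposits influence
# at its cell, decaying with distance. Positive values mean friendly
# control, negative mean enemy control.

def build_influence_map(width, height, units, falloff=0.7, radius=3):
    """units: list of (x, y, strength); use strength < 0 for enemies."""
    grid = [[0.0] * width for _ in range(height)]
    for ux, uy, strength in units:
        for y in range(max(0, uy - radius), min(height, uy + radius + 1)):
            for x in range(max(0, ux - radius), min(width, ux + radius + 1)):
                dist = abs(x - ux) + abs(y - uy)  # Manhattan distance
                if dist <= radius:
                    grid[y][x] += strength * (falloff ** dist)
    return grid
```

Terrain features can be incorporated the same way, by depositing influence (or blocking its spread) from environmental cells rather than units.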

Maneuvering/Engaging The Force explains the dangers of direct conflict and how to win those confrontations when they are forced upon you.

Again, this one is a bit vague. Not sure where to go there.

Variation in Tactics/The Nine Variations focuses on the need for flexibility in your responses. It explains how to respond to shifting circumstances successfully.

This is an issue that game AI has not dealt with well in the past. If you managed to disrupt a build order for an RTS opponent, for example, it might get confused. Also AI was not always terribly adaptive to changing circumstances. To put it in simple rock-paper-scissors terms, if you kept playing rock over and over, the AI wouldn’t catch on and play paper exclusively. In fact, it might still occasionally play scissors despite the guaranteed loss to your rock.
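The kind of adaptation the AI was missing can be sketched trivially: track the player’s move frequencies and counter the habit. Everything here is invented for illustration:

```python
# A trivially adaptive rock-paper-scissors opponent: remember what the
# player tends to throw and play the counter to their most common move.

from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class AdaptiveRPS:
    def __init__(self):
        self.history = Counter()

    def observe(self, player_move):
        self.history[player_move] += 1

    def choose(self):
        if not self.history:
            return "rock"  # arbitrary opening move
        most_common = self.history.most_common(1)[0][0]
        return BEATS[most_common]  # counter the player's habit
```

A player who plays rock over and over will, after a few rounds, see nothing but paper, which is exactly the behavior the older AI failed to exhibit.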

Lately, however, game AI has been far more adaptive to situations. The use of planners, behavior trees, and robust rule-based systems, for example, has allowed for far more flexibility than the more brittle FSMs allowed for. It is much harder to paint an AI into a corner from which it doesn’t know how to extricate itself. (Often, with the FSM architecture, the AI wouldn’t even realize it was painted into a corner at all and continue on blissfully unaware.)

The Army on the March/Moving The Force describes the different situations inf them.

[editorial comment on the above bullet point: WTF?]

I’m not sure to what the above refers, but there has been a long history of movement-based algorithms. Whether it be solo pathfinding, group movement, group formations, or local steering rules, this is an area that is constantly being polished.

The Attack by Fire/Fiery Attack explains the use of weapons generally and the use of the environment as a weapon specifically. It examines the five targets for attack, the five types of environmental attack, and the appropriate responses to such attack.

For all intents and purposes, fire was the only “special attack” that they had in 600 BC. It was their BFG, I suppose. Extrapolated out, this is merely a way of describing when and how to go beyond the typical melee and missile attacks. While not perfect, actions like spell-casting decisions in an RPG are not terribly complicated to make. Also, by tagging environmental objects, we can allow the AI to reason about their uses. One excellent example is how the agents in F.E.A.R. would toss over a couch to create a cover point. That’s using the environment to your advantage through a special (not typical) action.
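Tagging environmental objects so the AI can reason about them can be as simple as attaching affordance tags and querying them. The tags and names below are invented for illustration; they are not F.E.A.R.’s actual system:

```python
# Sketch of affordance tagging: world objects carry tags describing
# what can be done with them, and the AI queries for opportunities
# (e.g., a flippable couch becomes an improvised cover point).

class WorldObject:
    def __init__(self, name, tags):
        self.name = name
        self.tags = set(tags)

def find_improvised_cover(objects):
    """Return objects an agent could convert into cover."""
    return [o for o in objects
            if "flippable" in o.tags and "heavy" not in o.tags]
```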

The Use of Spies/The Use of Intelligence focuses on the importance of developing good information sources, specifically the five types of sources and how to manage them.

The interesting point here is that, given that our AI already has the game world at its e-fingertips, we haven’t had to accurately simulate the gathering of intelligence information. That has changed in recent years as the technology has allowed us to burn more resources on the problem. We now regularly simulate the AI piercing the Fog of War through scouts, etc. It is only a matter of time and tech before we get even more detailed in this area. Additionally, we will soon be able to model the AI’s belief of what we, the player, know of its disposition. This allows for intentional misdirection and subterfuge on the part of the AI. Now that will be fun!
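A first step toward that kind of modeling is to have the AI remember last-seen positions instead of reading true positions from the game state. This sketch, with an assumed memory span, shows the idea:

```python
# Sketch of simulated intelligence-gathering: the AI only "knows" a
# unit's position if a scout has seen it recently. The memory span is
# an illustrative tuning value.

class BeliefModel:
    def __init__(self):
        self.last_seen = {}  # unit_id -> (position, timestamp)

    def observe(self, unit_id, position, time):
        self.last_seen[unit_id] = (position, time)

    def believed_position(self, unit_id, now, memory_span=30.0):
        entry = self.last_seen.get(unit_id)
        if entry is None:
            return None  # never sighted
        position, time = entry
        if now - time > memory_span:
            return None  # sighting is stale; the AI no longer "knows"
        return position
```

Modeling what the player believes about the AI is the same structure run in reverse: track which of your own units the player has plausibly sighted, and maneuver the ones they haven’t.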

Anyway, the point of all of this is that, while claiming to use Sun Tzu’s “Art of War” makes for good “back of the box” reading, much of what he wrote about, we as game AI programmers already do. Is there merit in reading his work to garner a new appreciation of how to think? Sure. Is it the miraculous godsend that it seems to be? Not likely.

In the meantime, marketing fluff aside, I look forward to seeing how it all plays out (so to speak) in the latest Total War installment. (Looks like I might get a peek at E3 next week anyway.)

Starcraft AI Exposed Through a "Bot Fight"

Sunday, April 13th, 2008

Wow. A dude by the name of Shamus Young wrote a great column on the AI of Starcraft on his blog, Twenty-Sided. In fact, it would be worthy of a Post-Play’em entry, and I almost toyed with writing this post over there. Anyway, he wrote a script that allows 7 AI players to battle it out, with the 8th slot left for the player, who controls no units but can observe the entire map. Often, he would let the game run overnight and see what had transpired come morning. His observations were rather interesting and served to do two things:

  1. Point out the various strengths and weaknesses of the different races
  2. Point out that the AI is largely a rule-based engine with not a lot of forethought, planning or strategic processing… and definitely no learning capabilities.

While, as always, I encourage you to read his article at his blog, I do want to break down a couple of his observations here.

At first I just set the difficulty to “normal”, but I found that the computer players were far too likely to consume all the resources on the map, go broke, and then just sit there. I’d start a game before going to bed, and when I came back in the morning I’d find the battle was down to three sides who couldn’t make any fighting units. I changed the difficulty to “Insane”, which auto-cheats by giving itself 2,000 minerals and gas anytime it goes broke, meaning the thing is always rolling in resources.

It is my opinion that this is a major failing of the game. He points out that on the “insane” difficulty level the AI simply gives itself more resources if it runs out. Because of that, there is no “end game” scenario that the computer has to plan for. There is no timer ticking down for it. However, there certainly is for the player. You need to make your move before your resources go dry. Yet the computer is still playing “as if” the resources are still important. Shamus goes on…

I dislike this auto-cheat for a number of reasons, mostly because it negates a lot of the strategy in the game. None of the players can go broke, but the AI still plays as though resources were important. The only true way to knock a player out of the game is to annihilate their core base with all the critical buildings in it. Expansion bases are (mostly) worthless in a game like this. Yet the computer still builds and defends expansions (because that’s what it’s programmed to do) and still wastes time attacking enemy expansion bases. This introduces a bit of luck into the game: Who wins depends a lot on positioning. The AI tends to attack the nearest base, not the most important one. Sometimes the nearest base is the core base. Sometimes the nearest is an expansion which is pointlessly destroyed and rebuilt over and over again.

Now, that may be by design. I don’t know. From a mentality standpoint, the idea could be not to secure an expansion in order to secure the resources for yourself but rather to keep them from the player(s). That certainly makes sense. I doubt, however, that was the case. It is likely that the 2000 rule was simply a cheat to make things miserable for the player. *shrug* It’s an RTS staple, I suppose.

Moving on, the next section reeks of a scripted build order.

Without the unpredictable actions of a human involved in the match, the AI fights like clockwork. Games have a perceptible rhythm to them. They all build their first couple of buildings within a few seconds of each other. Even these variations are probably the result of minor changes in the layout of each base. If the bases were somehow shaped the same, the players would probably all build in perfect unison.

They build an initial attack force. On insane difficulty this attack force is huge – probably sixteen or so zealots or the given race equivalent. On normal difficulty the force is smaller, but the behavior is the same. They all leave the base at about the same time, and (as far as I can tell) attack a foe totally at random.

That means there may be some sort of rule-based system in place. Age Of Empires/Kings did this exact same thing. All the AIs would seem like they were working in sync, but that was because they all had the same timer settings (i.e. “If Timer = 1500, launch attack”).
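A timer-driven rule system of the sort described is easy to picture. The trigger times and action names below are invented for illustration:

```python
# Sketch of a timer-based rule table ("If Timer = 1500, launch attack").
# Every AI player evaluating the same table on the same clock produces
# exactly the eerie in-unison behavior described above.

ATTACK_RULES = [
    (1500, "launch_first_attack"),
    (4000, "launch_second_wave"),
    (8000, "build_expansion"),
]

def fired_actions(timer, already_fired):
    """Return actions whose trigger time has passed and not yet fired."""
    return [action for trigger, action in ATTACK_RULES
            if timer >= trigger and action not in already_fired]
```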

For the most part, I can understand doing it this way. Short of putting some sort of GOAP system in place to sequence individual actions into a cohesive plan, the only thing left would be a random build order… which would start to look silly. One thing that he didn’t mention here is whether the build order was always the same… especially in the middle phases. It would be nice to see that the computer doesn’t play an individual race the same way every time. That would be something that, right out of the gate, would start to mix things up a bit.

The interesting bit is “attack a foe at random”. I would say that is, again, just a simplicity cheat. It would be a little mathematically intensive to process it further. However, there may be something not accounted for here. Since all the AIs are in sync at that point of the game, if there IS a mathematical scoring system, they would all come out the same. Therefore, in case of ties, taking “the top of the list” may tend to look random. The obvious fix that was missing, though, is a proximity factor. This helps in two ways:

  • It’s quicker and more manageable to wage war with a foe that is nearby
  • It’s easier to recall forces into a defensive perimeter when they aren’t across the map
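A proximity factor like that can be folded into target selection as a simple distance discount on a threat score. The weights here are illustrative assumptions:

```python
# Sketch of proximity-weighted target selection: score each enemy by
# threat, discounted by distance from our base, and attack the best.
# This also breaks the ties that make "top of the list" look random.

import math

def pick_target(my_base, enemies, distance_weight=0.1):
    """enemies: list of (name, (x, y), threat). Highest score wins."""
    def score(enemy):
        _, (x, y), threat = enemy
        dist = math.hypot(x - my_base[0], y - my_base[1])
        return threat - distance_weight * dist
    return max(enemies, key=score)[0]
```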

Shamus touches on this:

Nobody has any defenses at this point, so when an enemy comes knocking, their survival is a matter of luck: The defender has an attack force elsewhere on the map. Is that force still intact, and can it be recalled before the place is destroyed? Are there enough of them left to save the place?

Another artifact of this “shoot anything that moves” strategy is that the AI is truly engaging all 6 of the opponents at all times rather than focusing on one or two of the most threatening (by strength or proximity).

A player can also do really well if they are attacked by two people at exactly the same time – the attackers end up wiping each other out and leaving the buildings alone.

As he notes, that turns messy and wasteful when attacking an enemy. Sure, when defending your own areas you will want to repel anyone who comes near; however, if you are across the map attacking a base, don’t waste time with the other forces that happen to be in the area unless they attack you as well. Really, this could be handled rather well with an influence map spreading from your buildings that determines whether you are in a “defensive” zone or not. If you are, attack anyone that comes in. If not, stick to the plan.
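That defensive-zone test could look something like the following sketch, where building influence decays with distance and a threshold (an assumed tuning value) marks home turf:

```python
# Sketch of the defensive-zone rule: only engage targets of opportunity
# when standing inside our own buildings' influence; otherwise stick to
# the plan. Falloff and threshold are illustrative assumptions.

DEFENSE_THRESHOLD = 0.5

def influence_at(pos, buildings, falloff=0.8):
    """Sum of per-building influence decaying with Manhattan distance."""
    return sum(falloff ** (abs(pos[0] - bx) + abs(pos[1] - by))
               for bx, by in buildings)

def should_engage(pos, buildings, under_attack):
    if under_attack:
        return True  # always fight back when directly attacked
    return influence_at(pos, buildings) >= DEFENSE_THRESHOLD
```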

The AI seems to wait until this initial attack force is nearly all dead before entering the next phase: It builds defenses and another attack force. Again, this force is sent out. Once gone, the AI begins trying to build its first expansion base.

The rest of the game is a series of escalating attack waves. As they add more buildings onto their main base they work their way up to air units,

More evidence of a strictly scripted/rule-based system. There are obvious triggers here. If this is truly the case, this method could fall on its face in a number of ways.

For example, if the AI’s attack force survives a battle with just enough units to NOT trigger the next build, but the force it defeated is now in the next build (because their unit count dropped below the threshold), the AI actually starts to fall behind. An exploit would be to NOT kill the AI’s units off but just run them around the map for a while. You can build at will, but the AI is still waiting for the trigger to happen.

One interesting point that he made was regarding the difference in how the AI uses Protoss Templars vs. a human compared to vs. another AI. Check this out…

When I fight the AI Protoss, it uses the Templar Psi Storm with murderous efficiency. I’ll have a tight group of units moving into its base when a Templar will appear in juuuust the right spot, drop a Psi Storm on my guys, and dart away before I can punish him for it. With a couple of Templar available he’ll play hit-and-run with me all day, and do tremendous, infuriating damage to my forces.

Against other AI, the Templar are bumbling comic goofs. They will drop Psi Storm on single enemy units and hit a bunch of their own guys in the process. They will blunder through fortified territory attempting to reach a unit deep inside, and get cut down before they even get close.

I’ve come to suspect that the AI cheats a bit and detects clusters of units which have been grouped by hotkey by human players. This is very naughty if it’s true. What’s worse is that peeking at how my hotkeys are set up seems to be central to its decision making. Deprived of that bit of cheating info, the Templar is helplessly stupid. Boo. (emphasis mine)

Now THAT is interesting. Rather than write up some sort of grouping algorithm or process a very dynamic influence map, use something that the human player is going to likely use. If he doesn’t use hot keys on his groups, he’s a n00b anyway and could stand the help of an ineffectual AI. It would be interesting to try and play without hot-keying your forces and see if the AI Protoss can use those Templars effectively or not.

His complaints about how poor the AI is at playing Terrans are well-founded:

I’m not sure why the thing is so bad at utilizing Terrans. Aside from the issues I mention above, it just seems less aggressive overall. It also has a penchant for building base defenses (bunkers, towers) in places where a base should go, effectively rendering a viable expansion useless. It will attempt to launch nukes without bothering to cloak the Ghost first. It will risk the painfully expensive Science Vessel in order to irradiate something of very low strategic value. It makes small numbers of all units instead of focusing on a few and using them well.

All of the above could be solved with some mathematical analysis of the units, the situations, the terrain, etc. It’s a shame that wasn’t done.

Now… keep in mind, this game was made 10 years ago. RTS AI was stagnant for the most part during that period. Age of Empires and Age of Kings had horribly inconsistent AI as well. It is my (less than perfect knowledge) opinion that RTS AI didn’t turn the corner until Empire Earth (Stainless Steel Studios) in 2001.

Anyway, Shamus’ project is fantastic and his analysis was very pleasant to read. I may have to find my Starcraft CD so I can download the zip file. I would be very interested to watch through these games and see what else I could see.

Content 2002-2018 by Intrinsic Algorithm L.L.C.
