An article was recently brought to my attention. The first one I looked at was Wolves May Not Need to be Smart to Hunt in Packs from Discover Magazine, although it seems to have originally come from New Scientist. Both of them cite a couple of other papers via links in their respective articles. You can get the gist of what they are talking about from the text of the articles alone, however.
The point is, they have discovered that the complex(-looking) pack hunting behaviors of wolves are actually not as complex and coordinated as we thought. With just a few very simple autonomous rules, they have duplicated this style of attack behavior in simulations. Specifically,
Using a computer model, researchers had each virtual “wolf” follow two rules: (1) move towards the prey until a certain distance is reached, and (2) when other wolves are close to the prey, move away from them. These rules cause the pack members to behave in a way that resembles real wolves, circling up around the animal, and when the prey tries to make a break for it, one wolf sometimes circles around and sets up an ambush, no communication required.
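Just to illustrate how little code those two rules take, here is a quick Python sketch. To be clear, this is my own toy version, not the researchers’ actual model, and all of the parameter values (attack radius, repulsion radius, speed) are invented:

```python
import math

def step_wolf(wolf, prey, other_wolves,
              attack_radius=2.0, repulsion_radius=3.0, speed=0.5):
    """Advance one wolf by one tick using the two rules from the article.

    Rule 1: move toward the prey until within attack_radius.
    Rule 2: move away from packmates that are too close.
    All parameter values are illustrative guesses.
    """
    x, y = wolf
    px, py = prey

    # Rule 1: close the distance to the prey.
    dx, dy = px - x, py - y
    dist = math.hypot(dx, dy) or 1e-9
    if dist > attack_radius:
        x += speed * dx / dist
        y += speed * dy / dist

    # Rule 2: repulsion from nearby packmates.
    for ox, oy in other_wolves:
        sep = math.hypot(x - ox, y - oy) or 1e-9
        if sep < repulsion_radius:
            x += speed * (x - ox) / sep
            y += speed * (y - oy) / sep
    return (x, y)

# Tiny simulation: three wolves converging on stationary prey at the origin.
prey = (0.0, 0.0)
wolves = [(10.0, 0.0), (0.0, 10.0), (-10.0, -10.0)]
for _ in range(50):
    wolves = [step_wolf(w, prey, [o for o in wolves if o is not w])
              for w in wolves]
# The wolves end up spread around the prey at roughly attack_radius,
# with no communication between them.
```

Run it and the pack “surrounds” the prey despite no wolf knowing anything about the others beyond a simple repulsion check; that is exactly the emergent look the researchers describe.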
The comment that brought it to my attention was that biologists “discover” something that AI programmers have known for decades: the idea of flocking. Going back to Craig Reynolds’ seminal Boids research (from the 1980s), we as AI programmers have known that simple rules can not only generate the look of complex behavior but that much of the complex behavior that exists in the world is actually the result of the same “simple rule” model. Even down to the cellular level in the human body (namely the human immune system), autonomous cellular behavior is driven by this mentality.
The key takeaway from this “revelation” about the wolves is not so much that wolves are not as clever as we thought, but rather that there is now legitimacy to using simpler AI techniques to generate emergent behavior. We aren’t “cheating” or cutting corners by using a simple rule-based flocking-like system to do our animal AI… we are, indeed, actually replicating what those animals are doing in the first place.
We could likely get far more mileage out of these techniques in the game space were it not for one major block — the trepidation that many developers feel about emergent behavior. For designers in particular, emergent behavior stemming from autonomous agents means giving up a level of authorial control. While authorial control is necessary and desired in some aspects of game design, there are plenty of places where it is not. By swearing off emergent AI techniques, we may be unnecessarily limiting ourselves and preventing a level of organic depth to our characters and, indeed, our game world.
Incidentally, emergent AI is not simply limited to the simple flocking-style rule-based systems that we are familiar with and that are discussed with regards to the wolves. Full-blown utility-based systems such as those that I talk about in my book are just an extension of this. The point being, we aren’t specifically scripting the behavior but rather defining meanings and relationships. The behavior naturally falls out of those rules. The Sims franchise is known for this. As a result, many people are simply fascinated to sit back and watch things unfold without intervention. The characters not only look like they are “doing their own thing” but also look like they are operating together in a community… just like the wolves are acting on their own and working as part of a team.
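For a flavor of what I mean by “defining meanings and relationships” rather than scripting behavior, here is a minimal utility-based selector in Python. This is a deliberately tiny sketch of the general idea, not The Sims’ actual system; the needs, actions, and numbers are all made up:

```python
def choose_action(needs, actions):
    """Pick the action with the highest utility score.

    needs:   dict of need -> urgency (0 = satisfied, 1 = desperate)
    actions: dict of action name -> dict of need -> how much it helps
    """
    def utility(effects):
        # Weight each effect by how urgent the corresponding need is.
        return sum(needs.get(n, 0.0) * amount for n, amount in effects.items())
    return max(actions, key=lambda a: utility(actions[a]))

# No "when hungry, eat" script anywhere: the behavior falls out of the
# relationship between need urgencies and action effects.
needs = {"hunger": 0.9, "energy": 0.2, "fun": 0.5}
actions = {
    "eat":   {"hunger": 1.0},
    "sleep": {"energy": 1.0},
    "play":  {"fun": 0.8},
}
best = choose_action(needs, actions)  # "eat" wins: hunger is most urgent
```

Change the numbers as the character’s day goes on and different behaviors emerge without anyone having authored a sequence.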
So take heart, my AI programmer friends and colleagues. Academic biologists may only now be getting the idea — but we’ve been heading down the right track for quite some time now. We just need to feel better about doing it!
Total War creator The Creative Assembly has announced the development of Shogun 2, the latest in its line of acclaimed RTS games. While the Total War franchise has a 10-year history and is fairly well-known for its AI, this blurb from their web site has spread through the web like an overturned ink well:
Featuring a brand new AI system inspired by the scriptures that influenced Japanese warfare, the millennia old Chinese “Art of War”, the Creative Assembly brings the wisdom of Master Sun Tsu to Shogun 2: Total War. Analysing this ancient text enabled the Creative Assembly to implement easy to understand yet deep strategical gameplay.
Sun Tzu’s “The Art of War” has been a staple reference tome since he penned it (or brushed it… or whatever) in the 6th century B.C. It’s hard to find many legends that have made it for over 20 centuries. Its applications have been adapted in various ways to go beyond war to arenas such as business and politics. Suffice to say that “The Art of War” lives on as “things that just make sense”.
The problem I have here is that this seems to be more of a marketing gimmick than anything. After all, most of what Sun Tzu wrote should, in various forms, already be in game AI anyway. To say Sun Tzu’s ideas are unique to him and would never have been considered without his wisdom is similar to saying that no one thought that killing was a bad idea until Moses wandered down the hill with “Thou Shalt Not Kill” on a big ol’ rock. No one stood around saying, “Gee… ya think?” Likewise, Sun Tzu’s advice about “knowing your enemy” is hardly an earth-shattering revelation.
Certainly, there is plenty of game AI out there that could have benefited from a quick read of a summary of Art of War. Things like “staying in cover and waiting for the enemy to attack you” come to mind. Of course, in the game world, we call that “camping” (as an individual) or “turtling” (as a group). I can imagine a spirited argument as to whether a camping/turtling AI is necessarily What Our Players Want™, however. It certainly beats the old “Doom model” of “walk straight towards the enemy”.
And what about the Sun Tzu concept of letting your two enemies beat the snot out of each other before you jump in? (I believe there are translations that yielded “dog shit” rather than “snot”, but the meaning is still clear.) If you are in an RTS and one enemy just sits and waits for the other one to whack you around a little bit, it’s going to look broken. On the other hand, I admit to doing exactly that in free-for-all Starcraft matches… because it is a brutal tactic!
The problem I have with their claim, however, is that there are many concepts in the Art of War that we already do use in game AI. By looking at Sun Tzu’s chapter headings (or whatever he called them) we can see some of his general ideas:
For ease of reference, I pillage the following list from Wikipedia:
Laying Plans/The Calculations
Waging War/The Challenge
Attack by Stratagem/The Plan of Attack
Tactical Dispositions/Positioning
Energy/Directing
Weak Points & Strong/Illusion and Reality
Maneuvering/Engaging The Force
Variation in Tactics/The Nine Variations
The Army on the March/Moving The Force
The Attack by Fire/Fiery Attack
The Use of Spies/The Use of Intelligence
Going into more detail on each of them, we can find many analogues to existing AI practices:
Laying Plans/The Calculations explores the five fundamental factors (and seven elements) that define a successful outcome (the Way, seasons, terrain, leadership, and management). By thinking, assessing and comparing these points you can calculate a victory, deviation from them will ensure failure. Remember that war is a very grave matter of state.
It almost seems too easy to cite planning techniques here because “plans” is in the title. I’ll go a step further, then, and point out that by collecting information and assessing the relative merits of each option, you can determine potential outcomes or select correct courses of action. This is a common technique in AI decision-making calculations. Even the lowly min/max procedure is, in essence, simply comparing various potential paths through the state space.
Waging War/The Challenge explains how to understand the economy of war and how success requires making the winning play, which in turn, requires limiting the cost of competition and conflict.
This one speaks even more to the min/max approach. The phrase “limiting the cost of competition and conflict” expresses the inherent economic calculations that min/max is based on. That is, I need to get the most bang for my buck.
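Since I keep invoking min/max, here is the classic procedure in skeletal Python form. The toy game at the bottom is invented purely to show it running; the function arguments stand in for whatever game representation you actually use:

```python
def minimax(state, depth, maximizing, successors, evaluate):
    """Classic min/max: walk the possible paths through the state space
    and back up the best achievable score for each side."""
    options = successors(state)
    if depth == 0 or not options:
        return evaluate(state)
    scores = [minimax(s, depth - 1, not maximizing, successors, evaluate)
              for s in options]
    return max(scores) if maximizing else min(scores)

# Invented toy game: a "move" turns n into n + 1 or n * 2.
# The maximizer wants a big number; the minimizer wants it small.
successors = lambda n: [n + 1, n * 2]
best = minimax(3, 2, True, successors, lambda n: n)
# From 3, max's best line is 3 -> 6 (min then picks 7 over 12), so best == 7
```

The “economy” shows up in the backed-up scores: the maximizer settles for the line with the best guaranteed payoff, not the flashiest one.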
Attack by Stratagem/The Plan of Attack defines the source of strength as unity, not size, and the five ingredients that you need to succeed in any war. In order of importance attack: Strategy, Alliances, Army, lastly Cities.
Any coordinating aspects of the AI forces fall under this category. For example, the hierarchical structure of units into squads and ultimately armies is part of that “unity” aspect. Very few RTS games send units into battle as soon as they are created. They also don’t go off and do their own thing. If you have 100 units going to 100 places, you aren’t going to have the strength of 100 units working as a collective. This has been a staple of RTS games since their inception.
Tactical Dispositions/Positioning explains the importance of defending existing positions until you can advance them and how you must recognize opportunities, not try to create them.
Even simply including cover points in a shooter game can be thought of as “defending existing positions”. More importantly, individual and squad tactics involving leapfrogging, cover-to-cover movement have been addressed in various ways for a number of years. We see this not only in FPS games (e.g. F.E.A.R.) but also in some of the work that Chris Jurney did originally in Company of Heroes. Simply telling a squad to advance to a point didn’t mean they would continue on heedless of their peril. Even while not under fire, they would do a general cover-to-cover movement. When engaged in combat, however, there was a very obvious and concerted effort to move up only when the opportunity presented itself.
This point can be worked in reverse as well. The enemies in Halo 3, as explained by Damián Isla in his various lectures on the subject, defend a point until they can no longer reasonably do so and then fall back to the next defensible point. This is a similar concept to the “advance” model above.
Suffice to say, whether it be advancing opportunistically or retreating prudently, this is something that game AI is already doing.
Energy/Directing explains the use of creativity and timing in building your momentum.
This one is a little more vague simply because of the brevity of the summary on Wikipedia. However, we are all well aware of how some games have diverged from the simple and stale “aggro” models that were the norm 10-15 years ago.
Weak Points & Strong/Illusion and Reality explains how your opportunities come from the openings in the environment caused by the relative weakness of your enemy in a given area.
Identifying the disposition of the enemy screams of influence mapping, something that we have been using in RTS games for quite some time. Even some FPS and RPG titles have begun using it. Influence maps have been around for a long time, and their construction and usage are well documented in books and papers. Not only do they use the disposition of forces as suggested above, but many of them have been constructed to incorporate environmental features as Mr. Tzu (Mr. Sun?) entreats us to do.
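For those who haven’t run into them, an influence map can be as simple as this Python sketch: each unit projects strength onto a grid, falling off with distance, and friendly and enemy contributions are summed. The linear falloff and all of the numbers here are arbitrary choices for illustration, not any particular shipped implementation:

```python
def influence_map(width, height, units, decay=0.5):
    """Build a grid influence map. units is a list of (x, y, strength)
    tuples; friendly strengths positive, enemy strengths negative.
    Influence falls off linearly with Manhattan distance."""
    grid = [[0.0] * width for _ in range(height)]
    for ux, uy, strength in units:
        for y in range(height):
            for x in range(width):
                dist = abs(x - ux) + abs(y - uy)
                grid[y][x] += strength * max(0.0, 1.0 - decay * dist)
    return grid

# Two friendly units vs. one stronger enemy on a 5x5 map.
grid = influence_map(5, 5, [(0, 0, 1.0), (1, 0, 1.0), (4, 4, -2.0)])
# Positive cells are ours, negative cells are theirs; the cells where
# enemy influence is weakest are exactly Sun Tzu's "openings".
```

Reading the map directly answers “where is the enemy weak?” without any unit-by-unit reasoning.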
Maneuvering/Engaging The Force explains the dangers of direct conflict and how to win those confrontations when they are forced upon you.
Again, this one is a bit vague. Not sure where to go there.
Variation in Tactics/The Nine Variations focuses on the need for flexibility in your responses. It explains how to respond to shifting circumstances successfully.
This is an issue that game AI has not dealt with well in the past. If you managed to disrupt a build order for an RTS opponent, for example, it might get confused. Also AI was not always terribly adaptive to changing circumstances. To put it in simple rock-paper-scissors terms, if you kept playing rock over and over, the AI wouldn’t catch on and play paper exclusively. In fact, it might still occasionally play scissors despite the guaranteed loss to your rock.
Lately, however, game AI has been far more adaptive to situations. The use of planners, behavior trees, and robust rule-based systems, for example, has allowed for far more flexibility than the brittle FSMs of old. It is much harder to paint an AI into a corner from which it doesn’t know how to extricate itself. (Often, with the FSM architecture, the AI wouldn’t even realize it was painted into a corner at all and would continue on blissfully unaware.)
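To make the rock-paper-scissors point concrete, here is about the smallest “adaptive” opponent you can write: it counts what you have played and counters your favorite move. A toy, obviously, but it is exactly the catch-on behavior the old brittle AIs lacked:

```python
import random
from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class AdaptiveRPS:
    """Counter the opponent's most frequent historical move."""
    def __init__(self):
        self.history = Counter()

    def observe(self, opponent_move):
        self.history[opponent_move] += 1

    def play(self):
        if not self.history:
            return random.choice(list(BEATS))   # no intel yet: guess
        favorite = self.history.most_common(1)[0][0]
        return BEATS[favorite]                  # play what beats it

ai = AdaptiveRPS()
for _ in range(10):   # the player spams rock...
    ai.observe("rock")
move = ai.play()      # ...and the AI answers with paper every time
```

Ten lines of bookkeeping is all it takes to stop guaranteed losses to a one-trick player.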
The Army on the March/Moving The Force describes the different situations inf them.
[editorial comment on the above bullet point: WTF?]
I’m not sure to what the above refers, but there has been a long history of movement-based algorithms. Whether it be solo pathfinding, group movement, group formations, or local steering rules, this is an area that is constantly being polished.
The Attack by Fire/Fiery Attack explains the use of weapons generally and the use of the environment as a weapon specifically. It examines the five targets for attack, the five types of environmental attack, and the appropriate responses to such attack.
For all intents and purposes, fire was the only “special attack” that they had in 600 BC. It was their BFG, I suppose. Extrapolated out, this is merely a way of describing when and how to go beyond the typical melee and missile attacks. While not perfect, actions like spell-casting decisions in an RPG are not terribly complicated to make. Also, by tagging environmental objects, we can allow the AI to reason about their uses. One excellent example is how the agents in F.E.A.R. would toss over a couch to create a cover point. That’s using the environment to your advantage through a special (not typical) action.
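The environment-tagging idea can be sketched very simply. The tag names and objects below are invented; the point is just that the AI queries affordances rather than knowing about couches specifically:

```python
class EnvObject:
    """An environment object tagged with the interactions it affords."""
    def __init__(self, name, tags):
        self.name = name
        self.tags = set(tags)

def find_cover_options(objects):
    # Existing cover, plus anything that can be flipped into cover.
    return [o.name for o in objects
            if "cover" in o.tags or "flippable" in o.tags]

room = [
    EnvObject("couch", {"flippable", "sittable"}),
    EnvObject("crate", {"cover"}),
    EnvObject("lamp",  {"throwable"}),
]
options = find_cover_options(room)  # ['couch', 'crate']
```

Add a new flippable object to the world and the AI can use it with zero new code, which is the whole appeal of the technique.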
The Use of Spies/The Use of Intelligence focuses on the importance of developing good information sources, specifically the five types of sources and how to manage them.
The interesting point here is that, given that our AI already has the game world at its e-fingertips, we haven’t had to accurately simulate the gathering of intelligence information. That has changed in recent years as the technology has allowed us to burn more resources on the problem. We now regularly simulate the AI piercing the Fog of War through scouts, etc. It is only a matter of time and tech before we get even more detailed in this area. Additionally, we will soon be able to model the AI’s belief of what we, the player, know of its disposition. This allows for intentional misdirection and subterfuge on the part of the AI. Now that will be fun!
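A first step toward that belief modeling is simply tracking last-known positions with a freshness window instead of reading the enemy’s true location. This is a bare-bones sketch with invented field names:

```python
class EnemyBelief:
    """Track what the AI believes about an enemy unit rather than
    reading its true position out of the game state."""
    def __init__(self):
        self.last_seen_pos = None
        self.last_seen_tick = None

    def observe(self, pos, tick):
        self.last_seen_pos = pos
        self.last_seen_tick = tick

    def estimated_pos(self, tick, max_age=20):
        # Trust the sighting only while it's fresh; afterwards the AI
        # genuinely doesn't know and must scout again.
        if self.last_seen_pos is None or tick - self.last_seen_tick > max_age:
            return None
        return self.last_seen_pos

belief = EnemyBelief()
belief.observe((3, 4), tick=100)
fresh = belief.estimated_pos(110)   # (3, 4): sighting still fresh
stale = belief.estimated_pos(200)   # None: the intel has gone stale
```

Once the AI acts only on beliefs like these, honest scouting, and honest misdirection, come along for free.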
Anyway, the point of all of this is that, while claiming to use Sun Tzu’s “Art of War” makes for good “back of the box” reading, much of what he wrote about is what we as game AI programmers already do. Is there merit in reading his work to garner a new appreciation of how to think? Sure. Is it the miraculous godsend that it seems to be? Not likely.
In the meantime, marketing fluff aside, I look forward to seeing how it all plays out (so to speak) in the latest Total War installment. (Looks like I might get a peek at E3 next week anyway.)
This is something that I have been seeing for a long time now. I’m sure we all have… and for good reason: a lot of times it is true. Despite what I said in my 7 minutes at the AI Devs Rant at the 2010 GDC about how reviewers like to bitch about bad AI, unfortunately too often it is justified. The juxtaposition of a game that is otherwise well executed with poorly executed AI is a bit more tragic, however. That doesn’t point to a smaller-budget game; it’s an example of a well-funded project with either a priority problem or a talent problem.
The storyline is thinly tied together and barely cohesive.
OK… admittedly he gripes about the storyline. A lot of times that’s simply because you are doing a sequel of an established IP. Moving on… (emphasis mine):
The graphics are beautiful, and the environments are varied. Lush jungle or swamp areas appear suddenly in the midst of the glacial ice fields, with waterfalls and towering trees, and then there’s parched desertscapes or weather-battered coastal regions.
Not only is the landscape new and varied, but the Akrid, the natural inhabitants of the planet E.D.N. III, are back with new shapes, new designs and a whole new set of (usually grumpy) attitudes. The scale of the largest of the creatures, the “Cat-G type” is impressive to say the least. Like, God of War III-scale impressive.
The voice acting, for the most part, is well-done and up to the task. The music itself, however, is excellent. Swelling orchestral pieces accentuate the action sequences, and give the game an epic feel that would have been missing if they had used some “generic rock-style track #2” soundtrack. Well done, Capcom.
Ok… so we have this love fest on the environment, the character modeling, the voice acting, and the soundtrack (he even mentions later that he would love to have a CD of the soundtrack). So what can possibly be wrong? Here’s a montage (again, emphasis mine):
The game is designed from the ground up to take advantage of four-player cooperative play. And heaven help you if you don’t have friends to play the game with. As 1UP.com states, “Brain-dead, unhelpful, and unresponsive, the computer-controlled team members are a liability rather than a resource.”
You truly, truly need a human companion or three to completely appreciate whatÂ Lost Planet 2 has to offer. For example, during one big boss battle, there are four separate tasks that need to be completed simultaneously. With four humans working together, this bit of teamwork wouldn’t be too difficult. Unfortunately, if you’re playing solo with AI teammates, you’re pretty much left to a snarled tangle of frustration and trial-and-error.
The level design is adequate, but I think that too much emphasis was put on the multiplayer portion, and not enough consideration for the solo player who will be reduced to using the criminally stupid AI companions.
Damn… so we have a Left 4 Dead-style game that is based around the idea that you have to cooperate with your teammates in order to not only survive but to actually complete mandatory parts of the campaign… and yet they don’t provide you with the companions that can do so.
In the era of single-player, shooting gallery-style games, having sub-par AI wasn’t too bad. After all, our fallback mantra was “the enemy won’t live long enough to show off his AI anyway.” I knew that was just a bad crutch when we were all saying it. The truth is, early on we were all letting our enemies die quickly because we lacked the capability to make them smarter! We were actually relieved that our characters were dying quickly. That managed to fit well with our other AI mantra: “Don’t let the AI do anything stupid.” Unfortunately, the chance of the AI doing something stupid rose exponentially with the amount of time that it was visible (and alive).
Now that we are expecting AI teammates or squadmates or companions to come along for the ride, we have a much harder challenge. (Back in 2008, I wrote about this in my then-regular column on AIGameDev in an article titled “The Art of AI Sidekicks: Making Sure Robin Doesn’t Suck“.) The problem is, we are spending all this money on graphics, animation, voice actors, musicians… and leaving our AI to fester like an open sore. Certainly, it takes more money and time to develop really good AI than it does to produce a soundtrack. (I can speak to this, by the way; I was a professional musician a long time ago and am perfectly comfortable with anything from writing, arranging, and recording multi-track electronic grooves to penning entire sweeping orchestral scores. But that was in a previous life.) Still, it seems like a little effort might be called for. After all, the necessity of multi-player was built into the game design from the start… the necessity of a lush soundtrack was not.
To the defense of game companies, however, I’m very aware that good AI people are exceedingly (and increasingly) hard to find. The focus of the industry has changed in the past few years such that companies are trying to do better. However, that often requires a lot more AI-dedicated manpower than they have. With many companies trying to find AI people all at once, the demand has really outstripped the supply. Some companies have had ads up for AI programmers for 6 to 9 months! They just aren’t out there.
So it isn’t always that the company doesn’t care or won’t spend the money on it. It’s often just the fact that AI is a very difficult problem that calls for a very deep skill set. Unfortunately, most of the game programs that exist really don’t even address game AI beyond “this is a state machine”. Academic AI programs are good for “real world AI” but don’t apply to the challenges that the game industry needs. Unfortunately, many academic AI institutions and their students don’t know this until they are rebuffed for suggesting very academia-steeped techniques that will fall flat in practice. (And no, a neural network would not have saved the AI in Lost Planet 2.)
So… in the meantime, here’s the suggestion: if your people don’t have the chops to make the required minimum AI, don’t design a game mechanic that needs that AI.
Of the top 5, I’m the most excited about an increase in sandbox games and emergent behaviors. Really, I see these two as almost interlinked. Sandbox games not only allow emergent behavior to proliferate – they almost require it to do so in order to keep immersion.
Likewise, interagent cooperation was another of the top 5 on the list. Again, this is something that I see as related to emergent behavior. If you leave your cooperation loosely defined rather than pre-scripted, you will see a lot of emergent behavior as a result.
I hope to get more of a feel for this very topic at the GDC roundtables and lectures next month. That is always a great way to take the pulse of the industry. Anyway, good stuff on the list.
I’ve been watching with interest as some of the recent game reviews have been (briefly) touching on the AI of the games. As a brief aside, it’s annoying that a reviewer can spend 2 paragraphs on the purty graphics and 2 sentences on the AI. Yet, you could shave 20% of the graphics quality and it wouldn’t hurt the gameplay experience too much. Add 20% to the AI and it may affect the gameplay a significant amount. Keep that in mind as you read these excerpts.
Matters aren’t helped by a number of other irritations. The unit pathfinding is dim-witted, with units frequently getting stuck, taking the long way round or sometimes just rotating in place.
That doesn’t sound promising at all. This is sad, too. The original Empire Earth was very noteworthy because of its AI. In fact, those in game AI circles know full well that many a whitepaper was written on some of the techniques that they used in the game. That being said, Mad Doc software, current custodian of the EE franchise, is headed up by Dr. Ian Lan Davis who has a doctorarte from Carnegie Mellon University in Artificial Intelligence and Robotics. That doesn’t seem to jive with the above observation about the unit AI. I haven’t played EE2 or EE3, but I remember hearing similar comments about EE2. I may just pick them both up to observe.
That is, when you’re not hiding in the shadows trying to be Sam Fisher or Solid Snake. Stealth tactics play an important role in the game, but in a gory game like Manhunt 2, this feels a little out of place. I mean, it’s obvious Daniel doesn’t want to get caught… It also doesn’t help that the A.I. is as dumb as a sack of bricks at times, meaning that if you just slightly slip into the shadows, it’s as if you’ve disappeared off the face of the Earth. And really, when you’ve got a bunch of people hunting you down to silence you, you’d figure they might try a little harder.
I thought this was interesting. For the past 4 or 5 years, there has been growing interest in sensory systems such as those exhibited in Thief and Splinter Cell. It seems (just from this quote) that Manhunt 2 is trying to do that as well. However, while analyzing light vs. shadow is an important aspect of stealth work, it is only part of the solution. Where the developers may have stopped short is building a bit of “urgency momentum” into their AI agents. That is, if you are truly searching for someone, you are specifically going to check the dark areas.
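That “urgency momentum” could be as simple as letting a guard’s alertness put a floor under effective visibility, so that a hunting guard still checks the shadows. The formula and every constant below are purely illustrative guesses on my part, not how any of these games actually did it:

```python
def detection_chance(light_level, distance, alertness):
    """Chance a guard notices the player this tick.

    light_level: 0 (pitch black) to 1 (fully lit)
    distance:    meters to the player
    alertness:   0 (idle patrol) to 1 (actively hunting)

    The key idea: an actively searching guard should not be blinded by
    shadow, so alertness keeps a floor under visibility even at
    light_level 0.
    """
    visibility = max(light_level, 0.4 * alertness)  # hunters check dark spots
    falloff = 1.0 / (1.0 + 0.1 * distance)          # harder to see far away
    return min(1.0, visibility * falloff)

# Idle guard vs. player in shadow: effectively blind.
idle = detection_chance(light_level=0.0, distance=5.0, alertness=0.0)
# Hunting guard, same shadow: still has a real chance to spot you.
hunting = detection_chance(light_level=0.0, distance=5.0, alertness=1.0)
```

One `max()` is the whole trick: shadows stay safe from bored guards but stop being a magic cloak once the hunt is on.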
This is something that it looks like Rockstar wanted to address… at least according to this review:
Coupled with the enhanced AI of hunters in Manhunt 2 will be the fact that even in the darkest of shadows, your character will not be totally safe. In the original Manhunt, often you found yourself safely concealed in darkness inches away from a hunter, which was a tad unrealistic. This is a thing of the past with Manhunt 2 as now, even with total darkness and complete stillness, hunters can still detect you and drag you out into the open for the kill.
Note that the above is future tense – that is what Rockstar said they were going to do. Take that with a large block of salt coming from any studio. Continuing from the same review however, is an interesting caveat…
The way Manhunt 2 will portray this is via a mini game of sorts – if you’re well hidden in darkness but a hunter suspects your presence, you will have to successfully complete a series of button presses to avoid detection. While this is not exactly the most innovative idea around, it should keep you on your toes knowing that no matter where you are, in Manhunt 2, you are never completely safe. Of course, detection won’t necessarily mean game over, but it will definitely make survival much harder.
The enemy AI is also mostly competent, working as a team and retreating when injured. However, tangos usually leave part of their body exposed when hiding behind cover, meaning skirmishes are seldom a lengthy affair.
This looks like generally good news. It’s always nice to see group behaviors rather than simply a collection of individual ones. I wish I knew what “working as a team” means, however. A lot of that can be faked. Are they providing cover fire while others are moving? Are they flanking you? There’s so much unsaid there.
Retreating when injured is such a simple thing to do – but I give it kudos because it’s an obvious – almost instinctual – behavior that designers seem to leave out more often than not.
The issue of using cover is a little harder to pull off properly. But I don’t want to fault the AI programmers on something that may very well have been a designer decision rather than the inability of the programmers. I will leave that alone.
But also, gone are the days of stupid zombies. The AI in the game is very well done. They will coordinate attacks, throw things at you, position themselves to corner you, basically all the things that can be done in real life.
This is in a similar vein to Kane & Lynch above. What do they mean by “coordinate attacks”? Still, this is promising to hear. I also like the idea of the AI “cornering you”. If that’s true, it shows that they are doing some analysis of the “terrain” to determine either choke points or exit routes. I would like to see if this is the case.
I left the last sentence in there for a reason – “all the things that can be done in real life.” While I doubt that is the case and chalk it up to oversimplification by the author, I do like the idea. After all, isn’t that what we are looking for in our game AI? Isn’t that the whole point?
Let’s hope we keep moving forward… and maybe eventually we will get there. I will do my part!