I was browsing through my Google alerts for “Game AI” and this jumped out at me. It was a review of the upcoming Ghost Recon: Future Soldier on Digital Trends (who, TBH, I hadn’t heard of until this). The only bit about the AI that I saw was the following paragraph:
The cover system is similar to many other games, but you can’t use it for long. The environments are partly destructible, and hiding behind a concrete wall will only be a good idea for a few moments. The enemy AI will also do everything it can to flank you, and barring that, they will fall back to better cover.
There is a sort of meta-reason I find this exciting. First, from a gameplay standpoint, having enemies that use realistic tactics makes for a more immersive experience. Second, on the meta level is the fact that this breaks from a meme that has been plaguing the industry for a while now. Every time someone suggested that enemies could (and actually should) flank the player, there was a rousing chorus of “but our players don’t want to be flanked! It’s not fun!”
“but our players don’t want to be flanked!”
This mentality had developed a sort of institutional momentum that seemed unstoppable for a while. Individuals, when asked, thought it was a good idea. Players, when blogging, used it as an example of how the AI was stupid. However, there seemed to be a faceless, nebulous design authority that people cited… “it’s not how we are supposed to do it!”
What are we supposed to do?
One of the sillier arguments I heard against having the enemy flank the player and pop him in the head is that “it makes the player mad”. I’m not arguing against the notion that the player should be mad at this… I’m arguing against the premise that “making the player mad” is altogether unacceptable.
“…it makes the player mad.”
In my lecture at GDC Austin in 2009 (“Cover Me! Promoting MMO Player Interaction Through Advanced AI” (pdf 1.6MB)), I pointed out that one of the reasons that people prefer to play online games against other people is the dynamic, fluid nature of the combat. There is a constant ebb and flow to the encounter with a relatively tight feedback loop. The enemy does something we don’t expect and we must react to it. We do something in response that they don’t expect and now they are reacting to us. There are choices in play at all times… not just yours, but the enemy’s as well. And yes, flanking is a part of it.
It builds tension in my body that is somewhat characteristic of combat.
In online games, if I get flanked by an enemy (and popped in the head), I get mad as well… and then I go back for more. The next time through, I am a little warier of the situation. I have learned from my prior mistake and am now more careful. It builds a tension in my body that, never having been in combat myself, I have to assume is somewhat characteristic of it. Not knowing where the next enemy is coming from is a part of the experience. Why not embrace it?
Something to fall back on…
One would assume some level of self-preservation in the mind of the enemy…
The “fall-back” mechanic is something that is well-documented through Damián Isla’s lectures on Halo 3. It gives a more realistic measure of “we’re winning” than simply mowing through a field of endless enemies. Especially in human-on-human combat where one would assume some level of self-preservation in the mind of the enemy, having them fall back instead of dying mindlessly is a perfect balance between the two often contradictory goals of “survival” and “achieving the goal”. It is this balance that makes the enemy feel more “alive” and even “human”.
If enemies simply fight to the death, the implication is that “they wanted to die”.
Often, if enemies simply fight to the death, the implication is that “they wanted to die”. Beating them, at that point, is like winning against your dad when you were a kid. You just knew that he was letting you win. The victory didn’t feel as good for it. In fact, many of us probably whined to our fathers, “Dad! Stop letting me win! I wanna win for real!” Believe it or not, on a subconscious level, this is making the player “mad” as well.
They want to win but you are making them choose to live instead!
By giving our enemies that small implication that they are trying to survive, we send the player the message that “you are powerful… they want to win but you are making them choose to live instead!”
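The survival-vs-goal balance described above can be sketched as a tiny utility blend. This is purely illustrative: the function name, inputs, and thresholds are my own assumptions, not anything from Bungie's actual code.

```python
# Hypothetical sketch: an enemy weighs "achieve the goal" against "survive",
# falling back to better cover when the survival side tips the balance.
# All names and thresholds here are invented for illustration.

def choose_behavior(health: float, allies_nearby: int, goal_value: float) -> str:
    """Return 'advance', 'hold', or 'fall_back' from a simple utility blend."""
    # Survival pressure grows as health drops and nearby support thins out.
    survival_pressure = (1.0 - health) * (1.0 / (1 + allies_nearby))
    # Drive toward the objective, dampened by how threatened the agent feels.
    goal_drive = goal_value * health

    if survival_pressure > goal_drive * 1.5:
        return "fall_back"   # retreat rather than die mindlessly in place
    elif survival_pressure > goal_drive:
        return "hold"        # keep fighting, but from current cover
    return "advance"

print(choose_behavior(health=0.9, allies_nearby=3, goal_value=0.8))  # healthy, supported
print(choose_behavior(health=0.2, allies_nearby=0, goal_value=0.8))  # wounded and alone
```

The point of the blend is exactly the contradiction named above: the agent is always trading "achieving the goal" against "survival", so the retreat emerges from its state rather than from a scripted death.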
Here’s hoping that we can actually move beyond this odd artificial limitation on our AI.
I was looking at my daily barrage of Google alerts on “game AI” (which tend to contain an annoying number of links to stories about Allen Iverson) and this article blurb from the Bitbag happened to catch my eye. It’s a preview of Halo Reach and seems to be a fairly thorough treatment. They talk about a lot of the different gameplay elements and how they differ from prior games in the franchise. They go into great detail about a lot of things. There was only a little bit of info about the AI, however. It said:
Bungie wants this game to feel a lot like Combat Evolved. They want Reach to be filled with open environments filled with enemies and allow you to figure out how you want to deal with the situation. There will be corridor battles like what we’ve seen in past Halos, but that will be balanced with the terrain of Reach. Reach will have a full weather system as well, with Bungie saying they will have “40 AI and 20 vehicles” on screen at a time.
I thought that was kind of interesting simply because my first reaction was “is that all?” After a moment’s reflection, I realized that the number of AI on the screen in prior Halo games – and even in other shooters – is usually along the lines of a dozen… not 2 score.
On the other hand, in a game like Assassin’s Creed (my Post-Play’em observations), there were plenty of AI on-screen. However, the majority of them were just the citizens who, for the most part, weren’t doing much beyond animation and local steering (until you engaged them for some reason). The question about Bungie’s promise above, then, is how much level-of-detail scaling will there be with those 40 on-screen AI characters?
Typical LOD scaling has a tiered system such as:
Directly engaged with the player
On-screen but not engaged
Off-screen but nearby
On-screen but distant
Off-screen and distant
Each of those levels (in order of decreasing AI processing demand) has a list of things that it must pay attention to or a different timer for how long to wait between updates. How much of this is Bungie tapping into with Reach, or are they all running at the same LOD?
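The timer-per-tier idea can be sketched as a small scheduler. The tier names follow the list above, but the update intervals and class names are invented for illustration; a real engine would also swap in cheaper behavior sets per tier, not just slower updates.

```python
# Illustrative sketch of tiered AI level-of-detail scheduling.
# Intervals are assumed values, not from any shipping engine.
from dataclasses import dataclass

# Seconds between full AI updates per tier, in order of decreasing demand.
UPDATE_INTERVAL = {
    "engaged":        0.0,   # directly engaged with the player: think every frame
    "onscreen_near":  0.1,   # on-screen but not engaged
    "offscreen_near": 0.25,  # off-screen but nearby
    "onscreen_far":   0.5,   # on-screen but distant
    "offscreen_far":  1.0,   # off-screen and distant
}

@dataclass
class Agent:
    tier: str
    time_since_update: float = 0.0

def tick(agents, dt):
    """Advance each agent's timer; return those due for a full AI update."""
    due = []
    for a in agents:
        a.time_since_update += dt
        if a.time_since_update >= UPDATE_INTERVAL[a.tier]:
            a.time_since_update = 0.0
            due.append(a)
    return due

agents = [Agent("engaged"), Agent("offscreen_far")]
# At ~30 fps, the engaged agent updates every frame while the distant one
# updates roughly once a second.
print(len(tick(agents, 0.033)))
```

With a scheme like this, "40 AI on screen" is far cheaper than 40 fully-engaged AI, which is exactly why the LOD question matters for that marketing number.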
I know that the AI guys at Bungie are pretty sharp and go out of their way to pull off nifty stuff. In fact, ex-Bungie AI lead Damián Isla just did an interview with AIGameDev on blackboard architectures (my writeup) where he explained how a lot of the Halo 2 and 3 AI was streamlined to allow for better processing of many characters. I’m quite sure that much of that architecture survives in Halo Reach.
Anyway, I just thought it was interesting to see the promised numbers. It will be even more interesting to see how the marketing blurb actually plays out on the screen in the final product.
Damian Isla of Bungie spoke at the recent Develop conference in the UK. He covered a lot of the history of Halo and some of the design decisions that were made in the franchise. Here’s a story from Gamasutra that covers a lot of good stuff.
Specifically, there’s a couple of things I want to touch on.
Halo’s designers wanted the title’s gameplay to explore mankind’s “primal games” such as hide and seek, tag, and king of the hill, and the game’s encounters were created with them in mind.
“It’s evolution that taught us these primal games,” said Isla. “They’re the ones that are played with our reptilian brains. The idea was for the AI [to] play them back with you.”
That’s kind of interesting from a design standpoint. I guarantee that no one is sitting there thinking “hey, this is like King of the Hill” but they all recognize the concept on a subconscious level.
Isla pointed out that the importance of territory in Halo’s encounter design is closely connected to the recharging shield mechanic that has appeared since the original game.
“Part of that recipe demands that at some point you have a safe zone,” he explained. “In a sense we needed to make the AI territorial. Once you have this idea, you have to think about the problem of encounter progression as the player expands their safe zone. That itself is a pretty fun process. It gives the player a sense of progress, and is extremely plannable.”
This makes a heckuva lot more sense than the “arena + safe corridor + arena…” model. What Halo did was break it up theoretically rather than physically (i.e. with walls). However, there was still the knowledge that the dudes, while in their territory, were going to try to take pot shots at you. You could take cover and they weren’t necessarily going to come get you, but it wasn’t completely safe.
Isla made special mention of AI misperception — “the most interesting form” of good AI mistakes. If the player moves stealthily, the AI will assume the player is still sitting where the AI last knew him to be. [snip] “Each AI has an internal model of each target, and that model can be wrong,” Isla summarized. “This allows the AI to be surprised by you, and this is very fun.”
Amen, brother! This is something that I love seeing. I remember reading some of Damian’s papers in the AI Wisdom series on exactly this concept of unknown location and search. Good stuff, man!
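The "internal model of each target" idea quoted above can be sketched in a few lines: the AI keeps a believed position that only refreshes when it actually sees the target, so the belief can go stale and wrong. The class and method names here are my own illustration, not Bungie's implementation.

```python
# Sketch of per-target belief that can be wrong (assumed names throughout).
# The AI searches its *belief*, not the ground truth, so a stealthy player
# who moves unseen can surprise it.

class TargetBelief:
    def __init__(self):
        self.last_known_pos = None   # where the AI believes the target is

    def update(self, actual_pos, visible: bool):
        """Only refresh the belief when line-of-sight is confirmed."""
        if visible:
            self.last_known_pos = actual_pos

    def search_point(self):
        """Where the AI goes hunting: its belief, not the true position."""
        return self.last_known_pos

belief = TargetBelief()
belief.update(actual_pos=(10, 4), visible=True)    # player spotted here
belief.update(actual_pos=(25, 9), visible=False)   # player sneaks away unseen
print(belief.search_point())  # still (10, 4): the AI hunts the stale spot
```

The fun Isla describes falls out of the gap between `last_known_pos` and the player's true position: the wider the gap, the bigger the surprise.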
Still, Isla stressed, enemies shouldn’t be dumb. “It’s more fun to kill an enemy that’s cunning, formidable, and tactical,” he said, pointing out that that goal is not just an AI problem but also related to convincing animation and game fiction.
Dude… have I told you I loved you? I’m so sick of the mantra of “AI shouldn’t be smart, it should be fun!” As if those two are mutually exclusive of each other.
“In Halo 2, if an AI tips over his vehicle, he walks off and forgets completely he was ever in one,” said Isla. “In Halo 3, if he tips it, he remains in its vicinity fighting until there is a point where he can right it again.”
According to Isla, the latter approach is “the way things should be going” — as he puts it, “behavior should be a very thin layer on top of a world of concepts.”
I would argue that behavior is more than a thin layer. Otherwise, I agree. Which really brings the concept of knowledge representation to the forefront. Not just world representation (e.g. geometry), but a general concept of how agents perceive and conceptualize things (i.e. psychology). Again, I’ve read some of Damian’s papers on the subject. To me, he is someone who “gets it”.
Got linked to this by Paul Tozour. Here are Damian Isla’s (Bungie) slides from his 2005 AIIDE presentation on “spatial competence” entitled “Dude, where’s my Warthog?” It includes info on a ton of the stuff he/they did in Halo 2. Included is information on pathfinding – especially with regard to how we (as people) process spatial information. It’s nice to see someone else tapping into psychology as a source for potential solutions for game AI.
I’m in the process of uploading all my GDC-related things to this page. You can actually listen to my audio of the 3 AI roundtables and read my (barely comprehensible) notes that I furiously took during each. Also, it has links to the pictures that I took during the roundtables and the AI Programmers Dinner on Friday night.
On that page, I will also be posting other AI-related tidbits such as my notes from lectures by Soren Johnson (Civ 4), Damian Isla (Halo 3), and Peter Molyneux (Fable 2). Give me a few days to get it all straightened out, though.
Also, I sat down with John Abercrombie of 2k-Boston on Sunday morning and spoke with him about the AI that he did for Bioshock. That should be posted on Wednesday. Look for it over on Post-Play’em.
(Remember to tap the RSS feed to keep up with these additions and all other AI-related things.)
One final note about GDC… it’s always an exhilarating week… but it sure does make my head hurt!
I was browsing around on various AI sites, and I came upon a link to a presentation titled “The Illusion of Intelligence: The Integration of AI and Level Design in Halo” that was given at the Game Developers Conference in 2002 by Chris Butcher and Jaime Griesemer of Bungie. That was the year of my first GDC, but I can’t remember if I made it to this session or not. I tried to hit every AI session there was, but sometimes there were conflicts. I actually probably have my 2002 GDC stuff around here with the session list and map – I bet I checked it off. Enough musing about my conference history, however.
The presentation is interesting in that it provides a peek into some of the design mindsets of the developers. They fully admit that AI cheating and “faking it” is a viable methodology – and that the player will usually buy it because they want to.
Like it or hate it, Halo was and is a prominent fixture in the game world – and this presentation gives a great peek inside for AI developers and players alike. Since it was a GDC lecture, there isn’t any serious code to wade through. Because of that, it’s something that can be digested by a wide audience with only the occasional stumble over esoteric industry terminology. It’s a good read.