I admit that, despite it being 11/11/11, I haven’t played Skyrim. I don’t even know if I will be able to get to it for a few weeks. However, that doesn’t stop the barrage of information from people playing it. I am trying to avoid most of the breathy reports from my friends and colleagues around the internet. However, this one kept popping up on my Twitter feed so I figured I would take a look.
The title of this YouTube video is “How to steal in Skyrim.” When I clicked on it, I really didn’t know what to expect. I figured it was going to either be a boring instructional video or a blooper reel. I suppose it goes in both categories, for whatever that’s worth. However, it is an instructional video for game AI developers and designers alike.
What you are seeing is an attempt by Bethesda to mitigate a problem that has plagued RPGs since their inception — that of rampant stealing from houses and shops. Usually, one can do this right in front of people and no one seems to care. One poor solution was to mark objects as being people’s possessions and alert that person when they are stolen. However, that pretty much eliminates the notion that you could steal something when that person is not around… kind of a “page 1” lesson in the Book of Stealing, really.
Every person in the game is completely cool with me just placing large objects over their head?
What Bethesda obviously did here is perform a line-of-sight check to the player. If the player grabs something that isn’t normal, react to it. In the case of the lady at the table, she simply queries the player about what he is doing. In the case of the shopkeeper, he reacts badly when the player takes something of his. All of this is well and good. However, when the line of sight is blocked (in this case by putting something over their heads), they can no longer see the player taking something and, therefore, don’t react.
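For the curious, here’s a minimal sketch of the shape such a perception check might take. To be clear, this is my guess at the structure, not Bethesda’s actual code — every name and threshold here is invented. The witness test combines distance, field of view, and an unobstructed raycast; the bucket trick simply makes the raycast fail.

```python
import math

def can_see(npc_pos, npc_facing_deg, target_pos, max_dist=10.0,
            fov_deg=120.0, los_blocked=False):
    """Return True if the NPC's line of sight reaches the target.

    los_blocked stands in for a real raycast against world geometry --
    a bucket over the NPC's head would make it True.
    """
    if los_blocked:
        return False
    dx, dy = target_pos[0] - npc_pos[0], target_pos[1] - npc_pos[1]
    if math.hypot(dx, dy) > max_dist:
        return False
    # Angle between the NPC's facing direction and the direction to the target.
    angle_to_target = math.degrees(math.atan2(dy, dx))
    diff = (angle_to_target - npc_facing_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0

def on_item_taken(owner, thief_pos, los_blocked):
    """React to a theft only if the owner actually witnessed it."""
    if can_see(owner["pos"], owner["facing"], thief_pos,
               los_blocked=los_blocked):
        return "confront thief"
    return "ignore"  # the bucket trick: no witness, no crime
```

The bug in the video isn’t in this logic at all — it’s that nothing reacts to the bucket being placed in the first place.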
But what about the reaction that should take place when you put a bucket over the person’s head? Are you telling me that every person in the game is completely cool with me just placing large objects over their head? It certainly looks that way!
The lesson here is that we either can’t think of every possible action the player could perform in the game or we simply do not have the resources to deal with it — for example, by having the victim protest and remove the ill-designed helmet.
“But why would the player want to do that?”
In the past (and I mean >10 years ago), when the player’s interaction with the world was simpler, many of the faux pas would have stemmed from the former reason. We just didn’t bother to think about the possible actions. The pervasive mentality was simply, “but why would the player want to do that?” Of course, players did do things like that — but given the limited worlds that we existed in, the ramifications weren’t that huge. We were far enough away from the proverbial “uncanny valley” that we simply accepted that the simulation didn’t model that sort of thing and we moved on.
Adding one mechanic to the game could have hundreds of different possible applications.
More recently, as games have allowed the player even more interaction with the world, there is a necessary exponential explosion of possible results for those actions. That is, simply adding one mechanic to the game could have hundreds of different possible applications. When you figure that game mechanics can be combined so as to intersect in the world, the potential space of resulting interactions is mind-numbingly complex. The problem then becomes, how do I account for all of this as a game developer?
One game that began simulating this stuff on an almost too deep level was Dwarf Fortress. I admit to going through a DF kick last spring and it almost killed me. (It was like experimenting with powerful drugs, I suppose.) Pretty much everything in that world interacts with everything else in a realistic way. The rulebase for those interactions is spectacular. However, pretty much the only way they can pull it off is because their world is represented iconically rather than in the modern, 3D, photo-realistic way. For DF, creating a new visual game asset is as simple as scouring the text character library for something they haven’t used yet and making a note of the ASCII code. In Skyrim (and all modern games of its ilk), the process of creating an asset and all its associated animations is slightly more involved. Or so the rumor would have it.
Given the example in the video above, DF (or any other text-based game) could simply respond, “the lady removes the bucket and yells obscenities at you.” Problem solved. In Skyrim, they would specifically have to animate removing things from their head and hope their IK model can handle grasping the bucket no matter where the physics engine has placed it.
What occurred in the video isn’t necessarily a failing of AI.
So there’s the problem. What occurred in the video isn’t necessarily a failing of AI. We AI programmers could rather simply model something like, “you’re messing with [my body] and I don’t like it.” It just wouldn’t do us a lot of good if we can’t model it in an appropriate way in-game.
This bottleneck can apply to a lot of things. I could represent complex emotional states on finely graduated continua, but until the animation of facial expressions and body language can be modeled quickly and to a finer degree of realism, it doesn’t do anyone any good. No one will ever see that the character is 0.27 annoyed, 0.12 fearful, and 0.63 excited.
In the meantime, rest assured that the hive mind of the gaming public will think of all sorts of ways to screw with our game. Sure, they will find stuff that we haven’t thought of. It’s far more likely, however, that we did think of it and we were simply unable to deal with it given the technology and budget constraints.
And to the gaming public who thinks that this is actually a sign of bad AI programming? Here… I’ve got a bucket for you. It looks like this:
An article was recently brought to my attention. The first one I looked at was Wolves May Not Need to be Smart to Hunt in Packs from Discover Magazine. However, it was originally from New Scientist, it seems. Both of them cite a couple of other papers via links in their respective articles. You can get the gist of what they are talking about from the text of the article, however.
The point is, they have discovered that the complex(-looking) pack-hunting behaviors of wolves are actually not as complex and coordinated as we thought. With just a few very simple autonomous rules, they have duplicated this style of attack behavior in simulations. Specifically,
Using a computer model, researchers had each virtual “wolf” follow two rules: (1) move towards the prey until a certain distance is reached, and (2) when other wolves are close to the prey, move away from them. These rules cause the pack members to behave in a way that resembles real wolves, circling up around the animal, and when the prey tries to make a break for it, one wolf sometimes circles around and sets up an ambush, no communication required.
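Those two rules are trivial to code up. Here’s a toy version — my own constants and simplifications, not the researchers’ actual model — where each wolf gets a steering force from the two rules and takes a fixed-size step:

```python
import math

ATTACK_DIST = 2.0    # rule 1: approach the prey until this close
WOLF_SPACING = 3.0   # rule 2: repel from packmates that are near the prey
STEP = 0.5           # distance moved per tick

def step_wolf(wolf, prey, pack):
    """Advance one wolf by one tick under the two-rule model."""
    fx, fy = 0.0, 0.0
    # Rule 1: move towards the prey until a certain distance is reached.
    dx, dy = prey[0] - wolf[0], prey[1] - wolf[1]
    d = math.hypot(dx, dy)
    if d > ATTACK_DIST:
        fx += dx / d
        fy += dy / d
    # Rule 2: when other wolves are close to the prey, move away from them.
    for ox, oy in pack:
        if (ox, oy) == wolf:
            continue
        if math.hypot(prey[0] - ox, prey[1] - oy) < WOLF_SPACING:
            rx, ry = wolf[0] - ox, wolf[1] - oy
            r = math.hypot(rx, ry) or 1e-6
            fx += rx / r
            fy += ry / r
    mag = math.hypot(fx, fy)
    if mag == 0.0:
        return wolf
    return (wolf[0] + STEP * fx / mag, wolf[1] + STEP * fy / mag)
```

Run a handful of these wolves for a few dozen ticks and they spread out around the prey on their own — the “circling” is nowhere in the code; it falls out of the repulsion term.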
The comment that brought it to my attention was that biologists “discover” something that AI programmers have known for decades — the idea of flocking. Going back to Craig Reynolds’ seminal Boids research (from the 1980s), we as AI programmers have known that simple rules can not only generate the look of complex behavior, but that much of the complex behavior that exists in the world is actually the result of the same “simple rule” model. Even down to the cellular level in the human body — namely the human immune system — autonomous cellular behavior is driven by this mentality.
The key takeaway from this “revelation” about the wolves is not so much that wolves are not as clever as we thought, but rather that there is now legitimacy to using simpler AI techniques to generate emergent behavior. We aren’t “cheating” or cutting corners by using a simple rule-based flocking-like system to do our animal AI… we are, indeed, actually replicating what those animals are doing in the first place.
We could likely get far more mileage out of these techniques in the game space were it not for one major block — the trepidation that many developers feel about emergent behavior. For designers in particular, emergent behavior stemming from autonomous agents means giving up a level of authorial control. While authorial control is necessary and desired in some aspects of game design, there are plenty of places where it is not. By swearing off emergent AI techniques, we may be unnecessarily limiting ourselves and preventing a level of organic depth to our characters and, indeed, our game world.
Incidentally, emergent AI is not simply limited to the simple flocking-style rule-based systems that we are familiar with and that are discussed with regards to the wolves. Full-blown utility-based systems such as those that I talk about in my book are just an extension of this. The point being, we aren’t specifically scripting the behavior but rather defining meanings and relationships. The behavior naturally falls out of those rules. The Sims franchise is known for this. As a result, many people are simply fascinated to sit back and watch things unfold without intervention. The characters not only look like they are “doing their own thing” but also look like they are operating together in a community… just like the wolves are acting on their own and working as part of a team.
So take heart, my AI programmer friends and colleagues. Academic biologists may only now be getting the idea — but we’ve been heading down the right track for quite some time now. We just need to feel better about doing it!
Back in November, there was a get-together of Boston Post Mortem (billed as “games and grog, once a month”) that had a panel of local AI folks. The panelists’ names are familiar to many of us… Damián Isla, Jeff Orkin, and John Abercrombie. It was moderated by Christian Baekkelund, whom I had the opportunity to have dinner with in Philly when I was in town for the GameX Industry Summit. Thankfully, this panel was recorded by Darius Kazemi and posted on his Vimeo site and on the Boston Post Mortem page. I embed it here for simplicity’s sake.
The first question to the panel was “what do new AI developers do wrong” or something to that effect. Damián set up the idea of two competing mentalities… gameplay vs. realistic behavior. He and John both supported the notion that the game is key and that creating a system just for the sake of creating it can range anywhere from a waste of time to downright wrong.
…create autonomous characters and then let the designers create worlds…
The thing that caught me was Jeff’s response, though (5:48). His comment was that AI teams can’t force designers to be programmers through scripts, etc. That’s not their strength and not their job. While that’s all well and good, it was his next comment that got me cheering. He posited that it is the AI programmer’s job to create autonomous characters and then let the designers create worlds in which those characters can do their thing.
Obviously, it isn’t a one-way street there… the designer’s job isn’t to show off the AI. However, I like the idea of the designers not having to worry about implementing behavior at all — just asking for it from the AI programmer and putting the result into their world. John’s echo was that it’s nice to build autonomous characters but with overrides for the designers. It isn’t totally autonomous or totally scripted. This sounds like what he told me about his Bioshock experience when I talked to him about it a few years ago.
I happen to agree that the focus needs to be on autonomy first and then specific game cases later. The reason is that, too often, the part of the AI that looks “dumb” or “wrong” is the part where the AI isn’t being told to do anything specific. For example, how often would you see a monster or soldier just standing there? Some of the great breakthroughs in this were from places like Crytek in Far Cry, Crysis, etc. The idea of purposeful-looking idle behaviors was a great boon to believable AI.
The other advantage to creating autonomy first was really fleshed out by Jeff Orkin’s work on F.E.A.R. (Post-Play’em) No more do designers (or even AI programmers) have to worry about telling an agent exactly what it should do in a situation. By creating an autonomous character, you can simply drop them in a situation and let them figure it out on their own. This can be done with a planner like Jeff did, a utility-based reasoner, or even a very detailed behavior tree. Like John said above, all you need to remember is to provide the override hooks in the system so that a designer can break an AI out of its autonomy and specifically say “do this” as an exception rather than hand-specifying each and every action.
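The “autonomy plus override hooks” idea can be sketched in a few lines. This is a deliberately tiny utility-style agent of my own invention — a real system (planner, reasoner, or behavior tree) would be far richer — but it shows the shape of the hook John described:

```python
class Agent:
    """Autonomous goal selection with a designer override hook."""

    def __init__(self, goals):
        self.goals = goals          # goal name -> scoring function of world state
        self.forced_goal = None     # designer override, normally unset

    def designer_force(self, goal_name):
        """The 'do this' exception: bypass autonomy until released."""
        self.forced_goal = goal_name

    def release(self):
        """Return the agent to full autonomy."""
        self.forced_goal = None

    def choose_goal(self, world):
        if self.forced_goal is not None:
            return self.forced_goal
        # Autonomous path: pick the highest-scoring goal for this situation.
        return max(self.goals, key=lambda g: self.goals[g](world))
```

The designer never specifies *how* the agent patrols or fights — only, on rare occasions, *that* it must do a particular thing right now.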
What’s in Our Way?
The next question was about “the biggest obstacle” for game AI moving forward. Jeff’s first answer was about authoring tools. This has been rehashed many times over. John expressed his frustration at having to start from scratch all the time (and his jealousy that Damián didn’t have to between Halo 2 and 3).
…to get your AI reviewed well, you need to invest in great animators.
Damián’s comment was amusing, however. He suggested that to get your AI reviewed well and have players say “that was good AI”, you need to invest in great animators. This somewhat reflects my comment in the 2009 AI Summit where I pointed out that AI programmers are in the middle of a pipeline with knowledge representation on one side and animation on the other. It doesn’t do any good to be able to generate 300 subtle behaviors if the art staff can only represent 30 of them.
On the other hand, he reiterated what the other guys said about authoring tools and not having to re-invent the wheel. He supports middleware for the basic tasks like A*, navmesh generation, etc. If we don’t have to spend time duplicating the simple tasks over and over, we can do far more innovation with what we have.
That’s similar to my thought process when I wrote my book, “Behavioral Mathematics for Game AI”. You won’t see a description of FSMs, A*, or most of the other common AI fare in the book. How many times has that been done already? What I wanted to focus on was how we can make AI work better through things other authors haven’t necessarily covered yet. (Ironically, it was Jeff Orkin who told me “I don’t think anyone has written a book like that. Many people need to read a book about that. Heck… I’d read a book about that!” — Thanks Jeff!)
What Makes the Shooter Shot?
The next question (11:45) was about what FPS-specific stuff they have to deal with.
When Halo 3 came out, they could afford fewer raycasts than Halo 2.
Damián talked about how their challenge was still perception models. They really tried to do very accurate stuff with that in Halo. He pointed out that raycasting is still the “bane” of his existence because it is still so expensive. Despite the processors being faster, geometry is far more complex. Alarming note: when Halo 3 came out, they could actually afford fewer raycasts than on Halo 2. Now that sucks! Incidentally, the struggle for efficiency in this area very much relates to Damián’s “blackboard” interview that I wrote about last week.
Interestingly, Jeff argued the point and suggested that cheating is perfectly OK if it supports the drama of the game. I found this to possibly be in conflict with his approach to making characters autonomous rather than scripted. Autonomy is on the “realistic” end of the spectrum; “scripted” is on the other. The same can be said for realistic LOS checks compared to triggered events where the enemy automatically detects the player regardless.
John split the difference with the profound statement, “as long as the player doesn’t think you’re cheating, it’s totally OK.” Brilliant.
AI as the Focus or as a Focusing Tool
Supporting the overall design of how the game is to be experienced is just as important as the naked math and logic.
In response to a question about what Damián meant about “AI as a game mechanic,” he offered an interesting summation. He said that, from a design standpoint, the AI deciding when to take cover and when to charge is as important as how much a bullet took off your vitality. That is, supporting the overall design of how the game is to be experienced is just as important as the naked math and logic.
He also pointed out that the design of a game character often started out with discussions and examples of how that AI would react to situations. “The player walks into a room and does x and the enemy will do y.” Through those conversations, the “feel” of a type of character would be created and, hopefully, that is what the player’s experience of that type of character would end up being.
In my opinion, a lot of this is accomplished by being able to not only craft behaviors that are specific to a type of enemy (the easy way of differentiation) but also parameterizing personality into those agents so that they pick from common behaviors in different ways. That is, something that all enemies may do at some point or another but different types of enemies do at different times and with different sets of inputs. I went into this idea quite a bit in my lecture from the 2009 AI Summit (Breaking the Cookie-Cutter: Modeling Individual Personality, Mood, and Emotion in Characters) where I talked about incorporating personality into game agents.
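A toy illustration of what I mean by parameterizing personality — the archetypes and weights here are invented for the example, but the point is that every enemy draws from the same behavior pool and only the multipliers differ:

```python
# Shared behavior pool; every enemy type *can* do all of these.
BEHAVIORS = ["charge", "take_cover", "flank", "flee"]

# Hypothetical personality weights: same behaviors, different tendencies.
PERSONALITIES = {
    "berserker": {"charge": 1.0, "take_cover": 0.1, "flank": 0.3, "flee": 0.05},
    "sniper":    {"charge": 0.1, "take_cover": 0.9, "flank": 0.6, "flee": 0.4},
}

def score(behavior, personality, situation):
    """Situational utility scaled by the archetype's personality multiplier."""
    return situation[behavior] * PERSONALITIES[personality][behavior]

def pick_behavior(personality, situation):
    """Pick the behavior with the highest personality-weighted score."""
    return max(BEHAVIORS, key=lambda b: score(b, personality, situation))
```

Faced with the identical situation, the berserker charges and the sniper ducks behind cover — differentiation without writing a single archetype-specific behavior.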
The Golden Rules of AI (20:30)
Christian started things off by citing the adage, “it’s more important to not look stupid than to look smart.” No big surprise there.
The player feels good about killing someone if the kill is challenging.
John said that the AI must be “entertaining”. My only problem with this is that different people find different things entertaining. It’s kinda vague. Better to say that the AI must support the design. Both John and Jeff extended this idea by talking about providing a challenge… the player feels good about killing someone if the kill is challenging.
Damián sucked up to my buddy Kevin Dill a little bit by citing a quote that he made in our joint lecture at the GameX Industry Summit, The Art of Game AI: Sculpting Behavior with Data, Formulas, and Finesse. Kevin said that AI programmers must be an observer of life. I certainly agree with this notion… in fact, for years, my little tag at the end of my industry bios has said, “[Dave] continues to further his education by attending the University of Life. He has no plans to graduate any time soon.” In fact, Chapter 2 of my book is titled “Observing the World”… and Kevin was my tech editor. It should be intuitively obvious, even to the most casual observer, that Kevin stole that idea from me! Damián should have cited me in front of a bunch of game developers! So there!
Anyway, Damián’s point was not only how Kevin and I meant it — observing how people and animals do their thing — but also in being a very detailed and critical observer of your AI. There must be a discipline that scrubs out any little hiccup of animation or behavior before they pile up into a disjointed mess of little hiccups.
Jeff agreed to some extent but related something interesting from the development of F.E.A.R. — he said that most development schedules start by laying out the behavior for each type of character and then, if there is time, you go back and maybe try to get them to work together or with the environment, etc. With F.E.A.R., they started from the beginning with trying to work on the coordinated behaviors. With all the shouting and chaos going on with these guys working against you, you don’t notice the little glitches of the individuals quite as much.
Damián backtracked and qualified his comments… not just hunting down everything that is wrong… but rather everything that is wrong that matters.
Look but Don’t Touch
If you are fighting for your life, you don’t notice the details as much.
John brought up an interesting helpful hint. He talked about how, by turning on God mode (invulnerability), you can dispense with the fight for survival and really concentrate on how the AI is behaving. If you are fighting for your life, you don’t notice the details as much.
I have to agree. That’s why when I go to places like E3, I would rather stand back and watch other people play so I can observe what’s going on with the AI. (I always fear that demo people at E3 are going to be put off by my refusal to join in.) This is one of the problems I have when I’m playing for my Post-Play’em articles. I get so caught up in playing that I don’t look for things anymore. One solution is to have a house full of teenagers… that gives you ample time to just sit and watch someone else play games. I recommend it to anyone. (I believe John and Damián have started their respective processes of having teenagers… eventually.)
Emergence… for Good or Evil
If the AI can have the freedom to accidentally do something cool and fun, it also has the freedom to do something dumb or uninteresting.
In response to a question asking if AI has ever done anything unexpected, Damián spoke about how the sacred quest of emergence isn’t always a good thing. He said that emergent behavior must fall out of the AI having the freedom to do things. If the AI can have the freedom to accidentally do something cool and fun, it also has the freedom to do something dumb or uninteresting. Because of that, emergence has a really high cost in that it can be more of a drag on gameplay than the occasional gem it might produce.
Christian qualified the question a little more by asking if there was a time when the emergent behavior was found well before ship and the team thought it was something to encourage. He cited examples from his own experience. John talked about emergent gameplay that resulted from simple rules-based behavior in Bioshock. You could set a guy on fire, he would run to water, then you could shock the water and kill him. Was it necessary? No. Was it fun for the player to be creative this way? Sure.
To Learn or Not to Learn?
One question that was asked was about using machine learning algos. Christian started off with a gem about how you can do a lot of learning just by keeping track of some stats and not resorting to the “fancier stuff.” For example, keeping track of pitches the player throws in a baseball game doesn’t need a neural network. He then offered that machine learning can, indeed, be used for nifty things like gesture recognition.
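Christian’s point deserves emphasis: often a frequency count is all the “learning” you need. A sketch of the baseball example (my own naming, nothing fancier than a counter):

```python
from collections import Counter

class PitchPredictor:
    """Learn an opposing player's habits with nothing but counts."""

    def __init__(self):
        self.counts = Counter()

    def observe(self, pitch):
        """Record one observed pitch type."""
        self.counts[pitch] += 1

    def most_likely(self):
        """Return the most frequently seen pitch, or None if no data yet."""
        if not self.counts:
            return None
        return self.counts.most_common(1)[0][0]
```

No training phase, no opaque weights, and a designer can read the “model” at a glance — which is exactly why this beats a neural network for a problem this simple.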
Oh… are you using neural networks?
Jeff admitted that he hates the question that comes up when people find out he does AI in games. They often ask him, “Oh… are you using neural networks?” I think this really hits home with a lot of game AI folks. Unfortunately, even a cursory examination of the AI forums at places like GameDev.net will show that people still are in love with neural networks. (My quip is that it is because The Terminator claims to be one so it’s real sexy.) Thankfully, the AIGameDev forum is a little more sane about them although they do come up from time to time. Anyway, Jeff said he has never used them — and that he’s not even sure he understands them. While he thinks that NNs that are used in-game like in Creatures or Black & White are cool, they are more gimmicky and not as useful with the ever-increasing possibility space in today’s games.
I Have a Plan
It is hard to accommodate the ever-changing goals of the human players.
Jeff acknowledged that the planner revolution that he started has graduated to hierarchical planners to do more complex things like squad interaction. However, one big caveat that he identified was when you incorporate humans into the mix alongside the squad. It is hard to accommodate the ever-changing goals of the human players.
This, of course, brought us back to the idea of conveying intent — one of the nasty little problem spaces of game AI. Damián explained it as a function of interpretation rather than simply one of planning. I agree that this is going to be a major issue for a long time until we can crack the communication barrier such as voice recognition and natural language processing. Until we can tell our AI squad-mates the same thing we can tell our human squad-mates and expect a similar level of understanding, we are possibly at an impasse.
Moving Forward by Standing Still
As the worlds become more complicated, we have to do so much more work just to do the same thing we did before.
Someone asked the panel what sort of things that they would like to see out of smaller, indie developers that might not be able to be made by the bigger teams. To set the stage, Damián responded with a Doug Church quote from the first AIIDE conference. Doug said that we have to “move forward by standing still.” As the worlds become more complicated, we have to do so much more work just to do the same thing we did before. (See Damián’s note about the LOS checks in Halo 2 and 3 above.) Damián suggested that the indie space has more opportunities to move forward with this because they aren’t expected to do massive worlds. Instead, they can focus on doing AI-based games.
This is what Matt Shaer’s interview with me in Kill Screen magazine was going after. With my Airline Traffic Manager as one of his examples, he spoke specifically about how it is the smaller, dedicated developer who is going after the deeper gameplay rather than the bigger world. I hope this is the case. We have seen a few examples so far… there are likely more to come.
Someone mentioned seeing a demo of improved LOS checks by using the GPU and asked if AI programmers should be pushing that more. John somewhat wittily suggested that AI programmers won’t be using the graphics chips until someone can find a system where a graphics chip isn’t being used at all. Damián was a little more direct in saying that the last people he wanted to work in conjunction with were the graphics programmers. This reminds me of a column I wrote on AIGameDev a few years back, Thou Shalt Not Covet Thy Neighbor’s Silicon. The graphics guys finally got “their space.” They aren’t letting us have any of it any time soon!
Damián pointed out that most advanced AI needs a lot of memory and access to the game state, etc. Those are things that the GPU isn’t really good at. About the only thing that you could do without those things is perhaps flocking. I agree that there really isn’t a lot we can do with the GPU. He did concede that if ATI wanted to do a raycasting chip (rather than borrowing time from the hated graphics guys), that would be beautiful… but that’s about it.
Director as Designer?
Someone asked about the possibility of seeing the technology that was behind Left 4 Dead’s “AI Director” being used as a sort of game designer, instead. Damián pointed out that the idea of a “meta-AI” has been around for years in academic AI and that it is now really starting to get traction in the game world. I agree that the customized gameplay experience is a major turning point. I really like the idea of this as it really comes down to a lot of underlying simulation engine stuff. That’s my wheelhouse, really.
Where to Now?
They closed with some commentary on the future of game AI which I will leave to you to listen to. Suffice to say that for years, everyone has been expecting more than we have delivered. I’m not sure if that is because we are slacking off or because they have vastly underestimated what we have to do as AI designers and programmers. At least, with all the attention that is being brought to it through the Game AI Conference that AIGameDev puts on, the AI Summit now being a regular feature at GDC, and other such events, we are going to be moving forward at some speed.
Using Java apps of Conway’s famous “Game of Life” and variants on it, they describe and show, in very simple terms, how emergence… er… emerges from very simple rules. The individual blocks don’t understand the big picture… nor do they care. They just follow their own local rules. However, we perceive behaviors and interactions that aren’t really there.
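For anyone who hasn’t seen it, the entire ruleset of the Game of Life fits in a few lines: a dead cell with exactly three live neighbors is born, and a live cell with two or three live neighbors survives. Everything else — gliders, oscillators, all of it — emerges from just this:

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life on a sparse set of live cells."""
    # Count how many live neighbors each cell (live or dead) has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {
        cell for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }
```

No cell knows about gliders; we are the ones who see the pattern — which is exactly the point about the wolves.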
Wow… back-to-back great posts from Ted Vessenes on his blog, Brainworks. In this one, he writes about a concept very close to my heart – that of mathematical balancing. In the post, Getting it Just Right, he mentions two types of parameters – sensitive and insensitive numbers. From the post:
A sensitive number is extremely hard to tweak, because you won’t see the right behavior until you get things “just right”. If the value doesn’t encode the right concept, then that “just right” state won’t exist, but you’ll never know that. You’ll just see how all kinds of different values don’t work in different ways.
And an insensitive number generally just has an impact when it crosses an important boundary (for example, driving 56 instead of 55, or your gas tank being 0% full instead of 1% full). There’s often no indication where this interesting numerical boundary might be.
I mentioned on a comment I posted there that I wished he had written that a few weeks ago. The ideas he presents map very well into a couple of my recent columns over at AIGameDev. One, on Chaos Theory and Emergent Behavior hits a similar nerve as far as tiny changes having big ramifications. The other was about intelligent-looking errors. Both are applications of trying to get those parameters into the ever-elusive “sweet spot” – both in the short term (believability) and the long term (stability). I would have liked to quote him on those columns.
I’m definitely keeping an eye on Ted’s blog from now on!
I noticed this GamePro blurb about the upcoming sequel to F.E.A.R. Here’s an excerpt…
“The most obvious difference that will hit the player right away is in the visual density of the world,” said Mulkey. “F.E.A.R. looked really great, but where F.E.A.R. would have a dozen props in a room to convey the space, Project Origin will have five times that much detail.
“Of course, this will only serve to further ratchet up that ‘chaos of combat’ to all new levels with more breakables, more debris, more stuff to fly through the air in destructive slow motion beauty.”
OK… I can dig that. One thing I noticed as I played through F.E.A.R. is that things were kinda sparse. (I really got tired of seeing the same potted cactus, too.)
The part that I am curious about, however is this:
… Mulkey says improved enemy behavior is at the top of the list.
“We are teaching the enemies more about the environment and new ways to leverage it, adding new enemy types with new combat tactics, ramping up the tactical impact of our weapons, introducing more open environments, and giving the player the ability to create cover in the environment the way the enemies do,” he says.
Now that is the cool part. When the enemies in the original moved the couches, tables, bookshelves, etc. it was cool… but rather infrequent. I was always expecting them to do more with it. If they are both adding objects to the environment and then “teaching” the agents to actually use those objects, we may see a level of environment interactivity that we’ve never experienced before.
The cool thing about their planning AI structure is that there isn’t a completely ridiculous ramp-up in the complexity of the design. All one need do is tag an object as usable in a certain way and it gets included in the mix. On the other hand, having more objects to use and hide behind does increase the potential decision space quite a bit. It’s like how the decision tree in chess is far greater than that of Tic-tac-toe because there are so many more options. The good news is that the emergent behavior level will go through the roof. The bad news is that it will hit your processor pretty hard. Expect the game to be a beast to run on a PC.
I certainly am looking forward to mucking about with this game!
Of the top 5, I’m the most excited about an increase in sandbox games and emergent behaviors. Really, I see these two as almost interlinked. Sandbox games not only allow emergent behavior to proliferate – they almost require it to do so in order to keep immersion.
Likewise, interagent cooperation was another of the top 5 on the list. Again, this is something that I see as related to emergent behavior. If you leave your cooperation loosely defined rather than pre-scripted, you will see a lot of emergent behavior as a result.
I hope to get more of a feel for this very topic at the GDC roundtables and lectures next month. That is always a great way to take the pulse of the industry. Anyway, good stuff on the list.
Regular readers of my other blog, Post-Play’em, will know that I talked about the idea of scripts overriding AI behaviors in Call of Duty 2 in a post entitled Call of Duty 2: Omniscience and Invulnerability. Specifically, this was in reference to one of the behaviors mentioned in the other article, where an AI agent takes on a temporary god-like invulnerability until he finishes a scripted event – at which point he is no longer important to the level designer’s wishes and is cast back into the pot of cannon fodder so that I can mow him down properly.
Getting back to the initial topic, my thought is that part of the friction between artists/level designers and programmers may very well be that the level designers don’t trust the capabilities of autonomous AI agents… or even have an understanding of what could be done with them.
For example, with the use of goal-based agents such as those found in F.E.A.R. (related post), rather than a designer saying “I want the bot to do A, then B, then C on his way to doing the final action of D,” he could simply tell the goal-based agent that “D is a damn good goal to accomplish.” If constructed properly, the agent would then realize that a perfectly viable way of accomplishing D would be via A-B-C-D. The difference between these two methods is important. If C is no longer a viable (or intelligent-looking) option, then the scripted bot either gets stuck or looks very dumb in still trying to accomplish D through that pre-defined path. The very nature of planning agents, however, allows the agent to try to find other ways of satisfying D. If one exists, he will find it. If not, perhaps another goal will suffice.
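The planner behavior described above can be sketched in a few lines. This is a toy illustration of the general idea – not F.E.A.R.’s actual planner, and all the action names are made up – but it shows the key property: the designer states only the goal, and the agent searches its available actions for any chain that achieves it, so removing one route just makes it find another.

```python
from collections import deque

class Action:
    """An action with symbolic preconditions and effects."""
    def __init__(self, name, preconds, effects):
        self.name = name
        self.preconds = set(preconds)  # facts that must hold first
        self.effects = set(effects)    # facts made true afterward

def plan(start_facts, goal, actions):
    """Breadth-first search over world states; returns a list of action names."""
    start = frozenset(start_facts)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        facts, steps = queue.popleft()
        if goal in facts:
            return steps
        for a in actions:
            if a.preconds <= facts:           # action is applicable
                new = frozenset(facts | a.effects)
                if new not in seen:
                    seen.add(new)
                    queue.append((new, steps + [a.name]))
    return None  # no chain of actions satisfies the goal

actions = [
    Action("go_to_door",   [],            ["at_door"]),
    Action("kick_door",    ["at_door"],   ["inside"]),
    Action("go_to_window", [],            ["at_window"]),
    Action("break_window", ["at_window"], ["inside"]),
]

# Designer only says "get inside"; the agent finds a route:
print(plan(set(), "inside", actions))
# -> ['go_to_door', 'kick_door']

# Take the door route away entirely, and it replans via the window:
no_door = [a for a in actions if "door" not in a.name]
print(plan(set(), "inside", no_door))
# -> ['go_to_window', 'break_window']
```

A scripted bot hard-coded to “go to door, kick door” would simply be stuck in the second case; the planning agent never noticed a problem, because it was only ever told the goal.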
The problem is, while AI programmers understand this concept (especially if you are the one who wrote the planner for that game), level designers – and particularly artists – may not have an intuitive grasp on it. They are cut more from the cloth of writers: “and then this happened, and then this, and then it was really cool when I wrote this next thing because I wanted the agent to look smart, and then this…” That is being a writer – and it is why many games continue to be largely linear in nature. You are being pulled through an experience on a string of scripted events. (See related post on Doom 3’s scripting vs. AI)
So, can the problem of designers trumping AI programmers be solved? It will always be there to some extent. But education and communication will certainly help the matter.
I made a comment/rant on the post myself, which I will repost here just for the sake of saking.
I think part of what continues to be the fence between the game AI world and academia is the game world’s continued insistence that we have to strip down our AI to “fake AI” in order to wedge it into games.
I’m getting tired of the rubber stamp statements that “our players don’t want realistic behaviors… they want FUN behaviors!” And yet, in review after review of the latest games, people bitch about the AI not being realistic enough. We hear it. We acknowledge it. But when it comes to developing the next cycle, the edict from on high is “we don’t have enough clock cycles to do that nifty XYZ technique.”
As Moore’s Law trips merrily along from year to year, we have more and more processing available to us. In theory, that should give us, as game developers, the overhead we need to close the gap between the need for 60 FPS in our games and the academics who don’t really care if they are rendering their half-ass, low poly bots at 4 FPS.
Another point on this subject… I’m sick of hearing designers – and even AI programmers – make the statement “but it’s not predictable!” about agent-based, emergent AI. Uh… isn’t that the point? Again, look at the reviews and the comments from our customers. “The AI sucks because it is too predictable.” Even the implication via statements such as “you can beat this level by doing XYZ to the AI because…” means that there is a shallowness to our creations. Why? Is it because we are lazy and don’t want to write more complicated code? Is it because we are scared of the unpredictability of non-deterministic models? Is it because our designers would be better served writing static movie screenplays than game levels? What holds us back?
I’m not saying that academia is the answer. Sometimes it seems that they can get so wrapped up in an esoteric sojourn that they cease to realize that what they are doing is not even remotely relevant. However, some of the concepts and techniques that they take the time to explore (because they don’t have producers and ship-dates) are things that can map over into the game world. And, if we are truly interested in putting realism into our games (which can be fun for the player!), then what academia comes up with should be noted by us. Adapted maybe, but noted nonetheless.