IA on AI
Archive for the ‘Opinion’ Category

Everything is Excellent… except the AI

Friday, May 28th, 2010

This is something that I have been seeing for a long time now. I’m sure we all have… and for good reason: a lot of times it is true. Despite what I said in my 7 minutes at the AI Devs Rant at the 2010 GDC about how reviewers like to bitch about bad AI, unfortunately too often it is justified. A game that is excellent in every other area but ships with poorly executed AI is a bit more tragic, however. That isn’t the mark of a small-budget game; it’s a well-funded project with either a priority problem or a talent problem.

Take this review of “Lost Planet 2” in the Detroit News by Mike Neimoyer. Let me first give you the commentary about the rest of the aspects of the game:

The storyline is thinly tied together and barely cohesive.

OK… admittedly he gripes about the storyline. A lot of times that’s simply because you are doing a sequel of an established IP. Moving on… (emphasis mine):

The graphics are beautiful, and the environments are varied. Lush jungle or swamp areas appear suddenly in the midst of the glacial ice fields, with waterfalls and towering trees, and then there’s parched desertscapes or weather-battered coastal regions.

Not only is the landscape new and varied, but the Akrid, the natural inhabitants of the planet E.D.N. III, are back with new shapes, new designs and a whole new set of (usually grumpy) attitudes. The scale of the largest of the creatures, the “Cat-G type” is impressive to say the least. Like, God of War III-scale impressive.

The voice acting, for the most part, is well-done and up to the task. The music itself, however, is excellent. Swelling orchestral pieces accentuate the action sequences, and give the game an epic feel that would have been missing if they had used some “generic rock-style track #2” soundtrack. Well done, Capcom.

Ok… so we have this love fest on the environment, the character modeling, the voice acting, and the soundtrack (he even mentions later that he would love to have a CD of the soundtrack). So what can possibly be wrong? Here’s a montage (again, emphasis mine):

The game is designed from the ground up to take advantage of four-player cooperative play. And heaven help you if you don’t have friends to play the game with. As 1UP.com states, “Brain-dead, unhelpful, and unresponsive, the computer-controlled team members are a liability rather than a resource.”

You truly, truly need a human companion or three to completely appreciate what Lost Planet 2 has to offer. For example, during one big boss battle, there are four separate tasks that need to be completed simultaneously. With four humans working together, this bit of teamwork wouldn’t be too difficult. Unfortunately, if you’re playing solo with AI teammates, you’re pretty much left to a snarled tangle of frustration and trial-and-error.

The level design is adequate, but I think that too much emphasis was put on the multiplayer portion, and not enough consideration for the solo player who will be reduced to using the criminally stupid AI companions.

Damn… so we have a Left 4 Dead-style game that is based around the idea that you have to cooperate with your teammates in order to not only survive but to actually complete mandatory parts of the campaign… and yet they don’t provide you with the companions that can do so.

Early on we were all letting our enemies die quickly because we lacked the capability to make them smarter.

In the era of single-player, shooting gallery-style games, having sub-par AI wasn’t too bad. After all, our fallback mantra was “the enemy won’t live long enough to show off his AI anyway.” I knew that was just a bad crutch when we were all saying it. The truth is, early on we were all letting our enemies die quickly because we lacked the capability to make them smarter! We were actually relieved that our characters were dying quickly. That managed to fit well with our other AI mantra: “Don’t let the AI do anything stupid.” Unfortunately, the chance of the AI doing something stupid rose exponentially with the amount of time that it was visible (and alive).

We are spending all this money on graphics, animation, voice actors, musicians… and leaving our AI to fester like an open sore.

Now that we are expecting AI teammates or squadmates or companions to come along for the ride, we have a much harder challenge. (Back in 2008, I wrote about this in my (at the time) regular column on AIGameDev in an article titled “The Art of AI Sidekicks: Making Sure Robin Doesn’t Suck”.) The problem is, we are spending all this money on graphics, animation, voice actors, musicians… and leaving our AI to fester like an open sore. Certainly, it takes more money and time to develop really good AI than it does to do a soundtrack (I can speak to this, by the way… I was a professional musician a long time ago and am perfectly comfortable with anything from writing, arranging, and recording multi-track electronic grooves to penning entire sweeping orchestral scores. But that was in a previous life.), but it seems like a little effort might be called for. After all, the necessity of multi-player was built into the game design from the start… the necessity of a lush soundtrack was not.

In defense of game companies, however, I’m very aware that good AI people are exceedingly and increasingly hard to find. The focus of the industry has changed in the past few years so that companies are trying to do better. However, doing better often calls for a lot more AI-dedicated manpower than they have. With many companies trying to find AI people all over the place, demand has really outstripped supply. Some companies have had ads up for AI programmers for 6 to 9 months! They just aren’t out there.

Perhaps this is a time to pitch Intrinsic Algorithm’s AI consulting services?

And now back to our program…

So it isn’t always that the company doesn’t care or won’t spend the money on it. It’s often just the fact that AI is a very difficult problem that calls for a very deep skill set. Unfortunately, most game development programs really don’t address game AI beyond “this is a state machine”. Academic AI programs are good for “real world AI” but don’t address the challenges the game industry faces. Unfortunately, many academic AI institutions and their students don’t know this until they are rebuffed for suggesting academia-steeped techniques that will fall flat in practice. (And no, a neural network would not have saved the AI in Lost Planet 2.)

So… in the mean time, here’s the suggestion: If your people don’t have the chops to make the required minimum AI, don’t design a game mechanic that needs that AI.

Dating Sims: A New Frontier for RPG AI?

Thursday, May 27th, 2010

It’s amazing what pops up in my daily Google Alerts for “game AI” (although I’m getting tired of Allen Iverson news). This one caught my eye, though. On a blog called “Win My Ex Back” (no thank you, by the way), there was a post titled Online Dating Sim Game. There wasn’t a lot of detail about a specific game other than the mention that “dating simulation games are among the new genres of online gaming that depicts romance.” To give an idea of what the author (Andy Jill) has in mind, I quote his summary:

It’s a simulation game where the main character that you’ll play (commonly fictional characters) has to achieve specific goals. The most typical one is to date numerous and different women and to have high level of relationship and among them within specified time limit. Generally, the game character must have enough funds by either securing jobs or other income-generating activities such as business.

In the same manner, attributes of the character is important in the game. Such attributes can be improved by doing different task and accomplishing it within the time limit. Most of these tasks take time to be accomplished and games of this type have real-time to them.

The author goes on to describe what apparently was the first online dating sim, “Dokyusei” (or “Classmates”), from 1992. Again to quote:

In this classic dating sim game, you will be controlling a male avatar that is surrounded by various female game characters. The game play will involve conversations with a variety of artificial intelligence (AI)-controlled girls, in which you will attempt to increase your “internal love meter” by means of right choices of dialogues. The game usually last for a limited game time like a month or a year.

Once the game is finished, your character may lose the game if it failed to win the hearts of any girl. However, you may “finish” one or more girls, usually by having sex with them or by attaining eternal love.

18 years later and we are still relegated to dialog trees.

To me, this sounds like the Sims titles… or for that matter, how some people try to play Mass Effect 2. Anyway, the point is not the gameplay mechanism (which is somewhat standard). Even the personality and mood meter facets are not all that uncommon. Again, think Sims 3. What I find almost joltingly alarming is that this game from 1992 was based on a dialog tree. Sure, that’s fine. We’ve had dialog trees for a while. The problem is… that’s how we would still have to do things! 18 years later and we are still relegated to dialog trees.

The only reason that the Sims doesn’t have dialog trees is that there isn’t real “dialog”. At least not the intelligible sort. Sure, there are behavior selection trees for choosing what to do next, but there is a subtle difference. When you select an action in the Sims, the game designer hasn’t necessarily hand-crafted what the response (or potential responses) will be. In a dialog tree, you are always in a specific place in that tree until you exit it. In the Sims, all that happens when you select an action is that you vaguely change the internal state of the character you are interacting with. That character’s actual response is calculated from a ridiculously expansive set of state values and formulas, with a stochastic factor tossed in for good measure. You aren’t really ever sure exactly what you are going to get… although you may have a good idea what it might be.
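As a toy illustration of that difference, here is a minimal sketch of the Sims-style approach. The state variables, weights, and response names are all my own invention (this is not Maxis code); the point is that the player’s action only nudges internal state, and the response falls out of that state plus a random factor:

#include <cstdlib>

// Hypothetical internal state -- the action never picks a response directly.
struct SocialState {
    float mood       = 0.5f;   // 0 = miserable, 1 = delighted
    float friendship = 0.0f;   // accumulated relationship score
};

enum class Response { Laugh, Shrug, Slap };

// Selecting an action just shifts internal state...
void ApplyAction(SocialState& s, float moodDelta, float friendDelta) {
    s.mood       += moodDelta;
    s.friendship += friendDelta;
}

// ...and the response is computed from state, formula, and a stochastic factor.
Response ChooseResponse(const SocialState& s) {
    float noise = static_cast<float>(rand()) / RAND_MAX * 0.3f;  // good measure
    float score = 0.6f * s.mood + 0.4f * s.friendship + noise;
    if (score > 0.8f) return Response::Laugh;
    if (score > 0.4f) return Response::Shrug;
    return Response::Slap;
}

Contrast that with a dialog tree, where the same input always lands you at the same node.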

On the user input level, it is still reminiscent of Zork or the early Ultima games.

I have to assume that this dating sim — and all others like it — would rely on a representation of actual dialog, however. And that brings us back to the dialog tree. Natural language processing isn’t going to cut it. Emily Short does a good job of it in her interactive fiction, but the root of it all is still keyword-based input. As amazing as her work is, on the user input level it is still reminiscent of Zork or the early Ultima games. Translated into a dating sim, the user’s free-form input could very well be reduced to “ask job”, “tell age”, and “compliment boobs”. In effect, it wouldn’t be all that different from the chat room shorthand of “a/s/l” for “age/sex/location”. Even if the game then gave elaborate (yet pre-scripted) answers to your questions, it would still be annoying to have it reply “I don’t understand what you mean” when you don’t guess the right keywords to use. Additionally, using that sort of shorthand isn’t ever going to feel really “romantic”, is it?
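For anyone who never suffered through a keyword parser, here is roughly what that input layer looks like. The phrases and canned replies are entirely made up; the part to notice is the fall-through failure case:

#include <map>
#include <string>
#include <iostream>

// A toy keyword matcher in the Zork/Ultima mold: free-form input is scanned
// for known keywords, and anything else hits "I don't understand."
std::string Respond(const std::string& input) {
    static const std::map<std::string, std::string> replies = {
        {"ask job",  "I'm a moisture farmer. Less glamorous than it sounds."},
        {"tell age", "A lady never tells."},
        {"compliment", "Oh, stop it, you."}
    };
    for (const auto& [keywords, reply] : replies)
        if (input.find(keywords) != std::string::npos)
            return reply;
    return "I don't understand what you mean.";
}

int main() {
    std::cout << Respond("ask job") << "\n";        // matched keyword
    std::cout << Respond("you're lovely") << "\n";  // wrong keywords: fails
}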

I’m halfway through Noah Wardrip-Fruin’s book, Expressive Processing, where he talks extensively about the history and state of interactive dialog and drama. Even with all the work that has gone into this field for over 40 years, I’m sorry to say that we are nowhere near being able to replicate anything other than a stilted parody of a romantic courtship.

That being said, it doesn’t matter how deep the AI is behind the scenes. Until we can solve the input/output problem, our AI is trapped inside itself. Animation has gotten a lot better of late—especially facial and speech animation. I know there are adult-themed games out there and I assume that they are taking full advantage of photo-realism (not to mention realistic physics). However, all that nifty facial animation and subtle gesturing would still have to be tied to canned, pre-written dialog. And that is our bottleneck, isn’t it?

I don’t know how to solve it, really. Noah’s book is my first foray into even thinking about this interactive dialog and fiction stuff. (On the other hand, I would be tickled to do the personality, mood, and emotion modeling. That is in my wheelhouse!) I just find it sad that we are still stuck in this world where we can’t really interact with our game characters on a meaningful, natural-feeling level. I do know, however, that when we find the answer, it will be one of the final cusps we need to cross over in games. At that point, there’s not a lot stopping us.

Euphoria over Bad AI in Backbreaker?

Wednesday, May 26th, 2010

I was looking at a review of the new football game, Backbreaker, by the blogger Pastapadre, and I found an interesting combination of observations. First, for those that don’t know, Backbreaker is a football game developed by NaturalMotion. They are known first and foremost for their Euphoria physics engine that creates contextually realistic human body motion. Seeing as one of the biggest complaints about sports games—and football ones in particular—is that the human body physics begins to look canned and repetitive, you would think NaturalMotion had a bit of a head start in that area. The problem is, that isn’t all people gripe about with football games.

While I commend NaturalMotion for attempting to move things forward in this area, there are plenty of things that need to be addressed, if not solved, if the genre is to advance further. Physics isn’t necessarily on the top of the list. But hey, that’s what they do.

…coordinating 11 people being interfered with by 11 other people is a tall order.

This is particularly close to my heart because I’m an AI designer and a huge football fan. I am especially fond of football because of the deep intricacy of the team-based strategy that has to happen on every play. Of course, this is exactly the issue that is the hardest to address from an AI standpoint. Sports games—again, football ones in particular—are ridiculously hard to craft good AI for. For an industry that struggles to put together FPS squad tactics for 2 or 3 people, coordinating 11 people supposedly working together while being interfered with by 11 other people who are also working together is a tall order. The Madden franchise has been doing a passable job of this for some time. Sure, there are golden paths that bubble to the surface all the time, but those seem to be fewer and farther between.

Anyway, in this review, the author points out some interesting frustrations. He addresses it briefly in the first paragraph but I believe it summarizes things well (emphasis mine):

Reaction has been mixed with most gamers enjoying the Euphoria physics, polarization on the single camera angle, and the troubling CPU AI leading to the most concern.

(Brief aside: Who uses “CPU AI”? Not only is that redundant, it says the same thing twice.)

I will skip over his impressions of how Euphoria works. If you want to know all that happy-happy stuff, you can watch a Euphoria sales reel. I will address the AI-specific stuff. He goes on to comment about some of the specifics of how the AI falls flat on its face (emphasis mine).

The offensive output by the CPU has been pitiful. I’ve yet to give up more than a couple first downs on a single drive and still haven’t been scored on. The biggest reason is that the CPU turns the ball over a lot. In four minutes of gameplay it’s been close to an average of three picks thrown by the CPU. In the final demo video I posted I had three picks in three drives off the best offensive team in the game. That was with me being out of the play in all instances and the CPU just making bad passes.

This is summed up by the clincher, which is my point here:

No matter how great the physics are I would not be able to play a football game if the CPU throws 10+ picks each time out.

No matter how great the physics are I would not be able to play a football game…

This certainly seems to be an example of tunnel-vision on a pet feature while ignoring (or being incapable of addressing) the rest of the game that people actually want to experience. Is this a Euphoria sales demo or a football game? This is something that is more prevalent in the game industry than we care to admit. It isn’t just Euphoria—or even physics—as the bad guy either. Swap in “game design”, “story”, “cool weapons”, “sexy chick outfits”, “huge environments” or whatever. AI is often the expression of your world. If your AI is broken, it severs the emotional connection to the game.

He continues:

The CPU goes with a jump pass way too often, whether it be springing forward or backwards, many times resulting in an interception. These aren’t instances where jumping to make a pass even would make some sense as the CPU would have been better off with their feet set.

Again, this apparently is broken decision logic. For those that don’t know football, in the pros a “jump pass” is a rare event only used in certain situations. Commentators will typically hammer on a QB for not throwing with his feet set. In fact, theoretically, you could do a football game without even including an animation for a jump pass and no one would really notice all that much. Therefore, for the author to notice that this is happening too often is rather telling.

More:

The CPU defensive back AI has been terrible in instances where they aren’t running in stride. When they continue to run in stride they seem to play the ball pretty well. If they stop (like on a comeback route or a pass lobbed up for grabs) they’ll start to go the wrong way, make a terrible attempt at the ball, or just stand there. Several times I’ve completed passes with multiple defenders in the area who played the ball horribly wrong. They’ve just stood there and watched the ball go over their heads or watched the receiver make an easy catch.

Again, I’m guessing this is either laziness on the part of the development team, not knowing about football, or an inability to solve the problem. I hope it is the last one. The second is not acceptable if you are actually making a football game. The first… well…

A few more. Apparently it is not just the player AI that is troubling:

Penalties have been really iffy to say the least. I’ve seen roughing the passer called in multiple situations when it shouldn’t have been. I have seen a pass interference [im]properly called in two instances, called once when there was clearly no interference, and in several other situations seen receivers taken completely out before the ball arrives and no penalty called. There also seems to be an issue with roughing the kicker (primarily on punts) where your CPU controlled players commit the penalty way too often and out of the user’s control. I haven’t seen this one much but it has been widely reported.

So this has to do with the logic for detecting penalty situations. These should be, in effect, simple rule-based systems. For example,

if (Contact(player, receiver)
    && !TouchBall(receiver)
    && CatchableBall(receiver)
    && (BallThrown() || ReceiverDownField())
    && ...

If you are botching up your static rule-based systems, then doing the contextual player-reaction AI is going to be a bitch.

Naturally, bad AI tends to lead to exploits:

Exploits have already been found with QB sneaks and the blocking of punts and field goals. These things could really damage the online play experience. The QB sneak problem, combined with the ability to no-huddle because of the lack of fatigue and not having to worry about injuries, could ruin online play. If blocked punts and kicks are prevalent online everyone will end up going for it on 4th downs.

If there is an obvious dominant strategy, you have now taken Sid Meier’s “interesting choices” and condensed them down into “choose this to win”.

This is the natural result… and is always a game-killer. If there is an obvious dominant strategy, you have now taken Sid Meier’s “interesting choices” and condensed them down into “choose this to win”. Many games with bad AI could still thrive in the online world. However, in a game where you only control 9% of your team and are entirely dependent on the other 91% for success, you can do all the right things and still get rolled. That is not fun.

My point with all of this really has very little to do with the game itself and even less to do with the Euphoria engine. In fact, a quick browse through YouTube shows that there are some people who think the AI is just fine (although watching the videos and descriptions shows that people don’t really know what AI is or what good AI might look like). That being said, your mileage may vary. My point was the juxtaposition of the two threads in the author’s review: you need more than pretty physics to make a compelling game.

This is really only a modified version of the graphics vs. AI debates.

This is really only a modified version of the graphics vs. AI debates. Originally, studios made pretty games with bad AI (and even bad physics). Now we seem to have moved on to making better physics… and with products like Euphoria, even better physics that take the load off of AI programmers trying to figure out what human reactions should be. None of that solves stupid AI play, though. And until we do that, we are going to be seeing otherwise decent games get shelved.

Fritz Heckel’s Reactive Teaming

Tuesday, May 25th, 2010

Fritz Heckel, a PhD student in the Games + Learning Group at UNC Charlotte, posted a video (below) on the research he has been doing under the supervision of G. Michael Youngblood. He has been working on using subsumption architectures to create coordination among multiple game agents.

When the video first started, I was a bit confused in that he was simply explaining an FSM. However, when the first character shared a state with the second one, I was a little more interested. Still, this isn’t necessarily the highlight of the video. As more characters were added, they split the goal of looking for a single item amongst themselves by partitioning the search space.

This behavior certainly could be used in games… for example, with guards searching for the player. However, this can be solved simply using other architectures. Even something as basic as influence mapping could handle this. In fact, Damián Isla’s occupancy maps could be tweaked accordingly to allow for multiple agents in a very life-like way. I don’t know what Fritz is using under the hood, but I have to wonder if it isn’t more complicated.
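To show why I say “solved simply,” here is a bare-bones occupancy-map sketch. The structure and names are my own assumptions, not anything from Fritz’s system or Damián’s implementation; the trick is that clearing a cell implicitly divides the search space among any number of agents with no explicit negotiation:

#include <vector>
#include <cstddef>

// Each cell holds the probability that the target is there.
// Assumes the map has been initialized with at least one cell.
struct OccupancyMap {
    std::vector<float> p;   // per-cell probability the target is here

    size_t BestCell() const {
        size_t best = 0;
        for (size_t i = 1; i < p.size(); ++i)
            if (p[i] > p[best]) best = i;
        return best;
    }
    void MarkSearched(size_t cell) { p[cell] = 0.0f; }  // nothing here; move on
};

// Each agent in turn claims the current best cell and zeroes it, so the
// next agent's query naturally lands somewhere else.
size_t ClaimSearchTarget(OccupancyMap& map) {
    size_t cell = map.BestCell();
    map.MarkSearched(cell);
    return cell;
}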

Obviously, his searching example was just a simple one. He wasn’t setting out to design something that allowed people to share a searching goal, per se. He was creating an architecture for cooperation. This, too, has been done in a variety of ways. Notably, Jeff Orkin’s GOAP architecture from F.E.A.R. did a lot of squad coordination that was very robust. Many sports simulations do cooperation — but that tends to be more playbook-driven. Fritz seems to be doing it on the fly without any sort of pre-conceived plan or even pre-known methods by the eventual participants.

From a game standpoint, it seems that this is an unnecessary complication.

In a way, it seems that the goal itself is somewhat viral from one agent to the next. That is, one agent in effect explains what it is that he needs the others to do and then parcels it out accordingly. From a game standpoint, it seems that this is an unnecessary complication. Since most of the game agents would be built on the same codebase, they would already have the knowledge of how to do a task. At this point, it would simply be a matter of having one agent tell the other “I need this done,” so that the appropriate behavior gets switched on. And now we’re back to Orkin’s cooperative GOAP system.
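A minimal sketch of that simpler alternative might look like the following, assuming a shared request queue and behavior IDs that every agent already understands (all names are hypothetical):

#include <queue>
#include <string>

// Since every agent shares the same codebase, cooperation can be just a
// request queue: the initiating agent posts "I need this done" and any
// idle agent switches on the behavior it already knows how to run.
struct TaskRequest {
    std::string behaviorId;   // e.g. "SearchRoom" -- already in the codebase
    int         targetEntity;
};

class Squad {
    std::queue<TaskRequest> requests;
public:
    void RequestHelp(const TaskRequest& r) { requests.push(r); }

    // Called by each idle agent; returns true if it picked up work.
    bool TryAssign(TaskRequest& out) {
        if (requests.empty()) return false;
        out = requests.front();
        requests.pop();
        return true;   // agent now activates the matching behavior
    }
};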

On the whole, a subsumption architecture is an odd choice. Alex Champandard of AIGameDev pointed out via Twitter:

@fwph Who uses subsumption for games these days though? Did anyone use it in the past for that matter?

That’s an interesting point. I have to wonder if, as is the case at times with academic research, it is not a case of picking a tool first and then seeing if you can invent a problem to solve with it. To me, a subsumption architecture seems like it is simply the layered approach of a HFSM married with the modularity of a planner. In fact, there has been a lot of buzz in recent years about hierarchical planning anyway. What are the differences… or the similarities, for that matter?

Regardless, it is an interesting, if short, demo. If this is what he submitted to present at AIIDE this fall, I will be interested in seeing more of it.

Raph Koster on Dynamic POIs

Monday, May 24th, 2010

Raph Koster, best known as the lead designer of Ultima Online and the creative director for Star Wars Galaxies, wrote an interesting post on his blog at the end of April. In the post, “Dynamic POIs”, he discusses how, in Star Wars Galaxies, they constructed a method for the computer to generate automatic content in the MMO world.

The term POI stands for point of interest and dates back to his UO days. At the time, those POIs were hand-placed by the design team. In a world the size of SWG, this was simply not feasible. He comments that even a room full of junior designers churning these encounters out was simply not enough. The proposed solution was to algorithmically generate these locations and fill them with life.

One of the complicating factors in this was that they weren’t simply placing the D&D staple of a wandering monster. They were generating a camp or facility full of these agents complete with scenery… and plot. Here’s a quote from the article that explains it better.

Don’t roll up just a bandit; roll up a little bandit campsite, with a tent, a campfire, three bandits, one of whom hates one of the others, a young bandit who isn’t actually a bad guy but has been sucked into the life because he has a young pregnant wife at home… In fact, maybe have an assortment of bandits — twenty possible ones maybe. Then pick three for your camp. That way you always get a flavorful but slightly different experience.

So what we are looking at is a random type of encounter with a random population, in a random location. So far that’s pretty groovy. If you want a description of what it was like, SWG producer Haden Blackman writes about his own encounter with a dynamic POI.

One advantage that they had with SWG that didn’t exist with UO is that the SWG map was procedurally generated in the first place. That made generating a random encounter a simple extension of the map generation. They could create and destroy these places by simply making sure there was no one around.

…it was just as hard to create the dynamic content as it was to create static content.

In order to provide some variety, these encounters could also be designed modularly. The actors might be different or they may send you on different missions than a similarly constructed encounter elsewhere. The combinatorial complexity of available encounters rises fairly quickly that way, of course. Unfortunately, Raph says they weren’t quite able to make this whole process completely data-driven. The result is that it was just as hard to create the dynamic content as it was to create static content. In particular, the scripting of the encounters was tricky. In the end, dynamic POIs were pulled back out of the game.
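As a toy version of the modular-assembly idea from the quote above (all content and names invented by me), even a small authored pool multiplies quickly: a pool of 20 bandits taken 3 at a time already gives C(20,3) = 1,140 distinct camp populations before you vary tents, locations, or missions.

#include <vector>
#include <string>
#include <random>
#include <algorithm>

// Keep a pool of authored actors and sample a few per camp.
std::vector<std::string> RollBanditCamp(std::mt19937& rng) {
    std::vector<std::string> pool = {
        "grizzled veteran", "reluctant young father", "cruel lieutenant",
        "drunken lookout", "scheming cook" /* ...up to 20 authored bandits */
    };
    std::shuffle(pool.begin(), pool.end(), rng);
    pool.resize(3);            // pick three for this camp
    return pool;               // attach tent, campfire, and plot hooks here
}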

He hasn’t written off the entire idea yet, however. Neither have I. In fact, I spoke about it a little at GDC Austin in 2009. (Get my slides here.) Raph was not able to attend that lecture (for which I believe Sheri Graner Ray gave him a tongue-lashing). I don’t pretend to solve the problem of dynamic content to the extent that Raph writes about. The point I addressed was that there are ways for AI-controlled NPCs to dynamically deal with a shifting population and environment that can lead to more expressive encounters game-wide.

Anyway, with the advent of dynamic pacing in Left 4 Dead via the much-referenced AI Director, there is a lot more attention being paid to how we can break out of entirely hand-placed, hand-paced, and often linear content. Additionally, while the public’s appetite for “sandbox” worlds is increasing (thanks in no small part to games like GTA4), the industry can’t sustain GTA4-like budgets of $100 million for very long. If the procedural content issue were solved (or at least furthered along somewhat), then we could satisfy both the demands of our players and the restrictions of budget.

One place to look might be the work that is being done by the likes of interactive fiction writer and programmer Emily Short, and the work that is continually being done by Michael Mateas and the Expressive Intelligence Studio at UCSC. Someplace in there is a hybrid of dynamic, data-driven storytelling that we can eventually work with to create truly open-ended content.

We can only hope… and work.

Boston All-Stars Weigh in on AI

Monday, February 15th, 2010

Back in November, there was a get-together of Boston Post Mortem (billed as “games and grog, once a month”) that had a panel of local AI folks. The panelists’ names are familiar to many of us… Damián Isla, Jeff Orkin, and John Abercrombie. It was moderated by Christian Baekkelund, whom I had the opportunity to have dinner with in Philly when I was in town for the GameX Industry Summit. Thankfully, this panel was recorded by Darius Kazemi and posted on his Vimeo site and on the Boston Post Mortem page. I embed it here for simplicity’s sake.

Anyway, a few comments on the video:

You’re Doing it Wrong

The first question to the panel was “what do new AI developers do wrong” or something to that effect. Damián set up the idea of two competing mentalities… gameplay vs. realistic behavior. He and John both supported the notion that the game is key and that creating a system just for the sake of creating it can range anywhere from waste of time to downright wrong.
…create autonomous characters and then let the designers create worlds…
The thing that caught me was Jeff’s response, though (5:48). His comment was that AI teams can’t force designers to be programmers through scripts, etc. That’s not their strength and not their job. While that’s all well and good, it was his next comment that got me cheering. He posited that it is the AI programmer’s job to create autonomous characters and then let the designers create worlds that let the characters do their thing.
Obviously, it isn’t a one-way street there… the designer’s job isn’t to show off the AI. However, I like the idea of the designers not having to worry about implementing behavior at all — just asking for it from the AI programmer and putting the result into their world. John’s echo was that it’s nice to build autonomous characters but with overrides for the designers. It isn’t totally autonomous or totally scripted. This sounds like what he told me about his Bioshock experience when I talked to him a few years ago.
I happen to agree that the focus needs to be on autonomy first and then specific game cases later. The reason is that, too often, the part of the AI that looks “dumb” or “wrong” shows up when the AI isn’t being told to do anything specific. For example, how often would you see a monster or soldier just standing there? Some of the great breakthroughs in this came from places like Crytek in Far Cry, Crysis, etc. The idea of purposeful-looking idle behaviors was a great boon to believable AI.

The other advantage to creating autonomy first was really fleshed out by Jeff Orkin’s work on F.E.A.R. (Post-Play’em) No more do designers (or even AI programmers) have to worry about telling an agent exactly what it should do in a situation. By creating an autonomous character, you can simply drop them in a situation and let them figure it out on their own. This can be done with a planner, as Jeff did, a utility-based reasoner, or even a very detailed behavior tree. Like John said above, all you need to remember is to provide the override hooks in the system so that a designer can break an AI out of its autonomy and specifically say “do this” as an exception rather than hand-specifying each and every action.
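Here is a minimal sketch of that “autonomous by default, scripted by exception” idea. The class and behavior names are hypothetical, and the autonomous reasoner is just a stub standing in for a planner, utility system, or BT:

#include <optional>
#include <string>

class Agent {
    std::optional<std::string> designerOverride;  // e.g. "HoldThisDoor"

    std::string DecideAutonomously() {
        // Stand-in for the real reasoner: planner / utility / behavior tree.
        return "TakeCover";
    }
public:
    void ForceBehavior(const std::string& b) { designerOverride = b; }
    void ReleaseOverride()                   { designerOverride.reset(); }

    std::string SelectBehavior() {
        if (designerOverride) return *designerOverride;  // scripted exception
        return DecideAutonomously();                     // autonomous default
    }
};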
What’s in Our Way?
The next question was about “the biggest obstacle” for game AI moving forward. Jeff’s first answer was about authoring tools. This has been rehashed many times over. John expressed his frustration at having to start from scratch all the time (and his jealousy that Damián didn’t have to between Halo 2 and 3).
…to get your AI reviewed well, you need to invest in great animators.
Damián’s comment was amusing, however. He suggested that to get your AI reviewed well and have players say “that was good AI”, you need to invest in great animators. This somewhat reflects my comment in the 2009 AI Summit where I pointed out that AI programmers are in the middle of a pipeline with knowledge representation on one side and animation on the other. It doesn’t do any good to be able to generate 300 subtle behaviors if the art staff can only represent 30 of them.
On the other hand, he reiterated what the other guys said about authoring tools and not having to re-invent the wheel. He supports middleware for the basic tasks like A*, navmesh generation, etc. If we don’t have to spend time duplicating the simple tasks over and over, we can do far more innovation with what we have.
That’s similar to my thought process when I wrote my book, “Behavioral Mathematics for Game AI”. You won’t see a description of FSMs, A*, or most of the other common AI fare in the book. How many times has that been done already? What I wanted to focus on was how we can make AI work better through things other authors haven’t necessarily covered yet. (Ironically, it was Jeff Orkin who told me “I don’t think anyone has written a book like that. Many people need to read a book about that. Heck… I’d read a book about that!” — Thanks Jeff!)
What Makes the Shooter Shot?
The next question (11:45) was about what FPS-specific stuff they have to deal with.
When Halo 3 came out, they could afford fewer raycasts than Halo 2.
Damián talked about how their challenge was still perception models. They really tried to do very accurate stuff with that in Halo. He pointed out that raycasting is still the “bane” of his existence because it is so expensive still. Despite the processors being faster, geometry is far more complex. Alarming note: When Halo 3 came out, they could actually afford fewer raycasts than on Halo 2. Now that sucks! Incidentally, the struggle for efficiency in this area very much relates to Damián’s “blackboard” interview that I wrote about last week.
Interestingly, Jeff argued the point and suggested that cheating is perfectly OK if it supports the drama of the game. I found this to possibly be in conflict with his approach to making characters autonomous rather than scripted. Autonomy is on the “realistic” end of the spectrum and “scripted” on the other. The same can be said for realistic LOS checks compared to triggered events where the enemy automatically detects the player regardless.
John split the difference with the profound statement, “as long as the player doesn’t think you’re cheating, it’s totally OK.” Brilliant.
AI as the Focus or as a Focusing Tool
Supporting the overall design of how the game is to be experienced is just as important as the naked math and logic.
In response to a question about what Damián meant about “AI as a game mechanic,” he offered an interesting summation. He said that, from a design standpoint, the AI deciding when to take cover and when to charge is as important as how much a bullet took off your vitality. That is, supporting the overall design of how the game is to be experienced is just as important as the naked math and logic.
He also pointed out that the design of a game character often started out with discussions and examples of how that AI would react to situations. “The player walks into a room and does x and the enemy will do y.” Through those conversations, the “feel” of a type of character would be created and, hopefully, that is what the player’s experience of that type of character would end up being.
In my opinion, a lot of this is accomplished by being able to not only craft behaviors that are specific to a type of enemy (the easy way of differentiation) but also parameterizing personality into those agents so that they pick from common behaviors in different ways. That is, something that all enemies may do at some point or another but different types of enemies do at different times and with different sets of inputs. I went into this idea quite a bit in my lecture from the 2009 AI Summit (Breaking the Cookie-Cutter: Modeling Individual Personality, Mood, and Emotion in Characters) where I talked about incorporating personality into game agents.
The Golden Rules of AI (20:30)
Christian started things off by citing the adage, “it’s more important to not look stupid than to look smart.” No big surprise there.
The player feels good about killing someone if the kill is challenging.
John said that the AI must be “entertaining”. My only problem with this is that different people find different things entertaining. It’s kinda vague. Better to say that the AI must support the design. Both John and Jeff extended this idea by talking about providing a challenge… the player feels good about killing someone if the kill is challenging.
Damián sucked up to my buddy Kevin Dill a little bit by citing a quote that he made in our joint lecture at the GameX Industry Summit, The Art of Game AI: Sculpting Behavior with Data, Formulas, and Finesse. Kevin said that AI programmers must be an observer of life. I certainly agree with this notion… in fact, for years, my little tag at the end of my industry bios has said, “[Dave] continues to further his education by attending the University of Life. He has no plans to graduate any time soon.” In fact, Chapter 2 of my book is titled “Observing the World”… and Kevin was my tech editor. It should be intuitively obvious, even to the most casual observer, that Kevin stole that idea from me! Damián should have cited me in front of a bunch of game developers! So there!

Anyway, Damián’s point was not only how Kevin and I meant it — observing how people and animals do their thing, but also in being a very detailed and critical observer of your AI. There must be a discipline that scrubs out any little hiccup of animation or behavior before they pile up into a disjointed mess of little hiccups.
Jeff agreed to some extent but related something interesting from the development of F.E.A.R. — he said that most development schedules start by laying out the behavior for each type of character and then, if there is time, you go back and maybe try to get them to work together or with the environment, etc. With F.E.A.R., they started from the beginning with trying to work on the coordinated behaviors. With all the shouting and chaos going on with these guys working against you, you don’t notice the little glitches of the individuals quite as much.
Damián backtracked and qualified his comments… not just hunting down everything that is wrong… but rather everything that is wrong that matters.

Look but Don’t Touch
If you are fighting for your life, you don’t notice the details as much.
John brought up an interesting helpful hint. He talked about how, by turning on God mode (invulnerability), you can dispense with the fight for survival and really concentrate on how the AI is behaving. If you are fighting for your life, you don’t notice the details as much.
I have to agree. That’s why when I go to places like E3, I would rather stand back and watch other people play so I can observe what’s going on with the AI. (I always fear that demo people at E3 are going to be put off by my refusal to join in.) This is one of the problems I have when I’m playing for my Post-Play’em articles. I get so caught up in playing that I don’t look for things anymore. One solution is to have a house full of teenagers… that gives you ample time to just sit and watch someone else play games. I recommend it to anyone. (I believe John and Damián have started their respective processes of having teenagers… eventually.)
Emergence… for Good or Evil
If the AI can have the freedom to accidentally do something cool and fun, it also has the freedom to do something dumb or uninteresting.
In response to a question asking if AI has ever done anything unexpected, Damián spoke about how the sacred quest of emergence isn’t always a good thing. He said that emergent behavior must fall out of the AI having the freedom to do things. If the AI can have the freedom to accidentally do something cool and fun, it also has the freedom to do something dumb or uninteresting. Because of that, emergence has a really high cost in that it can be more of a drag on gameplay than the occasional gem it might produce.
Christian qualified the question a little more by asking if there was a time when the emergent behavior was found well before ship and the team thought it was something to encourage. He cited examples from his own experience. John talked about emergent gameplay that resulted from simple rules-based behavior in Bioshock. You could set a guy on fire, he would run to water, then you could shock the water and kill him. Was it necessary? No. Was it fun for the player to be creative this way? Sure.
To Learn or Not to Learn?
One question that was asked was about using machine learning algos. Christian started off with a gem about how you can do a lot of learning just by keeping track of some stats and not resorting to the “fancier stuff.” For example, keeping track of pitches the player throws in a baseball game doesn’t need a neural network. He then offered that machine learning can, indeed, be used for nifty things like gesture recognition.
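Christian’s point can be expressed in a few lines: “learning” the pitcher’s tendencies is just a frequency table, no neural network required. The enum and structure here are my own invention:

#include <array>
#include <algorithm>

enum Pitch { Fastball, Curve, Slider, Changeup, PITCH_COUNT };

struct PitchModel {
    std::array<int, PITCH_COUNT> seen{};    // running counts, zero-initialized

    void Observe(Pitch p) { ++seen[p]; }    // update after each pitch

    Pitch MostLikely() const {              // predict the next one
        return static_cast<Pitch>(
            std::max_element(seen.begin(), seen.end()) - seen.begin());
    }
};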
Oh… are you using neural networks?
Jeff admitted that he hates the question that comes up when people find out he does AI in games. They often ask him, “Oh… are you using neural networks?” I think this really hits home with a lot of game AI folks. Unfortunately, even a cursory examination of the AI forums at places like GameDev.net will show that people still are in love with neural networks. (My quip is that it is because The Terminator claims to be one so it’s real sexy.) Thankfully, the AIGameDev forum is a little more sane about them although they do come up from time to time. Anyway, Jeff said he has never used them — and that he’s not even sure he understands them. While he thinks that NNs used in-game like in Creatures or Black & White are cool, they are more gimmicky and not as useful with the ever-increasing possibility space in today’s games.
I Have a Plan
It is hard to accommodate the ever-changing goals of the human players.
Jeff acknowledged that the planner revolution that he started has graduated to hierarchical planners to do more complex things like squad interaction. However, one big caveat that he identified was when you incorporate humans into the mix alongside the squad. It is hard to accommodate the ever-changing goals of the human players.
This, of course, brought us back to the idea of conveying intent — one of the nasty little problem spaces of game AI. Damián explained it as a function of interpretation rather than simply one of planning. I agree that this is going to be a major issue for a long time until we can crack the communication barrier such as voice recognition and natural language processing. Until we can tell our AI squad-mates the same thing we can tell our human squad-mates and expect a similar level of understanding, we are possibly at an impasse.
Moving Forward by Standing Still
As the worlds become more complicated, we have to do so much more work just to do the same thing we did before.
Someone asked the panel what sort of things that they would like to see out of smaller, indie developers that might not be able to be made by the bigger teams. To set the stage, Damián responded with a Doug Church quote from the first AIIDE conference. Doug said that we have to “move forward by standing still.” As the worlds become more complicated, we have to do so much more work just to do the same thing we did before. (See Damián’s note about the LOS checks in Halo 2 and 3 above.) Damián suggested that the indie space has more opportunities to move forward with this because they aren’t expected to do massive worlds. Instead, they can focus on doing AI-based games.

This is what Matt Shaer’s interview with me in Kill Screen magazine was going after. With my Airline Traffic Manager as one of his examples, he spoke specifically about how it is the smaller, dedicated developer who is going after the deeper gameplay rather than the bigger world. I hope this is the case. We have seen a few examples so far… there are likely more to come.
Jeff cited interactive storytelling as a possible space for development in this area as well. This is what we are after with one of our sessions at the 2010 AI Summit when Dan Kline, Michael Mateas, and Emily Short deliver AI and Interactive Storytelling: How We Can Help Each Other.
AI on the GPU?
Someone mentioned seeing a demo of improved LOS checks by using the GPU and asked if AI programmers should be pushing that more. John somewhat wittily suggested that AI programmers won’t be using the graphics chips until someone can find a system where a graphics chip isn’t being used at all. Damián was a little more direct in saying that the last people he wanted to work in conjunction with were the graphics programmers. This reminds me of a column I wrote on AIGameDev a few years back, Thou Shalt Not Covet Thy Neighbor’s Silicon. The graphics guys finally got “their space.” They aren’t letting us have any of it any time soon!
Damián pointed out that most advanced AI needs a lot of memory and access to the game state, etc. Those are things that the GPU isn’t really good at. About the only thing that you could do without those things is perhaps flocking. I agree that there really isn’t a lot we can do with the GPU. He did concede that if ATI wanted to do a raycasting chip (rather than borrowing time from the hated graphics guys), that would be beautiful… but that’s about it.
Director as Designer?

Someone asked about the possibility of seeing the technology that was behind Left 4 Dead’s “AI Director” being used as a sort of game designer, instead. Damián pointed out that the idea of a “meta-AI” has been around for years in academic AI and that it is now really starting to get traction in the game world. I agree that the customized gameplay experience is a major turning point. I really like the idea of this as it really comes down to a lot of underlying simulation engine stuff. That’s my wheelhouse, really.
Where to Now?
They closed with some commentary on the future of game AI, which I will leave to you to listen to. Suffice to say that for years, everyone has been expecting more than we have delivered. I’m not sure if that is because we are slacking off or because they have vastly underestimated what we have to do as AI designers and programmers. At least, with all the attention that is being brought to it through the Game AI Conference that AIGameDev puts on, the AI Summit now being a regular feature at GDC, and other such events, we are going to be moving forward at some speed.
Hang in there…

Damián Isla Interview on Blackboard Arch.

Thursday, February 11th, 2010
In preparation for the GDC AI Summit (and the inevitable stream of dinner conversations that will be associated with GDC), I have tried to catch up on playing some games and also getting current with papers and interviews. On the latter point, Alex Champandard at AIGameDev keeps me hopping. It seems he is almost constantly interviewing industry people on the latest and greatest stuff that is happening in the game AI realm.
A few weeks back, he interviewed Damián Isla about blackboard architectures and knowledge representation. Seeing as I always learn something from Damián, I figured that interview needed to go to the top of the list. Here are some of my notes as I listen through the interview.
Make it Bigger
Damián’s first point to Alex was that a major benefit of a blackboard architecture was scalability. That is, putting together a rule-based system that performs a single decision is simple… the hard part is when you have 10,000 of those rules going on at the same time.
In a similar vein, he said it was about shared computation. Many of the decisions that are made use the same information. Blackboards, to Damián, are an architectural feature that can decouple information gathering and storage from the decisions that are made with that information. If one part of a decision needs to know about a computed value regarding a target, that information could potentially be used by another portion of the decision engine entirely… even for a competing action. By calculating the requisite information once and storing it, the decision algorithms themselves can simply look up what they need.
This is similar to the approach that I espouse in my book. I don’t directly say it, but in a way it is implied. With my approach of compartmentalizing tiers of calculations, many of the individual layers of information that need to be processed are handled independently of the final decision. The information need only be collected and tabulated one time, however. The various decision models can simply look up what they need. In a multi-agent system, different agents can use much of the same information as well. While distance to a target may be personal, the threat level of a target (based on weaponry, health, cover, etc.) may be the same for all the enemies. That threat level can be calculated once and saved for everyone to use.
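A minimal sketch of that calculate-once-and-share idea might look like the following. The threat formula and interface are invented for illustration (this is not drawn from any shipped engine): the first decision that needs a target’s threat level pays for the calculation, and everyone else reads the cached value.

#include <unordered_map>

struct Blackboard {
    std::unordered_map<int, float> threatByTarget;  // targetId -> threat

    float GetThreat(int targetId) {
        auto it = threatByTarget.find(targetId);
        if (it != threatByTarget.end()) return it->second;   // cache hit
        float threat = ComputeThreat(targetId);              // expensive, done once
        threatByTarget[targetId] = threat;
        return threat;
    }
private:
    static float ComputeThreat(int /*targetId*/) {
        // Stand-in for the real formula (weaponry, health, cover, ...).
        return 0.75f;
    }
};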
Back to Damián…
He mentioned that at the Media Lab at MIT they used blackboards for many things like logging, testing, and debugging. That is something I haven’t necessarily thought of. They also had observer systems running over a network. They shared parts of the blackboard info so that the fully running games were doing everything but thinking.
Alex reiterated that a blackboard is more of an architectural feature rather than a decision process. Damián confirmed that the history of blackboards involved planners but that we are now using them inside reactive systems as well.
Blackboards vs. Lots of Static Variables
At Alex’s prompting, Damián suggested that the blackboard is far more dynamic than having just many billions of values specified in a character.h file. In fact, he likened it much more to having one unified interface to all of your game data beyond just that of the character in question.
Do all agents need their own unique blackboard?
I like the fact that Damián’s initial answer was a refrain that I repeated throughout my book… “it totally depends on the game you’re making.” Unfortunately, that is a major stumbling block to answering any architectural or procedural question.
He went on to say something similar to what I mention above… that you have to mix them. There are some pieces of information that individuals track and others that are available to the group as a whole. Visibility of targets, for example, is individual. Goals for a squad, on the other hand, are something that can be shared and referenced.
Direct Query vs. Blackboard?
The most important point Damián made here had to do with “redundancy”. If you have a situation that you can guarantee you only need something once, then accessing it directly from a behavior is fine. If multiple behaviors might use the same data, access it once and store it on the blackboard.
The answer to avoiding the redundancy issue was “abstraction”. That’s what a blackboard represents. It gives that intermediate layer of aggregation and storage. He actually referred to it as “sub-contracting” the information gathering out to the blackboard system. The difference is that the blackboard isn’t simply passing on the request for information, it is actually storing the information as a data cache so that it doesn’t need to be re-accessed.
One very important point that he made was that there was some intelligence to the blackboard in deciding how often to ask for updates from the world. This is a huge advantage in that the process of information gathering for decisions can be one of the major bottlenecks in the AI process. LOS checks, for example, are horribly time-consuming. If your system must ask for all the LOS checks and other information every time a BT is run (or multiple times in the same BT), there can be a significant drain on the system. However, if you are able to time-slice the information gathering in the background, the only thing the BT needs to do is access what is on the blackboard at that moment.
Incidentally, this is a wonderful way of implementing threading in your games. If your information gathering can continue on its own timeline and the behaviors need only grab the information that is current on the blackboard, those two processes can be on separate threads with only the occasional lock as the blackboard is being updated with new info.
This goes back to my interview with Alex about a year ago where he asked about the scalability of the techniques I write about in my book. My point was that you don’t have to access all the information every time. By separating the two processes out with this abstraction layer as the hand-off point, it keeps the actual decision system from getting bogged down.
In order to also help facilitate this, Damián spoke of the way that the information gathering can be prioritized. Using LOS checks as his example, he talked about how the Halo engine would update LOS checks on active, armed, enemies every 2 or 3 frames, but update LOS checks on “interesting but not urgent” things like dead bodies every 50 frames. Sure, it is nice for the AI to react to coming around a corner and seeing the body, but we don’t need to constantly check for it.
Compare this to a BT where a node “react to dead body” would be checked along with everything else or (with a different design) only after all combat has ceased and the BT falls through the combat nodes to the “react…” one. At that point, the BT is deciding how often to check simply by its design. In the blackboard architecture, the blackboard handles the updates on what the agent knows and the BT handles if and how it reacts to the information.
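Here is one way that update-rate intelligence might be sketched, borrowing the 2-frame vs. 50-frame intervals from the Halo anecdote above; everything else is my own assumption. The blackboard owns the refresh policy, and behaviors just read whatever is current:

struct TrackedFact {
    int  refreshInterval;        // frames between updates
    int  lastUpdateFrame = -1;
    bool value           = false;   // e.g. "is visible"

    void MaybeUpdate(int frame, bool (*expensiveCheck)()) {
        if (frame - lastUpdateFrame >= refreshInterval) {
            value = expensiveCheck();   // the raycast lives here, nowhere else
            lastUpdateFrame = frame;
        }
    }
};

// An armed enemy gets checked every 2 frames, a dead body every 50:
// TrackedFact enemyLOS{2}, bodyLOS{50};

Incidentally, because the behaviors only ever read the cached values, this is also where the threading split described above falls out naturally.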
Chicken or Egg?
Damián talked about how the KR module and the decision module do, indeed, need to be built in concert since information and decisions are mutually dependent. However, he talked about how the iterative process is inherently a “needs-based” design. That is, he would only write the KR modules necessary to feed the decisions that are actually going to use the information. This is, of course, iterative design at its very core (and how I have always preferred to work anyway). While you might first identify a decision that needs to be coded, you need to then put much of that on hold until you have put together all of the KR implementation that will feed the decision. If you then add future decision processes that use that same blackboard, more power to you. (None of this trumps the idea that you need to plan ahead so that you don’t end up with a mish-mash of stuff.)
As mentioned before, what you put into the KR blackboard is very dependent on the game design. It goes beyond just what knowledge you are storing, however. Damián specifically mentioned that he tries to put as much “smarts” into the KR level as possible. This has the effect of lifting that burden from the decision process, of course.
Are There Exceptions?
Alex asked the question if there would be cases that a behavior engine (such as the BT) would directly access something in the game data rather than looking it up in the KR/blackboard level. Damián cautioned that while you could make the case for doing that occasionally, you would really have to have a good reason to do so. Alex’s example was LOS checks which, unfortunately, is also the wrong time to step outside of the blackboard since LOS checks are such a bottleneck. Damián’s emphasis was that these sorts of exceptions step outside the “smarts” of the KR system… in this case how the KR was spreading out the LOS checks to avoid spikes.
Another example was pathfinding. He said a developer might be tempted to write a behavior that kicks off its own pathfind routine. That’s generally a bad idea for the same bottleneck reasons.
More than Information
I really liked Damián’s exposition on one example of how Halo used more than simple LOS checks. He explained the concept of “visibility” as defined in Halo, where the algorithm that fed the blackboard took into account ideas such as distance, the agent’s perceptual abilities, and the amount of fog in the space at any given time. This was so much more than a LOS check. All the behaviors in the BT could then use “visibility” as a decision-making criterion. I haven’t seen a copy of the Halo 3 BT, but I can imagine that there were many different nodes that used visibility as an input. It sure is nice to do all of this (including the expensive LOS checks) once per n frames and simply store it for later use as needed. Again, this is very similar to what I espouse in my book and in utility-based reasoners in general.
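Halo's actual formula wasn't given in the interview, so here is a purely hypothetical sketch of what a composite "visibility" value could look like; the factors and weighting are invented for illustration:

    #include <algorithm>

    struct VisibilityInputs {
        bool  losClear;        // the expensive raycast, done once per n frames
        float distance;        // distance to the target
        float perceptionRange; // how far this agent can perceive at all
        float fogDensity;      // 0 = clear air, 1 = opaque fog
    };

    // Returns 0..1; each behavior compares this against its own threshold.
    float computeVisibility(const VisibilityInputs& in) {
        if (!in.losClear || in.distance > in.perceptionRange) return 0.f;
        float distanceFactor = 1.f - (in.distance / in.perceptionRange);
        float fogFactor      = 1.f - in.fogDensity;
        return std::clamp(distanceFactor * fogFactor, 0.f, 1.f);
    }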
Dividing up the KR
He described something interesting about how you could have a manager that assigns how many LOS checks each AI gets and then, once the AI knows how many it will get, the AI prioritizes its potential uses and divvies them up according to its own needs. Rather than having one manager field priority requests from all the AIs at once, the first cut would simply give each of them a few checks (which could itself involve some interesting prioritization) and then let each AI decide what to do with the ones it gets. I thought that was a very novel way of doing things.
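Here is one possible shape for that two-level scheme; the budget math and all the names are illustrative, not from the interview:

    #include <algorithm>
    #include <vector>

    struct LOSRequest { int targetId; float priority; };

    // Manager side: a naive even split of this frame's total raycast budget.
    int budgetPerAgent(int totalChecksThisFrame, int agentCount) {
        return agentCount > 0 ? totalChecksThisFrame / agentCount : 0;
    }

    // Agent side: spend the budget on the most important targets first.
    std::vector<int> chooseTargetsToCheck(std::vector<LOSRequest> wants, int budget) {
        std::sort(wants.begin(), wants.end(),
                  [](const LOSRequest& a, const LOSRequest& b) {
                      return a.priority > b.priority;
                  });
        std::vector<int> chosen;
        for (int i = 0; i < budget && i < (int)wants.size(); ++i)
            chosen.push_back(wants[i].targetId);
        return chosen;
    }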
What Does it Look Like?
In response to a question about what the blackboard data structure looks like, Damián acknowledged that people think about blackboards in two different ways. One is just a place to scribble shared data of some sort. The other, more formal notion is based on the idea of key/value pairs. He prefers the latter because you can do easy logging, etc. For more high-performance situations (e.g., an FPS), there really isn’t much need for key/value pairs, so more efficient methods, such as looking up the information in a struct, may be preferable.
He went on to point out that the size and speed trade-off is likely one of the more important considerations. If an agent at any one time may only care about 5-10 pieces of data, why set aside a whole 500-item struct in memory? Also, key/value pairs and hash tables aren’t necessarily more expressive than a hard-coded struct. I would tend to agree with this. So much of what the data says is in what it is associated with (i.e., the other elements of the struct) and the code around it.
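In miniature, the two shapes being contrasted might look like this; both are generic sketches, not anyone's shipping code:

    #include <string>
    #include <unordered_map>

    // Option 1: generic key/value store. Easy to log and inspect generically,
    // but pays for hashing and loses compile-time knowledge of each entry.
    using GenericBlackboard = std::unordered_map<std::string, float>;

    // Option 2: hard-coded struct. Each lookup is a direct member access, and
    // the meaning of each value is baked into its name and surrounding code.
    struct CombatBlackboard {
        float targetVisibility = 0.f;
        float targetDistance   = 0.f;
        float ammoFraction     = 1.f;
    };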
In Halo, they were on the hard-coded side of things because there wasn’t too much data that they needed to store and access. In general, the KR of what you need to access will tend to stabilize long before the behaviors do.
He also explained the typical genesis of a new KR routine. Often, it happens through refactoring after you find yourself using a particular algorithm in many locations. If this happens, it can often be abstracted into the KR layer. This is the same thing I have found in my own work.
One extension he mentioned was augmenting key/value pairs with a confidence rating in case you wanted to do more probabilistic computations. You could iterate over the information, for example, and apply decay rates. Of course, you could also do that in a hard-coded struct. (I was thinking this before he said it.) To me, adding a manager to deal with the semantics of key/value/confidence sets might introduce more trouble than it is worth. Why not put together a vector of structs that process the same information? This goes to a point of how you can divide your KR into smaller, specifically-functional chunks.
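For example, a confidence field on a hard-coded struct plus a decay function could look something like this sketch; the half-life constant is arbitrary:

    #include <cmath>

    struct Fact {
        float value      = 0.f;
        float confidence = 0.f;  // 1 = just observed, decaying toward 0
    };

    // Exponential decay: confidence halves every halfLifeSeconds.
    void decayConfidence(Fact& fact, float dt, float halfLifeSeconds = 2.f) {
        fact.confidence *= std::pow(0.5f, dt / halfLifeSeconds);
    }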
Intelligent Blackboards
An interesting question led to something that I feel more at home with. Someone asked about blackboards collecting info, processing some of it, and writing the results back to the blackboard to be read by the agent and/or other blackboard processors. Damián agreed that a modular system could certainly work where multiple small reasoners touch the same data store… not only from a read standpoint, but from a write one as well. This is very intuitive to me and, in a way, echoes some of the things that I am doing in Airline Traffic Manager (sorry, no details at this time).
Damián confessed that his search demo from the 2009 AI Summit did exactly this. The process that updated the occupancy map was a module hanging off the blackboard. The blackboard itself was solely concerned with storing the raw data of what was seen and unseen. The reasoner processed this data and wrote it back to the blackboard in the form of a probabilistic mapping over those areas. The agent, of course, looked at that mapping and selected its next search location accordingly. (BTW, influence mapping of all kinds is a great use for this method of updating information.)
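Reconstructing the rough shape of that loop (this is my guess at the structure, not Damián's demo code): one module writes the raw seen/unseen layer, a reasoner converts it to probabilities and writes those back, and the agent only ever reads the processed layer.

    #include <algorithm>
    #include <vector>

    struct SearchBlackboard {
        std::vector<bool>  currentlySeen;     // raw layer: written by perception
        std::vector<float> targetProbability; // processed layer: written by the reasoner
    };

    // The reasoner module: reads the raw layer, writes the probabilistic layer.
    // Assumes both vectors are the same size.
    void updateOccupancy(SearchBlackboard& bb) {
        float unseenCount = 0.f;
        for (bool seen : bb.currentlySeen)
            if (!seen) unseenCount += 1.f;
        for (size_t i = 0; i < bb.currentlySeen.size(); ++i)
            bb.targetProbability[i] =
                bb.currentlySeen[i] ? 0.f : 1.f / std::max(unseenCount, 1.f);
    }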
Meta-Representation
Damián summed up that the overall goal of the blackboard (and, in my opinion, KR in general) is “meta-representation”: not just that data exists, but what that data really means. What it means is entirely dependent on context. The blackboard simply stores these representations in a contextually significant way so that they can be accessed by agents and tools that need to use and respond to that information.
What This Means to Me
I really find this approach important, and I am a little startled that I started using many of these concepts on my own without knowing what they were called. One of the reasons that I very much support this work, however, is because it is key to something that I have unwittingly become a part of.
In his article entitled “Predictions in Retrospect, Trends, Key Moments and Controversies of 2009!”, Alex said the following:

Utility-based architectures describe a whole decision-making system that chooses actions based on individual floating-point numbers that indicate value. (At least that’s as close to an official definition as I can come up with.) Utility in itself isn’t new, and you’ll no doubt remember using it as a voting or scoring system for specific problems like threat selection, target selection, etc. What’s new in 2009 is:

1. There’s now an agreed-upon name for this architecture: utility-based, which is much more reflective of how it works. Previous names, such as the “Goal-Based Architectures” that Kevin Dill used, were particularly overloaded already.

2. A group of developers advocate building entire architectures around utility, and not only sprinkling these old-school scoring-systems around your AI as you need them.

The second point is probably the most controversial. That said, there are entire middleware engines, such as motivational graphs which have been effective in military training programs, and Spir.Ops’ drive-based engine applied in other virtual simulations. The discussion about applicability to games is worth another article in itself, and the debate will certainly continue into 2010!

It’s obvious to many people that I am one of the members of that “group of developers” who “advocate building entire architectures around utility”. After all, my book dealt significantly with utility modeling. I am also playing the role of the “utility zealot” in a panel at the 2010 GDC AI Summit specifically geared toward helping people decide what architecture is the best for any given job.

While utility modeling has often been used as a way of helping to sculpt and prioritize decisions in other architectures such as FSMs, BTs, or the edge weights in planners, many people (like Alex) are skeptical of building entire reasoner-based architectures out of it. What Damián explained in this interview is a major part of the solution. Much of the utility-based processing can be done at the KR/blackboard level (even across multiple threads) to lighten the load on the decision structure. As more of the reasoning gets moved into the KR by simply prepping the numbers (such as how “visibility” represented a combination of factors), less has to happen as part of the decision structure itself.
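To make that concrete with a tiny, made-up example: if the KR layer has already boiled its inputs down to a few normalized numbers, the utility calculation left for the decision structure can be almost trivial. The formula here is invented, not from the interview:

    struct KRSnapshot {
        float visibility;   // already combines LOS, distance, fog, perception
        float threat;       // 0..1, prepped by another KR module
        float ammoFraction; // 0..1
    };

    // With the expensive work done up front, scoring an action is cheap.
    float attackUtility(const KRSnapshot& kr) {
        return kr.visibility * kr.threat * kr.ammoFraction;
    }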
Kevin Dill and I will be speaking more about this in our lecture at the AI Summit, Improving AI Decision Modeling Through Utility Theory. I hope to be writing more about it in the future as well. After all, according to Alex, it is one of the emerging trends of 2010!

Post-GDC Ramblings

Monday, March 30th, 2009

Well, I’m back and somewhat recovered from GDC. (It always helps to have a day of downtime built into the end of the week.)

From the comments that I and the rest of the participants received, the inaugural AI Summit was well received. I know that all of us were very pleased not only with the presentations that we each delivered but with all of the others as well. Apart from a false start at the beginning due to my laptop being under the proverbial weather with a virus, the rest of the two days went off smoothly.

I will post more of my reflections on each of the Summit sessions throughout the week. I did want to touch on a couple of high notes, however. We were very proud (as a group) to be able to deliver such a wide variety of topics. From animation to pathfinding to behavior to knowledge representation to layered goals and multi-threaded architecture, we hit a lot of the key topics. I think this was the comment that I heard the most… that there was a little bit of everything. Additionally, many people commented on how we mixed some past techniques with cutting-edge stuff and even some blue-sky ponderings (“Human AI” and “Photoshop of AI“). People also liked the sessions that weren’t specifically technical, such as the one on how to get along with designers.

For those that want to take a look at one man’s views on it, Dan Kline did another of his “live blogging” exercises over at his pad, Game of Design. (Day 1 | Day 2)

In other GDC news, after the Summit much of the week was anti-climactic. There were the three normal AI roundtables as well as one run by Alexander Nareyek. I will be posting pictures and audio from the roundtables on this page. You can also check out last year’s stuff here. Eventually, I will have the pictures from the AI Game Programmers Guild dinner (Sunday) and the regular annual AI Programmers Dinner (Friday) up as well. (Once I saw how dedicated to taking photos Petra Champandard of AIGameDev was, I figured I would let her do most of the shooting. I will link to those pictures as they become available.)

Other than that, I only went to three sessions, one of which could actually be co-opted into an AI session: it was on balancing multi-player games. I figure this is an important facet of constructing AI as well, for obvious reasons. I also went to a roundtable hosted by Ben Sawyer about exploring emerging markets in games.

Peter Molyneux’s lecture on how Lionhead explores experimental stuff was surprisingly lame for a Molyneux talk. I was really hoping to see more of where they are going right now; I thought it was going to be a sneak peek session. (I should have suspected something when his PR handler was nowhere to be seen.) The only amusing moment was when he almost let out the name of the project… although it is likely no one would have gotten much out of simply a name. Oh well.

I did spend a lot of time on the Expo floor. Much of that time was spent nosing around my publisher’s booth. I guess I sold quite a few books. The GDC store sold out of the 12 that they brought. Additionally, my publisher sold quite a few from their booth. Many of those sales happened while I was there. It took me by surprise to have people ask me to sign their copies. To be honest, it was more of an honor for me to be asked than I figure it was for them to receive a little of my ink. All I asked of them was to post a review out on Amazon when they got done. That would mean a lot to me (and the other people who might be interested in buying it).

Anyway, I plan on writing a bit more after I finally get my laptop cleared up. (Not looking good right now.) If you are coming into this post directly, you may want to check the tags below to see if I have written anything further about the Summit or GDC 2009.

Thoughts Before the GDC AI Summit

Thursday, March 19th, 2009

I have been busily preparing all sorts of stuff at the last minute for the upcoming AI Summit at GDC. Having been involved since the initial discussions started at the last GDC, it has been interesting watching it grow.

The Summit is being put on by the newly formed AI Game Programmers Guild. As such, there are plenty of really sharp people involved. What was very striking, however, was how many times we all made comments expressing how interested we were in going to each others’ sessions! Theoretically, we would put this Summit on for our own benefit even if there were no attendees at all! (Although I believe that the GDC folks would not be terribly pleased by that prospect.) Seriously, we could easily have filled the entire week with the information that we wanted to exchange. I, for one, know that I will be at every single AI Summit session with rapt attention. I am even looking forward to hearing what my own co-lecturers, Phil Carlisle and Richard Evans, have to say in our session, “Breaking the Cookie-Cutter: Modeling Individual Personality, Mood, and Emotion in Characters“… and I have already looked at their slides!

One takeaway from that observation is that we will be talking about a lot of really nifty AI stuff. That much is obvious. Another takeaway, however, is that none of us… even the alleged “experts”… knows everything there is to know about AI. We all want to experience, learn, and expand. That desire comes from the somewhat discomforting awareness that there is a vast expanse of potential laid out before us. As the saying goes, “the more I learn, the more I learn how much I have to learn!” 

I think that will be the underlying theme next week… not just at the AI Summit, but at the entire conference. Sure, there are students and… *ahem*… n00bs at the conference, but there are plenty of seasoned veterans sitting in the audience rather than standing behind the podium or sitting at a panel table. Why? There is plenty more we can do to advance ourselves and, by association, our trade.

Gamasutra Article on Intelligent Mistakes

Wednesday, March 18th, 2009

There’s a nice, if incomplete, article on Gamasutra today by Neversoft’s Mick West titled Intelligent Mistakes: How to Incorporate Stupidity Into Your AI Code. It’s not a new subject, certainly, but what caught my eye was the fact that he used the game of Poker as one of his examples.

In the first chapter of my book, “Behavioral Mathematics for Game AI,” I actually use Poker as a sort of “jumping off point” for a discussion on why decision-making in AI is an important part of game design. I compare it to games such as Tic-Tac-Toe, where the only decision is “if you want to not lose, move here”; Rock-Paper-Scissors, where (for most people) the decision is essentially random; and Blackjack, where your opponent (the dealer) has specific rules that he has to play by (i.e., hit below 17, stand on 17 or above).
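That dealer policy is so rigid it fits in one line of code (a trivial sketch, ignoring soft-17 house variations):

    bool dealerShouldHit(int handTotal) {
        return handTotal < 17;  // hit below 17, stand on 17 or above
    }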

Poker, on the other hand (<- that's a joke), is interesting because your intelligent decisions have to incorporate the simple fact that the other players are making intelligent decisions as well. It's not enough to look at your own hand and your own odds and make a decision from that. You have to take into account what the other player is doing and attempt to ferret out what he might be thinking. Conversely, your opponent must have the same perceptions of you and what it is you might be thinking and, therefore, holding in your hand. Thankfully, there is no “perfect solution” that a Poker player can follow, due to the imperfect information. However, in other games, there are “best options” that can be selected. If our agents always select the best options, not only do we run the risk of making them too difficult (“Boom! Head shot!”), but they also all tend to look exactly the same. After all, put 100 people in the exact same complex situation and there will be plenty of different reactions. In fact, very few of them will choose what might be “the best” option. (I cover this extensively in Chapter 6 of my book, “Rational vs. Irrational Behavior.”)

Our computerized AI algorithms, however, excel at determining “the best” something, whether it be the angle for a snooker shot (the author’s other example), a head shot, or the “shortest path” as determined by A*. After all, that is A*’s selling point… “it is guaranteed to return the shortest path if one exists.” Congratulations… you are inhumanly perfect. Therefore, as the author points out in his article, generating intelligent-looking mistakes is a necessary challenge. Thankfully, in later chapters of “Behavioral Mathematics…” I propose a number of solutions to this problem that can be easily implemented.
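One common way to generate those intelligent-looking mistakes (my sketch here, not Mick West's code, and only one approach among several) is to score every option and then select with probability weighted by score, rather than always taking the maximum:

    #include <random>
    #include <vector>

    // Scores in, index out: better options are more likely but never guaranteed,
    // so perfect play happens often without happening every single time.
    int pickWeighted(const std::vector<float>& scores) {
        static std::mt19937 rng{std::random_device{}()};
        std::discrete_distribution<int> dist(scores.begin(), scores.end());
        return dist(rng);
    }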

Anyway, I find it an interesting quandary that we have to approach behavior from both sides. That is, how do we make our AI more intelligent and, at the same time, how do we make our AI less accurate? Kind of an odd position to be in, isn't it?
