IA on AI


Euphoria over Bad AI in Backbreaker?

May 26th, 2010

I was looking at a review of the new football game, Backbreaker, by the blogger Pastapadre, and I found an interesting combination of observations. First, for those who don’t know, Backbreaker is a football game developed by NaturalMotion, the studio known first and foremost for its Euphoria physics engine, which creates contextually realistic human body motion. Seeing as one of the biggest complaints about sports games—and football ones in particular—is that the human body physics begins to look canned and repetitive, you would think NaturalMotion had a bit of a head start in that area. The problem is, that isn’t all people gripe about with football games.

While I commend NaturalMotion for attempting to move things forward in this area, there are plenty of things that need to be addressed, if not solved, if the genre is to advance further. Physics isn’t necessarily on the top of the list. But hey, that’s what they do.

…coordinating 11 people being interfered with by 11 other people is a tall order.

This is particularly close to my heart because I’m an AI designer and a huge football fan. I am especially fond of football because of the deep intricacy of the team-based strategy that has to happen on every play. Of course, this is exactly the issue that is the hardest to address from an AI standpoint. Sports games—again, football ones in particular—are ridiculously hard to craft good AI for. For an industry that struggles to put together FPS squad tactics for 2 or 3 people, coordinating 11 people supposedly working together while being interfered with by 11 other people who are also working together is a tall order. The Madden franchise has been doing a passable job of this for some time. Sure, there are golden paths that bubble to the surface all the time, but those seem to be fewer and farther between.

Anyway, in this review, the author points out some interesting frustrations. He addresses them briefly in the first paragraph, which I believe summarizes things well (emphasis mine):

Reaction has been mixed with most gamers enjoying the Euphoria physics, polarization on the single camera angle, and the troubling CPU AI leading to the most concern.

(Brief aside: Who uses “CPU AI”? Not only is that redundant, it says the same thing twice.)

I will skip over his impressions of how Euphoria works. If you want to know all that happy-happy stuff, you can watch a Euphoria sales reel. I will address the AI-specific stuff. He goes on to comment about some of the specifics of how the AI falls flat on its face (emphasis mine).

The offensive output by the CPU has been pitiful. I’ve yet to give up more than a couple first downs on a single drive and still haven’t been scored on. The biggest reason is that the CPU turns the ball over a lot. In four minutes of gameplay it’s been close to an average of three picks thrown by the CPU. In the final demo video I posted I had three picks in three drives off the best offensive team in the game. That was with me being out of the play in all instances and the CPU just making bad passes.

This is summed up by the clincher, which is my point here:

No matter how great the physics are I would not be able to play a football game if the CPU throws 10+ picks each time out.

No matter how great the physics are I would not be able to play a football game…

This certainly seems to be an example of tunnel-vision on a pet feature while ignoring (or being incapable of addressing) the rest of the game that people actually want to experience. Is this a Euphoria sales demo or a football game? This is something that is more prevalent in the game industry than we care to admit. It isn’t just Euphoria—or even physics—as the bad guy either. Swap in “game design”, “story”, “cool weapons”, “sexy chick outfits”, “huge environments” or whatever. AI is often the expression of your world. If your AI is broken, it severs the emotional connection to the game.

He continues:

The CPU goes with a jump pass way too often, whether it be springing forward or backwards, many times resulting in an interception. These aren’t instances where jumping to make a pass even would make some sense as the CPU would have been better off with their feet set.

Again, this apparently is broken decision logic. For those who don’t know football, in the pros a “jump pass” is a rare event used only in certain situations. Commentators will typically hammer on a QB for not throwing with his feet set. In fact, theoretically, you could do a football game without even including an animation for a jump pass and no one would really notice all that much. Therefore, for the author to notice that this is happening too often is rather telling.
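To put that in code terms (a purely hypothetical sketch—every function and parameter name here is my invention, not anything from Backbreaker): a rare, low-percentage action like a jump pass should sit behind strict preconditions so it only fires in the few situations where it makes sense.

```python
def should_jump_pass(pressure, throwing_lane_blocked, feet_set_possible):
    """A jump pass only makes sense when the throwing lane is blocked
    AND the QB cannot simply get his feet set in time."""
    if not throwing_lane_blocked:
        return False       # with a clean lane, always throw with feet set
    if feet_set_possible:
        return False       # resetting the feet is almost always the better play
    return pressure > 0.8  # desperation threshold keeps the action rare

print(should_jump_pass(0.9, True, False))  # True: genuine desperation case
print(should_jump_pass(0.9, True, True))   # False: a feet-set throw is available
```

The particular thresholds don’t matter; the point is that a broken gate like this is exactly the kind of static decision logic that produces the “way too often” behavior the reviewer describes.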

More:

The CPU defensive back AI has been terrible in instances where they aren’t running in stride. When they continue to run in stride they seem to play the ball pretty well. If they stop (like on a comeback route or a pass lobbed up for grabs) they’ll start to go the wrong way, make a terrible attempt at the ball, or just stand there. Several times I’ve completed passes with multiple defenders in the area who played the ball horribly wrong. They’ve just stood there and watched the ball go over their heads or watched the receiver make an easy catch.

Again, I’m guessing this is either laziness on the part of the development team, not knowing football, or an inability to solve the problem. I hope it is the last. The second is not acceptable if you are actually making a football game. The first… well…

A few more. Apparently it is not just the player AI that is troubling:

Penalties have been really iffy to say the least. I’ve seen roughing the passer called in multiple situations when it shouldn’t have been. I have seen a pass interference [im]properly called in two instances, called once when there was clearly no interference, and in several other situations seen receivers taken completely out before the ball arrives and no penalty called. There also seems to be an issue with roughing the kicker (primarily on punts) where your CPU controlled players commit the penalty way too often and out of the user’s control. I haven’t seen this one much but it has been widely reported.

So this has to do with the logic for detecting penalty situations. These should be, in effect, simple rule-based systems. For example,

if (contact(player, receiver)
    && !TouchBall(receiver)
    && CatchableBall(receiver)
    && (BallThrown() || ReceiverDownField()))
{
    FlagPenalty(PASS_INTERFERENCE);
}

If you are botching up your static rule-based systems, then doing the contextual player-reaction AI is going to be a bitch.

Naturally, bad AI tends to lead to exploits:

Exploits have already been found with QB sneaks and the blocking of punts and field goals. These things could really damage the online play experience. The QB sneak problem, combined with the ability to no-huddle because of the lack of fatigue and not having to worry about injuries, could ruin online play. If blocked punts and kicks are prevalent online everyone will end up going for it on 4th downs.

If there is an obvious dominant strategy, you have now taken Sid Meier’s “interesting choices” and condensed them down into “choose this to win”.

This is the natural result… and is always a game-killer. If there is an obvious dominant strategy, you have now taken Sid Meier’s “interesting choices” and condensed them down into “choose this to win”. Many games with bad AI could still thrive in the online world. However, in a game where you only control 9% of your team and are entirely dependent on the other 91% for success, you can do all the right things and still get rolled. That is not fun.

My point with all of this really has very little to do with the game itself and even less to do with the Euphoria engine. In fact, a quick browse through YouTube shows that there are some people who think the AI is just fine (although watching the videos and descriptions shows that people don’t really know what AI is or what good AI might look like). That being said, your mileage may vary. My point is the juxtaposition of the two observations the author was making: you need more than pretty physics to make a compelling game.

This is really only a modified version of the graphics vs. AI debates.

This is really only a modified version of the graphics vs. AI debates. Originally, studios made pretty games with bad AI (and even bad physics). Now we seem to have moved on to making better physics… and with products like Euphoria, even better physics that take the load off of AI programmers trying to figure out what human reactions should be. None of that solves stupid AI play, though. And until we do that, we are going to be seeing otherwise decent games get shelved.


Fritz Heckel’s Reactive Teaming

May 25th, 2010

Fritz Heckel, a PhD student in the Games + Learning Group at UNC Charlotte, posted a video (below) on the research he has been doing under the supervision of G. Michael Youngblood. He has been working on using subsumption architectures to create coordination among multiple game agents.

When the video first started, I was a bit confused in that he was simply explaining an FSM. However, when the first character shared a state with the second one, I was a little more interested. Still, this isn’t necessarily the highlight of the video. As more characters were added, they split the goal of looking for a single item among themselves by partitioning the search space.

This behavior certainly could be used in games… for example, with guards searching for the player. However, this can be solved with other architectures. Even something as simple as influence mapping could handle it. In fact, Damián Isla’s occupancy maps could be tweaked accordingly to allow for multiple agents in a very life-like way. I don’t know what Fritz is using under the hood, but I have to wonder if it isn’t more complicated than it needs to be.
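As a rough illustration of that alternative (a minimal sketch of the general idea, not Isla’s actual implementation): a shared occupancy map assigns each cell a probability that the target is there, and a group search falls out of simply handing each agent the best unclaimed cell.

```python
def assign_search_cells(occupancy, agents):
    """occupancy: dict mapping cell -> probability the target is there.
    Greedily give each agent the highest-probability unclaimed cell."""
    ranked = sorted(occupancy, key=occupancy.get, reverse=True)
    return dict(zip(agents, ranked))

# Four cells; the target is most likely at (1, 0), then (0, 1).
occupancy = {(0, 0): 0.1, (1, 0): 0.4, (0, 1): 0.3, (1, 1): 0.2}
print(assign_search_cells(occupancy, ["guard_a", "guard_b"]))
# {'guard_a': (1, 0), 'guard_b': (0, 1)}
```

As the target isn’t found, probabilities get redistributed and the assignment is simply re-run, which is what produces the life-like “spreading out” behavior.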

Obviously, his searching example was just a simple one. He wasn’t setting out to design something that allowed people to share a searching goal, per se. He was creating an architecture for cooperation. This, too, has been done in a variety of ways. Notably, Jeff Orkin’s GOAP architecture from F.E.A.R. did a lot of squad coordination that was very robust. Many sports simulations do cooperation — but that tends to be more playbook-driven. Fritz seems to be doing it on the fly without any sort of pre-conceived plan or even pre-known methods by the eventual participants.

From a game standpoint, it seems that this is an unnecessary complication.

In a way, it seems that the goal itself is somewhat viral from one agent to the next. That is, one agent in effect explains what it is that he needs the others to do and then parcels it out accordingly. From a game standpoint, it seems that this is an unnecessary complication. Since most of the game agents would be built on the same codebase, they would already have the knowledge of how to do a task. At this point, it would simply be a matter of having one agent tell the other “I need this done,” so that the appropriate behavior gets switched on. And now we’re back to Orkin’s cooperative GOAP system.
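In code, the “I need this done” approach really is that small (a hypothetical sketch; the class and task names are invented for illustration). Because every agent shares the same codebase, delegation only has to transmit *that* a task is needed, never *how* to do it.

```python
class Agent:
    def __init__(self, name):
        self.name = name
        self.current_task = "idle"

    def request(self, other, task):
        # Only the task name crosses the wire; the receiver already
        # knows how to perform it because it shares the codebase.
        other.current_task = task

leader, helper = Agent("leader"), Agent("helper")
leader.request(helper, "search_sector")
print(helper.current_task)  # search_sector
```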

On the whole, a subsumption architecture is an odd choice. Alex Champandard of AIGameDev pointed out via Twitter:

@fwph Who uses subsumption for games these days though? Did anyone use it in the past for that matter?

That’s an interesting point. I have to wonder if, as is the case at times with academic research, it is not a case of picking a tool first and then seeing if you can invent a problem to solve with it. To me, a subsumption architecture seems like it is simply the layered approach of an HFSM married with the modularity of a planner. In fact, there has been a lot of buzz in recent years about hierarchical planning anyway. What are the differences… or the similarities, for that matter?
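For readers who haven’t run into the term: the core of a subsumption architecture is a priority-ordered stack of behaviors where a higher layer that fires “subsumes” (overrides) everything below it. A toy sketch (my own illustration, not Heckel’s system):

```python
def subsumption_step(layers, world):
    """layers: list of (predicate, action) pairs, highest priority first.
    The first layer whose predicate fires overrides all layers below it."""
    for predicate, action in layers:
        if predicate(world):
            return action
    return "idle"

layers = [
    (lambda w: w["threat_nearby"], "flee"),       # top layer subsumes all
    (lambda w: w["goal_visible"], "approach_goal"),
    (lambda w: True, "wander"),                   # default bottom layer
]
print(subsumption_step(layers, {"threat_nearby": False, "goal_visible": True}))
# approach_goal
```

Written out like this, the resemblance to an HFSM’s layering and to a planner’s modular, swappable actions is easy to see.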

Regardless, it is an interesting, if short, demo. If this is what he submitted to present at AIIDE this fall, I will be interested in seeing more of it.


Raph Koster on Dynamic POIs

May 24th, 2010

Raph Koster, best known as the lead designer of Ultima Online and the creative director for Star Wars Galaxies, wrote an interesting post on his blog at the end of April. In the post, “Dynamic POIs”, he discusses how, in Star Wars Galaxies, they constructed a method for the computer to automatically generate content in the MMO world.

The term POI stands for point of interest and dates back to his UO days. At the time, those POIs were hand-placed by the design team. In a world the size of SWG, this was simply not feasible. He comments how even a room full of junior designers churning these encounters out was simply not enough. The proposed solution was to algorithmically generate these locations and fill them with life.

One of the complicating factors in this was that they weren’t simply placing the D&D staple of a wandering monster. They were generating a camp or facility full of these agents complete with scenery… and plot. Here’s a quote from the article that explains it better.

Don’t roll up just a bandit; roll up a little bandit campsite, with a tent, a campfire, three bandits, one of whom hates one of the others, a young bandit who isn’t actually a bad guy but has been sucked into the life because he has a young pregnant wife at home… In fact, maybe have an assortment of bandits — twenty possible ones maybe. Then pick three for your camp. That way you always get a flavorful but slightly different experience.

So what we are looking at is a random type of encounter with a random population, in a random location. So far that’s pretty groovy. If you want a description of what it was like, SWG producer Haden Blackman writes about his own encounter with a dynamic POI.

One advantage that they had with SWG that didn’t exist with UO is that the SWG map was procedurally generated in the first place. That made generating a random encounter a simple extension of the map generation. They could create and destroy these places simply by making sure there was no one around.
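The recipe from Raph’s quote above translates almost directly into code. This is only a toy sketch of the idea (the pool size of twenty comes from the quote; the names and data structure are my invention):

```python
import random

BANDIT_POOL = [f"bandit_variant_{i:02d}" for i in range(20)]  # "twenty possible ones"
CAMP_PROPS = ["tent", "campfire"]

def roll_bandit_camp(rng):
    """Fixed scenery plus three distinct bandits drawn from the pool,
    so every camp is flavorful but slightly different."""
    return {"props": list(CAMP_PROPS), "bandits": rng.sample(BANDIT_POOL, 3)}

camp = roll_bandit_camp(random.Random(42))
print(camp["props"])    # ['tent', 'campfire']
print(camp["bandits"])  # three distinct bandit variants
```

The hard part, as Raph notes below, was never this rolling step; it was making the scripting and plot hooks inside the camp equally data-driven.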

…it was just as hard to create the dynamic content as it was to create static content.

In order to provide some variety, these encounters could also be designed modularly. The actors might be different, or they may send you on different missions than a similarly constructed encounter elsewhere. The combinatorial complexity of available encounters rises fairly quickly that way, of course. Unfortunately, Raph says they weren’t quite able to make this whole process completely data-driven. The result was that it was just as hard to create the dynamic content as it was to create static content. In particular, the scripting of the encounters was tricky. In the end, dynamic POIs were pulled back out of the game.

He hasn’t written off the entire idea yet, however. Neither have I. In fact, I spoke about it a little at GDC Austin in 2009. (Get my slides here.) Raph was not able to attend that lecture (for which I believe Sheri Graner Ray gave him a tongue-lashing). I don’t pretend to solve the problem of dynamic content to the extent that Raph writes about. The point I addressed was that there are ways for AI-controlled NPCs to dynamically deal with a shifting population and environment that can lead to more expressive encounters game-wide.

Anyway, with the advent of dynamic pacing in Left 4 Dead via the much-referenced AI Director, there is a lot more attention being paid to how we can break out of entirely hand-placed, hand-paced, and often linear content. Additionally, while the public’s appetite for “sandbox” worlds is increasing (thanks in no small part to games like GTA4), the industry can’t sustain GTA4-like budgets of $100 million for very long. If the procedural content issue were solved (or at least furthered along somewhat), then we could satisfy both the demands of our players and the restrictions of budget.

One place to look might be the work being done by interactive fiction writer and programmer Emily Short, along with the work that is continually being done by Michael Mateas and the Expressive Intelligence Studio at UC Santa Cruz. Someplace in there is a hybrid of dynamic, data-driven storytelling that we can eventually work with to create truly open-ended content.

We can only hope… and work.


Migration Completed

May 24th, 2010

The web host migration and subsequent move from Blogger to WordPress is done.

Make sure you update any subscriptions to this blog back to this URL, as iaonai.intrinsicalgorithm.com is going away again.


Migration from Blogger to WordPress

May 1st, 2010

Well, after the migration to the “New Blogger”, I discovered that they won’t support my PHP includes, which let the blog fit into the rest of my site. Therefore, I am going to ditch Blogger entirely.

Having worked as an author on a couple of other blogs that use WordPress, I figured it was a decent solution. Therefore, I am going to install WordPress and see if I can get all my existing content over to it. Things may look ugly for a bit, but hopefully I can get it taken care of.

Oh… and a big thank you to Google/Blogger for screwing up dozens (if not scores) of hours of my work. Good work, folks.


Blog Migration

April 30th, 2010

Because Blogger is doing away with their FTP publishing as of May 1st, I have to convert this blog over to a new type of system with them. Unfortunately, because I have their custom code embedded in my Intrinsic Algorithm pages, this is likely not going to go smoothly. Hopefully, there will be very little down-time. Please accept our apologies if something goes amiss.


Reddit interviews Peter Norvig

March 2nd, 2010

Reddit interviews Peter Norvig – co-author of the seminal book Artificial Intelligence: A Modern Approach and currently Director of Research (formerly Director of Search Quality) at Google Inc. While not necessarily related directly to game AI, I thought it was interesting anyway.

Basically, the interview is a series of questions that people had submitted ahead of time.
One of the interesting questions was about the difference between “weak AI” and “strong AI” – which involved a definition of terms. He cited the common definition that “strong AI” is human-level problem solving AI. While people aren’t working directly on “strong AI” as such, many people are working on components that may very well lead to it in the future.

He wandered into a description of parallel computing and how it relates to layers of abstractions. I believe this part actually has some relevance to game AI in that we are really only beginning to deal well with splitting out computations into multiple streams. Blackboard architectures and similar ideas really allow for this. (See my commentary on Damián Isla’s interview re blackboards for more.)


Boston All-Stars Weigh in on AI

February 15th, 2010

Back in November, there was a get-together of Boston Post Mortem (billed as “games and grog, once a month”) that had a panel of local AI folks. The panelists’ names are familiar to many of us… Damián Isla, Jeff Orkin, and John Abercrombie. It was moderated by Christian Baekkelund, whom I had the opportunity to have dinner with in Philly when I was in town for the GameX Industry Summit. Thankfully, this panel was recorded by Darius Kazemi and posted on his Vimeo site and on the Boston Post Mortem page. I embed it here for simplicity’s sake.

Anyway, a few comments on the video:

You’re Doing it Wrong

The first question to the panel was “what do new AI developers do wrong” or something to that effect. Damián set up the idea of two competing mentalities… gameplay vs. realistic behavior. He and John both supported the notion that the game is key and that creating a system just for the sake of creating it can range anywhere from a waste of time to downright wrong.
…create autonomous characters and then let the designers create worlds…
The thing that caught me was Jeff’s response, though (5:48). His comment was that AI teams can’t force designers to be programmers through scripts, etc. That’s not their strength and not their job. While that’s all well and good, it was his next comment that got me cheering. He posited that it is the AI programmer’s job to create autonomous characters and then let the designers create worlds that those characters can do their thing in.
Obviously, it isn’t a one-way street there… the designer’s job isn’t to show off the AI. However, I like the idea of the designers not having to worry about implementing behavior at all — just asking for it from the AI programmer and putting the result into their world. John’s echo was that it’s nice to build autonomous characters but with overrides for the designers. It isn’t totally autonomous or totally scripted. This sounds like what he told me about his Bioshock experience when I talked to him a few years ago.
I happen to agree that the focus needs to be on autonomy first and then specific game cases later. The reason for this is too often the part of the AI that looks “dumb” or “wrong” is when the AI isn’t being told to do anything specific. For example, how often would you see a monster or soldier just standing there? Some of the great breakthroughs in this were from places like Crytek in Far Cry, Crysis, etc. The idea of purposeful-looking idle behaviors was a great boon to believable AI.

The other advantage to creating autonomy first was really fleshed out by Jeff Orkin’s work on F.E.A.R. (Post-Play’em) No more do designers (or even AI programmers) have to worry about telling an agent exactly what it should do in a situation. By creating an autonomous character, you can simply drop them in a situation and let them figure it out on their own. This can be done with a planner like Jeff did, a utility-based reasoner, or even a very detailed behavior tree. Like John said above, all you need to remember is to provide the override hooks in the system so that a designer can break an AI out of its autonomy and specifically say “do this” as an exception rather than hand-specifying each and every action.
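The “autonomy first, overrides second” pattern John describes boils down to a few lines. Here is a hypothetical sketch with an invented utility-style scorer (not anything from Bioshock or F.E.A.R.):

```python
class AutonomousAgent:
    def __init__(self):
        self.override_action = None  # designers set this for set pieces

    def choose_action(self, utilities):
        """utilities: dict mapping action name -> desirability score."""
        if self.override_action is not None:
            return self.override_action           # scripted exception wins
        return max(utilities, key=utilities.get)  # autonomous default

agent = AutonomousAgent()
print(agent.choose_action({"take_cover": 0.7, "charge": 0.3}))  # take_cover
agent.override_action = "charge"  # the designer's "do this" hook
print(agent.choose_action({"take_cover": 0.7, "charge": 0.3}))  # charge
```

The agent reasons for itself everywhere by default; the designer only touches the hook when a specific moment demands it.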
What’s in Our Way?
The next question was about “the biggest obstacle” for game AI moving forward. Jeff’s first answer was about authoring tools. This has been rehashed many times over. John expressed his frustration at having to start from scratch all the time (and his jealousy that Damián didn’t have to between Halo 2 and 3).
…to get your AI reviewed well, you need to invest in great animators.
Damián’s comment was amusing, however. He suggested that to get your AI reviewed well and have players say “that was good AI”, you need to invest in great animators. This somewhat reflects my comment in the 2009 AI Summit where I pointed out that AI programmers are in the middle of a pipeline with knowledge representation on one side and animation on the other. It doesn’t do any good to be able to generate 300 subtle behaviors if the art staff can only represent 30 of them.
On the other hand, he reiterated what the other guys said about authoring tools and not having to re-invent the wheel. He supports middleware for the basic tasks like A*, navmesh generation, etc. If we don’t have to spend time duplicating the simple tasks over and over, we can do far more innovation with what we have.
That’s similar to my thought process when I wrote my book, “Behavioral Mathematics for Game AI“. You won’t see a description of FSMs, A*, or most of the other common AI fare in the book. How many times has that been done already? What I wanted to focus on was how we can make AI work better through things other authors haven’t necessarily covered yet. (Ironically, it was Jeff Orkin who told me “I don’t think anyone has written a book like that. Many people need to read a book about that. Heck… I’d read a book about that!” — Thanks Jeff!)
What Makes the Shooter Shot?
The next question (11:45) was about what FPS-specific stuff they have to deal with.
When Halo 3 came out, they could afford fewer raycasts than Halo 2.
Damián talked about how their challenge was still perception models. They really tried to do very accurate stuff with that in Halo. He pointed out that raycasting is still the “bane” of his existence because it is so expensive still. Despite the processors being faster, geometry is far more complex. Alarming note: When Halo 3 came out, they could actually afford fewer raycasts than on Halo 2. Now that sucks! Incidentally, the struggle for efficiency in this area very much relates to Damián’s “blackboard” interview that I wrote about last week.
Interestingly, Jeff argued the point and suggested that cheating is perfectly OK if it supports the drama of the game. I found this to possibly be in conflict with his approach to making characters autonomous rather than scripted. Autonomy is on the “realistic” end of the spectrum and “scripted” on the other. The same can be said for realistic LOS checks compared to triggered events where the enemy automatically detects the player regardless.
John split the difference with the profound statement, “as long as the player doesn’t think you’re cheating, it’s totally OK.” Brilliant.
AI as the Focus or as a Focusing Tool
Supporting the overall design of how the game is to be experienced is just as important as the naked math and logic.
In response to a question about what Damián meant about “AI as a game mechanic,” he offered an interesting summation. He said that, from a design standpoint, the AI deciding when to take cover and when to charge is as important as how much a bullet took off your vitality. That is, supporting the overall design of how the game is to be experienced is just as important as the naked math and logic.
He also pointed out that the design of a game character often started out with discussions and examples of how that AI would react to situations. “The player walks into a room and does x and the enemy will do y.” Through those conversations, the “feel” of a type of character would be created and, hopefully, that is what the player’s experience of that type of character would end up being.
In my opinion, a lot of this is accomplished by being able to not only craft behaviors that are specific to a type of enemy (the easy way of differentiation) but also parameterizing personality into those agents so that they pick from common behaviors in different ways. That is, something that all enemies may do at some point or another but different types of enemies do at different times and with different sets of inputs. I went into this idea quite a bit in my lecture from the 2009 AI Summit (Breaking the Cookie-Cutter: Modeling Individual Personality, Mood, and Emotion in Characters) where I talked about incorporating personality into game agents.
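One simple way to express that parameterization (a sketch of the general idea, with weights invented for illustration): all enemy types score the same shared behavior list, but each archetype weights the inputs differently, so the same stimuli produce different picks.

```python
def score_behaviors(weights, stimuli):
    """Scale each shared behavior's current stimulus by the archetype's weight."""
    return {b: weights[b] * stimuli[b] for b in weights}

def pick(weights, stimuli):
    scores = score_behaviors(weights, stimuli)
    return max(scores, key=scores.get)

# Same situation, same behavior list -- different personalities.
stimuli   = {"attack": 0.5, "take_cover": 0.5}
berserker = {"attack": 1.8, "take_cover": 0.2}
coward    = {"attack": 0.3, "take_cover": 1.6}

print(pick(berserker, stimuli))  # attack
print(pick(coward, stimuli))     # take_cover
```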
The Golden Rules of AI (20:30)
Christian started things off by citing the adage, “it’s more important to not look stupid than to look smart.” No big surprise there.
The player feels good about killing someone if the kill is challenging.
John said that the AI must be “entertaining”. My only problem with this is that different people find different things entertaining. It’s kinda vague. Better to say that the AI must support the design. Both John and Jeff extended this idea by talking about providing a challenge… the player feels good about killing someone if the kill is challenging.
Damián sucked up to my buddy Kevin Dill a little bit by citing a quote that he made in our joint lecture at the GameX Industry Summit, The Art of Game AI: Sculpting Behavior with Data, Formulas, and Finesse. Kevin said that AI programmers must be an observer of life. I certainly agree with this notion… in fact, for years, my little tag at the end of my industry bios has said, “[Dave] continues to further his education by attending the University of Life. He has no plans to graduate any time soon.” In fact, Chapter 2 of my book is titled “Observing the World”… and Kevin was my tech editor. It should be intuitively obvious, even to the most casual observer, that Kevin stole that idea from me! Damián should have cited me in front of a bunch of game developers! So there!

Anyway, Damián’s point was not only how Kevin and I meant it — observing how people and animals do their thing, but also in being a very detailed and critical observer of your AI. There must be a discipline that scrubs out any little hiccup of animation or behavior before they pile up into a disjointed mess of little hiccups.
Jeff agreed to some extent but related something interesting from the development of F.E.A.R. — he said that most development schedules start by laying out the behavior for each type of character and then, if there is time, you go back and maybe try to get them to work together or with the environment, etc. With F.E.A.R., they started from the beginning with trying to work on the coordinated behaviors. With all the shouting and chaos going on with these guys working against you, you don’t notice the little glitches of the individuals quite as much.
Damián backtracked and qualified his comments… not just hunting down everything that is wrong… but rather everything that is wrong that matters.

Look but Don’t Touch
If you are fighting for your life, you don’t notice the details as much.
John brought up an interesting helpful hint. He talked about how, by turning on God mode (invulnerability), you can dispense with the fight for survival and really concentrate on how the AI is behaving. If you are fighting for your life, you don’t notice the details as much.
I have to agree. That’s why when I go to places like E3, I would rather stand back and watch other people play so I can observe what’s going on with the AI. (I always fear that demo people at E3 are going to be put off by my refusal to join in.) This is one of the problems I have when I’m playing for my Post-Play’em articles. I get so caught up in playing that I don’t look for things anymore. One solution is to have a house full of teenagers… that gives you ample time to just sit and watch someone else play games. I recommend it to anyone. (I believe John and Damián have started their respective processes of having teenagers… eventually.)
Emergence… for Good or Evil
If the AI can have the freedom to accidentally do something cool and fun, it also has the freedom to do something dumb or uninteresting.
In response to a question asking if AI has ever done anything unexpected, Damián spoke about how the sacred quest of emergence isn’t always a good thing. He said that emergent behavior must fall out of the AI having the freedom to do things. If the AI can have the freedom to accidentally do something cool and fun, it also has the freedom to do something dumb or uninteresting. Because of that, emergence has a really high cost in that it can be more of a drag on gameplay than the occasional gem it might produce.
Christian qualified the question a little more by asking if there was a time when the emergent behavior was found well before ship and the team thought it was something to encourage. He cited examples from his own experience. John talked about emergent gameplay that resulted from simple rules-based behavior in Bioshock. You could set a guy on fire, he would run to water, then you could shock the water and kill him. Was it necessary? No. Was it fun for the player to be creative this way? Sure.
To Learn or Not to Learn?
One question that was asked was about using machine learning algos. Christian started off with a gem about how you can do a lot of learning just by keeping track of some stats and not resorting to the “fancier stuff.” For example, keeping track of pitches the player throws in a baseball game doesn’t need a neural network. He then offered that machine learning can, indeed, be used for nifty things like gesture recognition.
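Christian’s “just keep stats” point is worth making concrete. A minimal sketch (the example is invented, but this is the whole trick): count what the player throws and predict the most frequent pitch. No neural network required.

```python
from collections import Counter

class PitchPredictor:
    """Learns by bookkeeping: a frequency table of observed pitches."""
    def __init__(self):
        self.history = Counter()

    def observe(self, pitch):
        self.history[pitch] += 1

    def predict(self):
        # Most frequently observed pitch so far, or None before any data.
        return self.history.most_common(1)[0][0] if self.history else None

p = PitchPredictor()
for pitch in ["fastball", "curve", "fastball", "slider", "fastball"]:
    p.observe(pitch)
print(p.predict())  # fastball
```

A real version would condition on count and situation, but it is still table lookups, not the “fancier stuff.”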
Oh… are you using neural networks?
Jeff admitted that he hates the question that comes up when people find out he does AI in games. They often ask him, “Oh… are you using neural networks?” I think this really hits home with a lot of game AI folks. Unfortunately, even a cursory examination of the AI forums at places like GameDev.net will show that people still are in love with neural networks. (My quip is that it is because The Terminator claims to be one so it’s real sexy.) Thankfully, the AIGameDev forum is a little more sane about them although they do come up from time to time. Anyway, Jeff said he has never used them — and that he’s not even sure he understands them. While he thinks that NNs used in-game like in Creatures or Black & White are cool, they are more gimmicky than useful given the ever-increasing possibility space in today’s games.
I Have a Plan
It is hard to accommodate the ever-changing goals of the human players.
Jeff acknowledged that the planner revolution that he started has graduated to hierarchical planners to do more complex things like squad interaction. However, one big caveat that he identified was when you incorporate humans into the mix alongside the squad. It is hard to accommodate the ever-changing goals of the human players.
This, of course, brought us back to the idea of conveying intent — one of the nasty little problem spaces of game AI. Damián explained it as a function of interpretation rather than simply one of planning. I agree that this is going to be a major issue for a long time — until we can crack the communication barrier with technologies such as voice recognition and natural language processing. Until we can tell our AI squad-mates the same thing we can tell our human squad-mates and expect a similar level of understanding, we are possibly at an impasse.
Moving Forward by Standing Still
As the worlds become more complicated, we have to do so much more work just to do the same thing we did before.
Someone asked the panel what sort of things they would like to see out of smaller, indie developers that the bigger teams might not be able to make. To set the stage, Damián responded with a Doug Church quote from the first AIIDE conference. Doug said that we have to “move forward by standing still.” As the worlds become more complicated, we have to do so much more work just to do the same thing we did before. (See Damián’s note about the LOS checks in Halo 2 and 3 above.) Damián suggested that the indie space has more opportunities to move forward here because indies aren’t expected to build massive worlds. Instead, they can focus on doing AI-based games.

This is what Matt Shaer’s interview with me in Kill Screen magazine was going after. With my Airline Traffic Manager as one of his examples, he spoke specifically about how it is the smaller, dedicated developer who is going after the deeper gameplay rather than the bigger world. I hope this is the case. We have seen a few examples so far… there are likely more to come.
Jeff cited interactive storytelling as a possible space for development in this area as well. This is what we are after with one of our sessions at the 2010 AI Summit when Dan Kline, Michael Mateas, and Emily Short deliver AI and Interactive Storytelling: How We Can Help Each Other.
AI on the GPU?
Someone mentioned seeing a demo of improved LOS checks by using the GPU and asked if AI programmers should be pushing that more. John somewhat wittily suggested that AI programmers won’t be using the graphics chips until someone can find a system where a graphics chip isn’t being used at all. Damián was a little more direct in saying that the last people he wanted to work in conjunction with was the graphics programmers. This reminds me of a column I wrote on AIGameDev a few years back, Thou Shalt Not Covet Thy Neighbor’s Silicon. The graphics guys finally got “their space.” They aren’t letting us have any of it any time soon!
Damián pointed out that most advanced AI needs a lot of memory and access to the game state, etc. Those are things that the GPU isn’t really good at. About the only thing that you could do without those things is perhaps flocking. I agree that there really isn’t a lot we can do with the GPU. He did concede that if ATI wanted to do a raycasting chip (rather than borrowing time from the hated graphics guys), that would be beautiful… but that’s about it.
Director as Designer?

Someone asked about the possibility of seeing the technology behind Left 4 Dead’s “AI Director” being used as a sort of game designer, instead. Damián pointed out that the idea of a “meta-AI” has been around for years in academic AI and that it is now really starting to get traction in the game world. I agree that the customized gameplay experience is a major turning point. I really like the idea, since it comes down to a lot of underlying simulation engine stuff. That’s my wheelhouse, really.
Where to Now?
They closed with some commentary on the future of game AI which I will leave to you to listen to. Suffice to say that for years, everyone has been expecting more than we have delivered. I’m not sure if that is because we are slacking off or because they have vastly underestimated what we have to do as AI designers and programmers. At least, with all the attention that is being brought to it through the Game AI Conference that AIGameDev puts on, the AI Summit now being a regular feature at GDC, and other such events, we are going to be moving forward at some speed.
Hang in there…

Promised AI Count in Halo Reach

February 14th, 2010

I was looking at my daily barrage of Google alerts on “game AI” (which tend to contain an annoying number of links to stories about Allen Iverson) and this article blurb from the Bitbag happened to catch my eye. It’s a preview of Halo Reach and seems to be a fairly thorough treatment. They talk about a lot of the different gameplay elements and how they differ from prior games in the franchise. They go into great detail about a lot of things. There was only a little bit of info about the AI, however. It said:

Bungie wants this game to feel a lot like Combat Evolved. They want Reach to be filled with open environments filled with enemies and allow you to figure out how you want to deal with the situation. There will be corridor battles like what we’ve seen in past Halos, but that will be balanced with the terrain of Reach. Reach will have a full weather system as well as Bungie saying they will have “40 AI and 20 vehicles” on screen at a time.

I thought that was kind of interesting simply because my first reaction was “is that all?” After a moment’s reflection, I realized that the number of AI on the screen in prior Halo games – and even in other shooters – is usually along the lines of a dozen… not 2 score.

On the other hand, in a game like Assassin’s Creed (my Post-Play’em observations), there were plenty of AI on-screen. However, the majority of them were just citizens who, for the most part, weren’t doing much beyond animation and local steering (until you engaged them for some reason). The question about Bungie’s promise above, then, is how much level-of-detail scaling will there be with those 40 on-screen AI characters?

Typical LOD scaling has a tiered system such as:
  • Directly engaged with the player
  • On-screen but not engaged
  • Off-screen but nearby
  • On-screen but distant
  • Off-screen and distant

Each of those levels (in order of decreasing AI processing demand) has a list of things that it must pay attention to or a different timer for how long to wait between updates. How much of this is Bungie tapping into with Reach, or are they all running at the same LOD?
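A minimal sketch of that tiered scheduling, assuming a fixed game tick. The tier names mirror the list above, but the update intervals are invented for illustration:

```python
# Tiered LOD scheduling sketch: distant agents "think" less often.
# Intervals are in ticks and are purely illustrative.

UPDATE_INTERVAL = {
    "engaged": 1,          # every tick: full perception, planning, etc.
    "onscreen_idle": 4,
    "offscreen_near": 8,
    "onscreen_far": 16,
    "offscreen_far": 60,   # little more than bookkeeping
}

def agents_to_update(agents, tick):
    """Return the agents whose LOD tier is due for an AI update this tick."""
    return [a for a in agents if tick % UPDATE_INTERVAL[a["tier"]] == 0]

agents = [
    {"name": "elite", "tier": "engaged"},
    {"name": "grunt", "tier": "onscreen_far"},
]
print([a["name"] for a in agents_to_update(agents, 16)])  # both due on tick 16
```

With something like this, 40 on-screen AI doesn’t have to mean 40 full AI updates per frame — which is presumably how a number like that becomes feasible at all.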
I know that the AI guys at Bungie are pretty sharp and go out of their way to pull off nifty stuff. In fact, ex-Bungie AI lead Damián Isla just did an interview with AIGameDev on blackboard architectures (my writeup) where he explained how a lot of the Halo 2 and 3 AI was streamlined to allow for better processing of many characters. I’m quite sure that much of that architecture survives in Halo Reach.
Anyway, I just thought it was interesting to see the promised numbers. It will be even more interesting to see how the marketing blurb actually plays out on the screen in the final product.

Random Map Generation for Warzone 2100

February 12th, 2010

I ended up on this page via a tweet as I was ingesting my morning caffeine (meaning it was a sort of idle, random click). What I found, however, was strangely compelling. The game, Warzone 2100, seems to be a small indie project of some sort having to do with finding oil wells as resources in a post-apocalyptic world. The page I was linked straight into, however, was specifically about their random map generation tool. Specifically, I read the “about” page on a piece called “Diorama” and how it works. Interesting stuff.

The page starts out with this blurb:

This article is part technical documentation, part feature list and also part FAQ. It intends to explain why Diorama was written the way it was, why “it takes so long” and what it can do that would be extremely hard for other random map generators to achieve.

I had to wonder if that was meant to be pretentious or not. After all, random map generation has been around for quite some time – and with excellent results in some cases (e.g. I was never disappointed with a random map in Empire Earth).

Anyway, it steps through the procedures that they use for generating the random terrain, complete with cliffs, etc., adding roads, textures, and interesting features.
The most important thing that they emphasize (IMHO) is how they transition from the blocky “first pass” into a more natural-looking layout. For both cliffs and roads they use the word “jitter” fairly often to explain how they fuzzied things up. The before-and-after shots show how dramatic the effect is.
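My guess at what “jitter” amounts to here is something like midpoint displacement: subdivide each segment of the blocky first-pass edge and nudge the new points by a small random offset. A quick sketch under that assumption:

```python
import random

# One pass of midpoint displacement over a 2D polyline (a cliff or
# road edge). "amount" controls how far midpoints may be nudged.

def jitter(points, amount, rng=random.Random(42)):
    out = [points[0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        # Displace the midpoint of each segment by a random offset.
        mx = (x0 + x1) / 2 + rng.uniform(-amount, amount)
        my = (y0 + y1) / 2 + rng.uniform(-amount, amount)
        out.append((mx, my))
        out.append((x1, y1))
    return out

blocky = [(0, 0), (10, 0), (10, 10)]
rough = jitter(blocky, amount=1.5)
print(len(rough))  # 5 points: the original 3 plus 2 jittered midpoints
```

Run a pass or two of this and the right angles of the grid-based layout melt into something far more organic, which matches what their screenshots show.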
I think one of the more compelling mentions on this page, however, was that they are attempting to use answer set programming (ASP) to address the initial set-up of the starting locations for players and the oil wells. From their description:

ASP is a declarative approach to solving search problems, so you write a description of the problem (in a logic like language) and then give this description to a solver (kind of like a theorem prover) which will come up with valid model (solution) of the problem. But not just any solution, we will arrive at the optimal solution, and we can prove that the solution is optimal. The down side is that generating this can take exponential time (this is why requesting very large maps can take a while) but it allows both local (“a base must have a given number of entrances”) and global (“every base must be able to reach every other base”) conditions on the map to be expressed simply, cleanly and efficiently. Critically we don’t have to change any of the algorithms when the conditions on the map change, we just change the description that gets input into the solver.

I don’t know if that is necessarily the best approach for this. Does it work? Probably. Is it overkill? Maybe. They provide other descriptions of methods that could be used (they even mention genetic algorithms) prior to offering ASP as a solution. I think, however, that there were some serious gaps in that list. I don’t want to get too deep into how I would tackle the problem… after all, I still don’t have enough caffeine in me. That’s not the point of this anyway… I just want to give kudos to them for looking into novel solutions.
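For readers who haven’t run into the declarative style they describe: the appeal is that the search code never changes, only the conditions do. This toy is not ASP (a real solver like clingo enumerates models far more cleverly), but it shows the same separation in miniature on a hypothetical 1D map:

```python
from itertools import combinations

# Toy generate-and-test stand-in for an ASP solver: candidate site
# placements are generated, then filtered through plain predicates.
# Changing map requirements means editing CONDITIONS, not solve().

def min_spacing(placement, dist=3):
    """Local condition: no two sites too close together."""
    return all(abs(a - b) >= dist for a, b in combinations(placement, 2))

def within_bounds(placement, size=10):
    """Global condition: every site fits on the map."""
    return all(0 <= p < size for p in placement)

CONDITIONS = [min_spacing, within_bounds]

def solve(num_sites, size=10):
    # Exhaustive enumeration stands in for the solver's model search.
    for cand in combinations(range(size), num_sites):
        if all(cond(cand) for cond in CONDITIONS):
            return cand
    return None  # over-constrained: no valid placement exists

print(solve(3))  # (0, 3, 6) -- first placement meeting every condition
```

The exponential-time caveat they mention falls straight out of this picture: the candidate space explodes with map size, and the solver’s job is to prune it without giving up the optimality guarantee.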

Anyway, check it out. It’s an interesting read.





Content 2002-2018 by Intrinsic Algorithm L.L.C.
