IA on AI

Posts Tagged ‘Brainworks’

Bots with nowhere to go?

Monday, February 23rd, 2009

Ted Vessenes over at Brainworks posted an interesting little observation about his Quake 3 bot and FPS AI in general. Here’s a quick quote:

bots use items as the primary motivating factors for deciding where to go. The pathing and routing code will tell you how to get from point A to point B, but all of that is meaningless if you don’t know what point B is. The typical strategy is for the bot to pick up the items that help it the most and require the least amount of movement. If the bot can’t find any enemies, it will head to the nearest generally useful item and hope a target wanders by. Item placement is the core component of goal selection in BrainWorks.

While I’m cool with all of that and can see the point he is trying to make, I don’t necessarily agree with the following (emphasis mine).

When two human players are in a level that contains no items, however, they don’t get confused at all. Instead they strategically run and hide around the map, using cover to create good shots on the opponent while dodging their return fire. What BrainWorks is missing (and all first person shooter AI bots for that matter) is this dynamic tactical movement. You just don’t notice it’s missing from BrainWorks because of the item pickup code.

It’s very difficult to design AI that can recognize and avoid dangerous areas of terrain while simultaneously taking advantage of opponents in those spots. But it’s not impossible.

If that is the design of the bot, certainly that is a problem. This used to be a problem in FPS games, but I don't believe it is as much anymore. Most AI isn't built in the same purely functional framework as a bot. AI is also designed to 'do nothing' in a reasonably convincing fashion. Many games now have plenty of things for the AI to do when it is idle or lacking any obvious goals. Games such as STALKER (I refuse to put the periods in there), Far Cry, Crysis, Far Cry 2, and even Left 4 Dead have many idle behaviors.

Specifically regarding tactical movement, however, many of the recent generation of FPS games have features such as seeking and using cover, finding good firing positions, and so on. Ted addresses this somewhat by suggesting potential algorithms to use:

Here’s the basic algorithm I have in mind. The first objective is to create a “danger map” for the level. The danger map estimates how tactically risky it is to be in each area of the level. Note that “area” needs to be relatively small: roughly one to two square meters on the ground. Too much larger and the algorithm will get muddied, mistaking good regions for bad. Too much smaller and the computation will become prohibitive.

He is correct in his suggestion. Much of the tactical awareness of an AI agent is done either by subdividing the map or by using many pre-positioned cover points. Many variations of these techniques are already in use. AI Game Programming Wisdom 3, for instance, had a number of exceptional articles on FPS tactics. Damian Isla (Halo 2 and 3) wrote about Probabilistic Target Tracking and Search Using Occupancy Maps, Remco Straatman and William van der Sterren (CGF-AI) wrote a brutally cool article on pretty much what Ted defines above entitled Dynamic Tactical Position Evaluation, and Christian J. Darken and Gregory H. Paull wrote Finding Cover in Dynamic Environments which even included the height and visibility of terrain. And those three articles were back to back in that book!
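To give a flavor of the subdivision approach, here is a toy sketch of a "danger map" on a 2D grid. This is my own illustration, not Ted's algorithm or the method from any of the articles above: the grid, cell size, and line-of-sight test are all simplifying assumptions. The idea is just that a cell's danger can be scored by its exposure, i.e. how many other walkable cells have a clear shot at it.

```python
def line_of_sight(grid, a, b):
    """Walk the straight line from a to b; blocked if it passes a wall (1)."""
    (x0, y0), (x1, y1) = a, b
    steps = max(abs(x1 - x0), abs(y1 - y0))
    for i in range(1, steps):
        x = x0 + round(i * (x1 - x0) / steps)
        y = y0 + round(i * (y1 - y0) / steps)
        if grid[y][x] == 1:
            return False
    return True

def danger_map(grid):
    """Score every open cell (0) by how many open cells can see it."""
    cells = [(x, y) for y, row in enumerate(grid)
                    for x, c in enumerate(row) if c == 0]
    return {c: sum(1 for other in cells
                   if other != c and line_of_sight(grid, other, c))
            for c in cells}

# A tiny room with a pillar at (2, 1): cells it shields score lower (safer).
room = [
    [0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
danger = danger_map(room)
```

Even this naive all-pairs version shows why cell size matters the way Ted describes: the visibility pass is quadratic in the number of cells, which is exactly the computation that becomes prohibitive as the cells shrink.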

That is why I wholeheartedly disagree with Ted’s statement:

It’s far too hard to analyze the geometry of a level to create a danger map, although some simple techniques could be used for a first pass analysis.
This all sounds well and good on paper, but if it were actually that easy, it would have been done already. You might be wondering what the catch is.

That’s simply not true. Sure, it helps to have some pre-processing done, but it can be done without it as well.

Anyway, Ted is usually on his game over there at Brainworks. (I’m hoping to get his butt over to GDC so I can meet him!) However, on this one he might have been outside his comfort zone. After all, writing a bot isn’t exactly the same as writing AI.

The Importance of the Right Numbers

Monday, April 7th, 2008

Wow… back-to-back great posts from Ted Vessenes on his blog, Brainworks. In this one, he writes about a concept very close to my heart: mathematical balancing. In the post, Getting it Just Right, he mentions two types of parameters: sensitive and insensitive numbers. From the post:

A sensitive number is extremely hard to tweak, because you won’t see the right behavior until you get things “just right”. If the value doesn’t encode the right concept, then that “just right” state won’t exist, but you’ll never know that. You’ll just see how all kinds of different values don’t work in different ways.

And an insensitive number generally just has an impact when it crosses an important boundary (for example, driving 56 instead of 55, or your gas tank being 0% full instead of 1% full). There’s often no indication where this interesting numerical boundary might be.

I mentioned in a comment I posted there that I wished he had written that a few weeks ago. The ideas he presents map very well onto a couple of my recent columns over at AIGameDev. One, on Chaos Theory and Emergent Behavior, hits a similar nerve as far as tiny changes having big ramifications. The other was about intelligent-looking errors. Both are applications of trying to get those parameters into the ever-elusive "sweet spot", both in the short term (believability) and the long term (stability). I would have liked to quote him in those columns.
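The distinction can be sketched in a few lines. This is my own toy example, not code from Ted's post: the speed-limit threshold stands in for an insensitive number, and I use the standard logistic map as the sensitive one, since it ties back to the chaos-theory column above.

```python
# An *insensitive* number only matters at a boundary. Nudging a speed
# of 50 up to 54 changes nothing; crossing 55 flips the outcome.
def gets_ticket(speed, limit=55):
    return speed > limit

# A *sensitive* number has no flat regions. In the logistic map,
# trajectories from nearly identical growth rates drift far apart,
# so there is no way to sneak up on the "just right" value by trial.
def logistic_trajectory(r, x=0.5, steps=60):
    out = []
    for _ in range(steps):
        x = r * x * (1 - x)   # classic chaotic iteration for r near 4
        out.append(x)
    return out

a = logistic_trajectory(3.9)
b = logistic_trajectory(3.9001)  # a 0.003% tweak to the parameter
divergence = max(abs(p - q) for p, q in zip(a, b))
```

With the insensitive number you can binary-search your way to the boundary; with the sensitive one, every candidate value fails in a different way, which is exactly the tuning misery Ted describes.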

I’m definitely keeping an eye on Ted’s blog from now on!