IA on AI

Posts Tagged ‘automated testing’

Phil Carlisle on Automated Game Testing

Sunday, July 27th, 2008

At the recent Paris Game AI Workshop put on by AIGameDev (et al.), Phil Carlisle spoke about automated testing in game development. His major point was that, by decoupling the game engine from the display engine, you can run the core simulation very rapidly, and with far less processing power. You can run hundreds, thousands, or even millions of simulations overnight, for example. Logging the resultant data exposes glitches and imbalances that you may have overlooked.
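To make that idea concrete, here is a minimal sketch of what a headless batch run might look like. Everything here is hypothetical — the `World` class, its `gold` economy stat, and the `run_headless` driver are stand-ins I made up, not anything from Phil's talk — but it shows the shape of the technique: no rendering, many seeded runs, log the results, then scan them for outliers.

```python
import random

# Hypothetical minimal "game engine" fully decoupled from any display
# layer. The simulation rules here are placeholders.
class World:
    def __init__(self, seed):
        self.rng = random.Random(seed)  # seeded for reproducible runs
        self.gold = 100                 # some balance-sensitive stat
        self.tick = 0

    def step(self):
        # Stand-in for the real per-tick simulation rules.
        self.gold += self.rng.randint(-3, 5)
        self.tick += 1

def run_headless(runs=1000, ticks=500):
    """Run many independent seeded simulations and collect end states."""
    results = []
    for seed in range(runs):
        world = World(seed)
        for _ in range(ticks):
            world.step()
        results.append(world.gold)
    return results

results = run_headless()
# Scan the logged data for imbalances, e.g. economies that went bankrupt.
bankrupt = sum(1 for g in results if g < 0)
print(f"min={min(results)} max={max(results)} bankrupt_runs={bankrupt}")
```

Because each run is seeded, any glitch the overnight batch flags can be replayed deterministically the next morning.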

Another important benefit of this method of development is that it allows the engine developers to work on things long before they are ready to go on-screen. Some types of games can be “viewed”, if not actually “played”, in a regular DOS window. This is what I did in my initial development of Airline Traffic Manager. I was able to do a lot of work on the core algorithms such as passenger generation, moving the aircraft across the map (in Great Circle routes!), the basic state machine interactions between things such as the aircraft and the gates, etc. When it came time to hook up the display, the game engine was ready to roll.
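The aircraft/gate interaction mentioned above could be sketched as a small state machine like the one below. To be clear, the states and transitions here are my guesses for illustration — they are not Airline Traffic Manager's actual design — but they show the kind of logic you can build and exercise with no display attached.

```python
from enum import Enum, auto

# Illustrative aircraft states; real games would have many more.
class AircraftState(Enum):
    EN_ROUTE = auto()
    HOLDING = auto()
    AT_GATE = auto()
    DEPARTED = auto()

class Gate:
    def __init__(self):
        self.occupant = None  # which aircraft, if any, holds this gate

class Aircraft:
    def __init__(self, name):
        self.name = name
        self.state = AircraftState.EN_ROUTE

    def arrive(self, gate):
        # An arriving aircraft takes a free gate, or holds if it's busy.
        if gate.occupant is None:
            gate.occupant = self
            self.state = AircraftState.AT_GATE
        else:
            self.state = AircraftState.HOLDING

    def depart(self, gate):
        # Only the occupying aircraft can release the gate.
        if gate.occupant is self:
            gate.occupant = None
            self.state = AircraftState.DEPARTED

gate = Gate()
a, b = Aircraft("AA101"), Aircraft("BA202")
a.arrive(gate)   # gate free  -> AA101 goes AT_GATE
b.arrive(gate)   # gate busy  -> BA202 goes HOLDING
a.depart(gate)   # gate freed -> AA101 DEPARTED
```

State machines like this are trivially testable in bulk: throw thousands of randomized arrival/departure sequences at them overnight and assert that no aircraft ever ends up in an impossible state.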

This also reflects back to a column I wrote on AIGameDev back at the end of April – Automated AI Testing: Unraveling the Combinatorial Explosion. I recommend checking that out for a bit more insight. (Or just my rambling.)

2 AIGameDev Columns

Friday, May 9th, 2008

Because of the web site issues, I didn’t announce my last two weekly Developer Discussion columns at AIGameDev.com. After having to take a week off (during which Alex filled in for me), I wrote Automated AI Testing: Unraveling the Combinatorial Explosion, wherein I asked how we can legitimately go about performing tests on our AI code.

Is this something that needs to be explored further? And what are some potential solutions for finding things that are not there, making sure that behaviors fall within parameters, or at least look reasonable? And most importantly, how do we make sure that we have explored all the dark nooks and crannies of the potential state space at the far reaches of that combinatorial explosion, so that our delicate cosmic balance doesn’t get sucked into an algorithmic black hole?
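One answer to the "behaviors fall within parameters" question is randomized invariant checking: rather than enumerating the whole state space, hammer the decision function with random inputs and assert properties that must always hold. The sketch below assumes a made-up tuning function, `choose_aggression` — it is not from the column, just an example of the pattern.

```python
import random

# Hypothetical AI tuning function: aggression rises with nearby
# enemies and falls as health drops, clamped to [0, 1].
def choose_aggression(health, enemies_nearby):
    base = 0.5 + 0.05 * enemies_nearby - 0.4 * (1.0 - health)
    return min(1.0, max(0.0, base))

def fuzz_aggression(trials=10000, seed=0):
    """Probe random corners of the input space for invariant violations."""
    rng = random.Random(seed)
    for _ in range(trials):
        h = rng.random()            # health in 0.0 .. 1.0
        e = rng.randint(0, 19)      # nearby enemy count
        a = choose_aggression(h, e)
        # Invariant 1: output always stays within designer parameters.
        assert 0.0 <= a <= 1.0, "aggression out of range"
        # Invariant 2: adding a threat never *lowers* aggression.
        assert choose_aggression(h, e + 1) >= a, "non-monotonic response"
    return trials

fuzz_aggression()
```

This doesn't prove the dark corners are safe, but a seeded overnight run covering millions of trials shrinks the unexplored region considerably, and any failure comes with a reproducible input.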

In my article from this last week, I touched on the furor surrounding the $100-million behemoth that is GTA 4… and how, even with that massive budget, one of the bigger gripes about the game is the AI.

Sandbox games – or at least free-roaming RPGs – have become more and more prevalent of late. With the likes of the GTA series, Assassin’s Creed, the Fable games, and Saints Row, the latest cool thing to do is to develop a massive open world where the plot is almost reduced to a mild suggestion. But there are recurrent themes of developmental difficulty in those projects.

Is it possible for us to do a reasonable job on the AI of “sandbox”-style games? If so, how do we go about it?

Please read the full articles and comment over there… there are already some discussions surrounding my typically controversial topics.