Ok… I’ve known about this for about 6 months (since I was in on the original planning phases) but, because things are finally official, I figure it is time to make the announcement here.
The new AI Game Programmers Guild – of which I am a founding member – is putting on a 2-day AI Summit at the 2009 Game Developers Conference. We have a lot of great people putting together 14 hours’ worth of lectures and panels on the current state of game AI as well as our vision of its future.
For that thin slice of the industry that may visit this blog and doesn’t already know about AIGameDev’s new members area, you are definitely going to want to jump over there and check out what’s going on. Alex Champandard’s pad has been the best place for game AI info over the past year, and now he is really stepping it up a notch. As of today, he has started a new members area that will not only have a lot of papers and other research material; he has also been lining up a lot of live interviews and workshops with industry experts. Those are conducted online with audio, video, and an interactive whiteboard. The few that he has conducted for free so far have been informative. Here’s the schedule of what’s going on for this fall.
If we are going to have an AI that’s tagging along behind us for hours on end, wouldn’t it be better for us to love him/her/it? Let’s face it, if you are playing 10 or 20 hours of game content, any form of repetitive AI may have you digging through the manual or scouring cheat codes online in order to find the “slap your sidekick upside the head” control. You can’t simply get away with seven seconds… or even five minutes of believable behavior. Beyond that, the sidekick needs to be more than just something you are entertained and amused by. You need to be able to depend on it… as if it were your lifelong partner.
The ensuing discussion spurred many great comments. Take a gander at it and chime in with your opinion (or a solution?).
Another installment of my Developer Discussion column at AIGameDev.com. In Can Beavis and Butthead Improve Your Game Development Methodology, I reminisce a little about the radio call-in show “Rockline”. Many of the callers asked the bands how they approached writing their songs… lyrics or music first? I turn that question into one to ask of game developers… and specifically game AI programmers. How do we write our code? Bottom up or top down? Somewhere in between? Read the whole column to see how it all played out.
I used a metaphor of a Swiss Army Knife… lots of cool tools that mostly go unused. Here’s a blurb from the intro.
Looking through web sites, books, and the various conferences such as GDC and AIIDE, there is an endless parade of esoteric, seemingly mystical techniques. As perpetual students in our rapidly-changing art, we read and attend with the reverent demeanor of an exploratory scientist. We soak up all the knowledge and ponder the applications and implications. We engage in heady, philosophical discussions with our peers. We exclaim our exuberance and proclaim our allegiance to new methodologies. And then, upon returning home to our individual, pragmatic realities, we resign ourselves to the relatively bland, yet utilitarian knife and screwdriver: Finite State Machines and Pathfinding.
And what of the other tools in the Swiss Army Knife of game AI? What about the planning and fuzzy logic? The lofty towers of neural networks and genetic algorithms? Game theory and reasoning under uncertainty? Influence maps? Minimax plays a killer game of Tic Tac Toe, right? Flocking? We’ve all seen articles on flocking! Not being used? Wow… there sure are a lot of tools in this knife. We can all see places where they may come in handy. Granted, some of them may be like trying to cut firewood with a 2-inch saw – but aren’t some of them truly useful? So why don’t we use them in the real world of creating our pretend worlds rather than simply pretending we are going to use them in the real world?
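The "knife and screwdriver" of the quote above — the humble finite state machine — really is only a few lines of code, which is exactly why it gets used. The following is a minimal sketch; the guard, its states, and the event names are hypothetical examples, not anything from the column.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    CHASE = auto()
    ATTACK = auto()

# Transition table: (current state, event) -> next state.
# Anything not listed is simply ignored.
TRANSITIONS = {
    (State.IDLE, "player_seen"): State.CHASE,
    (State.CHASE, "player_in_range"): State.ATTACK,
    (State.CHASE, "player_lost"): State.IDLE,
    (State.ATTACK, "player_out_of_range"): State.CHASE,
}

class GuardFSM:
    """A hypothetical guard NPC driven by a table-based FSM."""

    def __init__(self):
        self.state = State.IDLE

    def handle(self, event):
        # Stay in the current state if no transition is defined.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

A table-driven FSM like this is trivial to author and debug, which goes a long way toward explaining why it crowds out the fancier tools in the knife.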
Jump on over to AIGameDev and read the full column. And, since it is a developer discussion column, please take a moment to continue the poll and post a comment.
Is this something that needs to be explored further, though? And what are some potential solutions to find things that are not there, make sure that behaviors fall within parameters, or look reasonable? And most importantly, how do we make sure that we have explored all the dark nooks and crannies of the potential state space at the far reaches of that combinatorial explosion, so that our delicate cosmic balance doesn’t get sucked into an algorithmic black hole?
In my article from this last week, I touched on the furor surrounding the $100-million behemoth that is GTA 4… and how, even with that massive budget, one of the bigger gripes about the game is the AI.
Sandbox games – or at least free-roaming RPGs – are becoming more and more prevalent of late. With the likes of the GTA series, Assassin’s Creed, the Fable games, or Saints Row, the latest cool thing to do is develop a massive open world where the plot is almost reduced to a mild suggestion. But there are recurrent themes of developmental difficulty in those projects.
Is it possible for us to do a reasonable job on the AI of “sandbox”-style games? If so, how do we go about it?
Please read the full articles and comment over there… there are already some discussions surrounding my typically controversial topics.
Time again for my weekly Developer Discussion column at AIGameDev.com. This issue is based on some observations I made at the 2008 GDC. I was curious about how many AI programmers didn’t know each other’s sub-fields of AI. Sure, the field is getting bigger, and therefore more specialization is needed in areas such as animation AI. However, I was concerned about how many people would say something to the effect of “I don’t do simulations.”
Why aren’t people more interested in using “simulation” techniques in the AI of individual characters? It seems to me that the concepts that make up – or at least underlie – simulation would be the spells that we could all cast. Everything we as AI programmers do should be based on the idea of simulating something.
This time, I invoke Chris Hecker and his speculation on whether or not we will find “the Photoshop of AI”. To quote from the column:
In his lecture entitled Structure vs. Style, he pointed out that the cusp in graphics presentation technology was when the atomic building block was settled on – that of the texture-mapped triangle. [However], there is a major component of the game experience that hasn’t yet found the spark for its Big Bang of development such as the one that occurred in the graphics universe. Game AI has yet to find that “one thing” that their world can be reduced to. And, as such, there can be no “Photoshop for AI”, as Chris put it. Yet.
If you are an AI developer or even just interested in game AI, please wander over and read the whole column… and then weigh in on the subject. What is that single core component that we need in order for AI to make the leap to the next level? Does it even exist?
This time, I touch on the concept of Chaos Theory and how the AI buzzword of emergent behavior is actually cut from the same cloth. They are both entirely deterministic in that they are composed of a finite set of distinct rules – and yet their strength (and weakness) is in that they look complex… even to the point of looking random at times.
But is this good or bad? To pull a brief quote from the column:
So our agent-based models are really an implementation of Chaos Theory. That is, they are both complex systems that result entirely deterministically from relatively simple models. However, as Jurassic Park so elegantly portrayed for us, even deterministic models can spin wildly out of control. There are plenty of examples of very simple systems whose results can vary widely – almost looking “broken” simply because of the interaction of those simple rules.
And that is the rub. That is the beast that waits below the surface to reach up and wrap its combinatorial tentacles around our placid simulation and drag it down into the abyss of scathing reviews. And we never know if and when it will strike. Perhaps the name “Chaos Theory”, although not an appropriate term for describing the system itself, was an appropriate one after all for describing the potential results of that system.
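The deterministic-but-unpredictable behavior the quote describes can be seen in something far smaller than a game: the textbook logistic map. This is a generic illustration, not anything from the column — one line of fully deterministic math, yet two starting points that differ in the sixth decimal place end up on completely unrelated trajectories.

```python
def logistic(x, r=4.0):
    # One step of the logistic map. Fully deterministic: no randomness anywhere,
    # yet at r=4.0 the system is chaotic and tiny input differences explode.
    return r * x * (1.0 - x)

def trajectory(x0, steps=30, r=4.0):
    """Iterate the map from x0 and return every intermediate value."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1], r))
    return xs

a = trajectory(0.400000)
b = trajectory(0.400001)  # nearly identical starting point
# Within a couple dozen steps the two runs bear no resemblance to each other.
```

If one line of arithmetic can behave this way, it is no surprise that a few dozen interacting behavior rules in an agent simulation can look "broken" without a single bug in sight.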
Read the whole column over at AIGameDev.com. And please, these are discussion columns. If you have a comment, by all means leave it!
The title of this column, “Does This Mistake Make Your AI Look Smarter?”, is a tongue-in-cheek way of pointing out that there are questions about our AI that are really difficult for us to answer honestly. Skipping over the amusing analogy at the beginning of the column, this is the crux of it:
…since the inception of game AI, we have been trying to have our agents make better decisions, not worse ones. We have been trying to eliminate stupid behaviors, not encourage them. We have been striving for more realism, not less… but wait a minute… Isn’t this where we might be lying to ourselves? And do we really want to hear the answer if it means we have been wrong all along?