Posts Tagged ‘Alex Champandard’

Damián Isla Interview on Blackboard Arch.

Thursday, February 11th, 2010
In preparation for the GDC AI Summit (and the inevitable stream of dinner conversations that will be associated with GDC), I have tried to catch up on playing some games and also getting current with papers and interviews. On the latter point, Alex Champandard at AIGameDev keeps me hopping. It seems he is almost constantly interviewing industry people on the latest and greatest stuff that is happening in the game AI realm.
A few weeks back, he interviewed Damián Isla about blackboard architectures and knowledge representation. Seeing as I always learn something from Damián, I figured that interview needed to go to the top of the list. Here are some of my notes as I listen through the interview.
Make it Bigger
Damián’s first point to Alex was that a major benefit of a blackboard architecture was scalability. That is, putting together a rule-based system that performs a single decision is simple… the hard part is when you have 10,000 of those rules going on at the same time.
In a similar vein, he said it was about shared computation. Many of the decisions that are made use the same information. Blackboards, to Damián, are an architectural feature that can decouple information gathering and storage from the decisions that are made with that information. If one part of a decision needs to know about a computed value regarding a target, that information could potentially be used by another portion of the decision engine entirely… even for a competing action. By calculating the requisite information once and storing it, the decision algorithms themselves can simply look up what they need.
This is similar to the approach that I espouse in my book. I don’t directly say it, but in a way it is implied. With my approach of compartmentalizing tiers of calculations, many of the individual layers of information are processed independently of the final decision. The information only needs to be collected and tabulated once, however; the various decision models can simply look up what they need. In a multi-agent system, different agents can use much of the same information as well. While distance to a target may be personal, the threat level of a target (based on weaponry, health, cover, etc.) may be the same for all the enemies. That threat level can be calculated once and saved for everyone to use.
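As a quick sketch of how that might look in code (all names and the scoring formula here are my own invention, not from any shipped engine): the information-gathering pass computes the threat level once per target and caches it on a shared blackboard, and any number of decision models then just look it up.

```cpp
#include <unordered_map>

// Hypothetical shared blackboard: threat levels are computed once per
// target and cached so every agent's decision code can simply read them.
struct TargetInfo {
    float weaponStrength;
    float health;        // 0-100
    float coverQuality;  // 0 = exposed, 1 = fully covered
};

class SharedBlackboard {
public:
    // Called once per update cycle by the information-gathering pass.
    void UpdateThreat(int targetId, const TargetInfo& info) {
        // Toy scoring formula, for illustration only.
        float threat = info.weaponStrength * (info.health / 100.0f)
                       * (1.0f - info.coverQuality);
        threatByTarget_[targetId] = threat;
    }

    // Called by any number of agents/decision models -- no recomputation.
    float GetThreat(int targetId) const {
        auto it = threatByTarget_.find(targetId);
        return it != threatByTarget_.end() ? it->second : 0.0f;
    }

private:
    std::unordered_map<int, float> threatByTarget_;
};
```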
Back to Damián…
He mentioned that at the Media Lab at MIT they used blackboards for many things like logging, testing, and debugging. That is something I hadn’t really thought of. They also had observer systems running over a network; these shared parts of the blackboard info so that the fully running games were doing everything but the thinking.
Alex reiterated that a blackboard is more of an architectural feature rather than a decision process. Damián confirmed that the history of blackboards involved planners but that we are now using them inside reactive systems as well.
Blackboards vs. Lots of Static Variables
At Alex’s prompting, Damián suggested that the blackboard is far more dynamic than having just many billions of values specified in a character.h file. In fact, he likened it much more to having one unified interface to all of your game data beyond just that of the character in question.
Do all agents need their own unique blackboard?
I like the fact that Damián’s initial answer was a refrain that I repeated throughout my book… “it totally depends on the game you’re making.” Unfortunately, that is a major stumbling block to answering any architectural or procedural question.
He went on to say something similar to what I mention above… that you have to mix them. There are some pieces of information that individuals track and others that are available to the group as a whole. Visibility of targets, for example, is individual. Goals for a squad, on the other hand, are something that can be shared and referenced.
Direct Query vs. Blackboard?
The most important point Damián made here had to do with “redundancy”. If you have a situation where you can guarantee you only need something once, then accessing it directly from a behavior is fine. If multiple behaviors might use the same data, access it once and store it on the blackboard.
The answer to avoiding the redundancy issue was “abstraction”. That’s what a blackboard represents. It gives that intermediate layer of aggregation and storage. He actually referred to it as “sub-contracting” the information gathering out to the blackboard system. The difference is that the blackboard isn’t simply passing on the request for information, it is actually storing the information as a data cache so that it doesn’t need to be re-accessed.
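A minimal sketch of that caching layer might look something like this (the class and its interface are hypothetical, purely for illustration): the first read in a given frame pays for the expensive world query, and every subsequent read is a cheap lookup.

```cpp
#include <functional>
#include <utility>

// Sketch of the "sub-contracting" idea: the blackboard caches a computed
// value so repeated reads within a frame don't re-query the world.
template <typename T>
class CachedEntry {
public:
    explicit CachedEntry(std::function<T()> compute)
        : compute_(std::move(compute)) {}

    // First read this frame pays the cost; later reads are free lookups.
    const T& Get(int frame) {
        if (frame != lastFrame_) {
            value_ = compute_();  // expensive query (e.g. an LOS raycast)
            lastFrame_ = frame;
        }
        return value_;
    }

private:
    std::function<T()> compute_;
    T value_{};
    int lastFrame_ = -1;
};
```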
One very important point that he made was that there was some intelligence to the blackboard in deciding how often to ask for updates from the world. This is a huge advantage in that the process of information gathering for decisions can be one of the major bottlenecks in the AI process. LOS checks, for example, are horribly time-consuming. If your system must ask for all the LOS checks and other information every time a BT is run (or multiple times in the same BT), there can be a significant drain on the system. However, if you are able to time-slice the information gathering in the background, the only thing the BT needs to do is access what is on the blackboard at that moment.
Incidentally, this is a wonderful way of implementing threading in your games. If your information gathering can continue on its own timeline and the behaviors need only grab the information that is current on the blackboard, those two processes can be on separate threads with only the occasional lock as the blackboard is being updated with new info.
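Here is a bare-bones sketch of that two-thread split, assuming a snapshot-copy approach (all names and the 50ms cadence are illustrative): the gatherer refreshes the blackboard on its own timeline, and the decision thread holds the lock only long enough to copy the current snapshot.

```cpp
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

// What the gatherer publishes each pass; fields are placeholders.
struct PerceptionSnapshot {
    float nearestThreat = 0.0f;
    bool  targetVisible = false;
};

class ThreadedBlackboard {
public:
    // Decision thread: lock briefly, copy, and go.
    PerceptionSnapshot Read() {
        std::lock_guard<std::mutex> lock(mutex_);
        return snapshot_;
    }

    // Gatherer thread: runs at its own cadence, independent of the BT.
    void GatherLoop(std::atomic<bool>& running) {
        while (running) {
            PerceptionSnapshot fresh = ExpensiveWorldQueries();
            {
                std::lock_guard<std::mutex> lock(mutex_);
                snapshot_ = fresh;  // the only point of contention
            }
            std::this_thread::sleep_for(std::chrono::milliseconds(50));
        }
    }

private:
    PerceptionSnapshot ExpensiveWorldQueries() {
        // Stand-in for LOS casts, threat scoring, etc.
        return PerceptionSnapshot{1.0f, true};
    }

    std::mutex mutex_;
    PerceptionSnapshot snapshot_;
};
```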
This goes back to my interview with Alex about a year ago where he asked about the scalability of the techniques I write about in my book. My point was that you don’t have to access all the information every time. By separating the two processes out with this abstraction layer as the hand-off point, it keeps the actual decision system from getting bogged down.
In order to also help facilitate this, Damián spoke of the way that the information gathering can be prioritized. Using LOS checks as his example, he talked about how the Halo engine would update LOS checks on active, armed enemies every 2 or 3 frames, but update LOS checks on “interesting but not urgent” things like dead bodies every 50 frames. Sure, it is nice for the AI to react to coming around a corner and seeing the body, but we don’t need to constantly check for it.
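That prioritization can be as simple as a per-entry refresh interval. A sketch (the 3- and 50-frame intervals mirror Damián’s example; everything else is invented):

```cpp
// Priority-based refresh: hot data (armed enemies) is re-checked every few
// frames, cold data (dead bodies) far less often.
struct LosEntry {
    bool visible     = false;
    int  lastUpdated = -1000;  // frame of last raycast
    int  interval    = 3;      // frames between refreshes
};

bool RefreshIfDue(LosEntry& entry, int currentFrame, bool (*raycast)()) {
    if (currentFrame - entry.lastUpdated >= entry.interval) {
        entry.visible = raycast();        // the expensive check
        entry.lastUpdated = currentFrame;
    }
    return entry.visible;                 // consumers just read the cache
}

// Usage: an armed enemy might get {interval = 3}, a corpse {interval = 50}.
```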
Compare this to a BT where a node “react to dead body” would be checked along with everything else or (with a different design) only after all combat has ceased and the BT falls through the combat nodes to the “react…” one. At that point, the BT is deciding how often to check simply by its design. In the blackboard architecture, the blackboard handles the updates on what the agent knows and the BT handles if and how it reacts to the information.
Chicken or Egg?
Damián talked about how the KR module and the decision module do, indeed, need to be built in concert, since information and decisions are mutually dependent. However, he talked about how the iterative process is inherently a “needs-based” design. That is, he would only write the KR modules necessary to feed the decisions that are actually going to use the information. This is, of course, iterative design at its very core (and how I have always preferred to work anyway). While you might first identify a decision that needs to be coded, you need to then put much of that on hold until you have put together all of the KR implementation that will feed the decision. If you then add future decision processes that use that same blackboard, more power to you. (None of this trumps the idea that you need to plan ahead so that you don’t end up with a mish-mash of stuff.)
As mentioned before, what you put into the KR blackboard is very dependent on the game design. It goes beyond just what knowledge you are storing, however. Damián specifically mentioned that he tries to put as much “smarts” into the KR level as possible. This has the effect of lifting that burden from the decision process, of course.
Are There Exceptions?
Alex asked whether there would be cases where a behavior engine (such as the BT) would directly access something in the game data rather than looking it up in the KR/blackboard level. Damián cautioned that while you could make the case for doing that occasionally, you would really have to have a good reason to do so. Alex’s example was LOS checks which, unfortunately, are also the wrong place to step outside of the blackboard since LOS checks are such a bottleneck. Damián’s emphasis was that these sorts of exceptions step outside the “smarts” of the KR system… in this case, how the KR was spreading out the LOS checks to avoid spikes.
Another example was pathfinding. He said a developer might be tempted to write a behavior that kicks off its own pathfind routine. That’s generally a bad idea for the same bottleneck reasons.
More than Information
I really liked Damián’s exposition on one example of how Halo used more than simple LOS checks. He explained the concept of “visibility” as defined in Halo, where the algorithm that fed the blackboard took into account factors such as distance, the agent’s perceptual abilities, and the amount of fog in the space at any given time. This was so much more than a LOS check. All the behaviors in the BT could then use “visibility” as a decision-making criterion. I haven’t seen a copy of the Halo 3 BT, but I can imagine that there were many different nodes that used visibility as an input. It sure is nice to do all of this (including the expensive LOS checks) one time per n frames and simply store it for later use as needed. Again, this is very similar to what I espouse in my book and to utility-based reasoners in general.
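As a guess at the shape of such a composite value (the inputs come from Damián’s description, but the falloff formula is entirely made up):

```cpp
// Composite "visibility" fed to the blackboard: more than a raw LOS bit.
struct VisibilityInputs {
    bool  hasLineOfSight;   // the expensive raycast result, cached elsewhere
    float distance;         // meters to target
    float perceptionRange;  // this agent's sensory reach
    float fogDensity;       // 0 = clear, 1 = opaque
};

float ComputeVisibility(const VisibilityInputs& in) {
    if (!in.hasLineOfSight || in.distance > in.perceptionRange)
        return 0.0f;
    float falloff = 1.0f - (in.distance / in.perceptionRange);
    return falloff * (1.0f - in.fogDensity);  // 0..1, stored on the blackboard
}
```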
Dividing up the KR
He described something interesting about how you could have a manager that assigns how many LOS checks each AI gets; then, once the AI knows how many it will get, it prioritizes its potential uses and divvies them up according to its own needs. Rather than having one manager field priority requests from all the AIs at once, the first cut would be to simply give each of them a few (which could also involve some interesting prioritization) and then let them decide what to do with the ones they get. I thought that was a very novel way of doing things.
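A rough sketch of that two-tier budgeting (all names and numbers hypothetical): the manager hands out a flat allowance, and each agent sorts its own wish list and spends only what it was given.

```cpp
#include <algorithm>
#include <vector>

struct LosRequest {
    int   targetId;
    float priority;  // how badly this agent wants the check
};

class LosBudgetManager {
public:
    // First cut: split a global raycast budget evenly among agents.
    int AllowancePerAgent(int totalBudget, int agentCount) const {
        return agentCount > 0 ? totalBudget / agentCount : 0;
    }
};

// Each agent sorts its own wish list and spends only its allowance.
std::vector<int> SpendAllowance(std::vector<LosRequest> requests, int allowance) {
    std::sort(requests.begin(), requests.end(),
              [](const LosRequest& a, const LosRequest& b) {
                  return a.priority > b.priority;
              });
    std::vector<int> toCheck;
    for (int i = 0; i < static_cast<int>(requests.size()) && i < allowance; ++i)
        toCheck.push_back(requests[i].targetId);
    return toCheck;
}
```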
What Does it Look Like?
In response to a question about what the blackboard data structure looks like, Damián acknowledged that people think about blackboards in two different ways. One is just a place to scribble shared data of some sort. The other, more formal notion is based on the idea of key/value pairs. He prefers this method because you can do easy logging, etc. For more high-performance stuff (e.g. an FPS), there really isn’t much of a need for key/value pairs, so there may be more efficient methods such as looking up the information in a struct.
He went on to point out that the size and speed trade-off is likely one of the more important considerations. If an agent at any one time may only care about 5-10 pieces of data, why set aside a whole 500-item struct in memory? Also, key/value pairs and hash tables aren’t necessarily more expressive than a hard-coded struct. I would tend to agree with this. So much of what the data says is in what it is associated with (i.e. the other elements of the struct) and the code around it.
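For comparison, here are the two shapes side by side (field names invented): the generic key/value form is flexible and easy to log; the hard-coded struct is smaller, faster, and carries its meaning in the type itself.

```cpp
#include <string>
#include <unordered_map>

// Option 1: formal key/value blackboard -- flexible, loggable, extensible.
using GenericBlackboard = std::unordered_map<std::string, float>;

// Option 2: hard-coded struct -- only what this game actually needs,
// with the semantics baked into the field names and surrounding code.
struct CombatBlackboard {
    float targetVisibility;
    float targetThreat;
    float ammoFraction;
    bool  inCover;
};
```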
In Halo, they were on the hard-coded side of things because there wasn’t too much data that they needed to store and access. In general, the set of knowledge you need to access will tend to stabilize long before the behaviors do.
He also explained the typical genesis of a new KR routine. Often, it happens through refactoring after you find yourself using a particular algorithm in many locations. If this happens, it can often be abstracted into the KR layer. This is the same thing I have found in my own work.
One addition he suggested was extending key/value pairs with a confidence rating in case you wanted to do more probabilistic computations. You could iterate over the information, for example, and apply decay rates. Of course, you could also do that in a hard-coded struct. I was thinking this before he said it. To me, adding the manager to deal with the semantics of key/value/confidence sets might introduce more trouble than it is worth. Why not put together a vector of structs that process the same information? To me, this goes to a point of how you can divide your KR into smaller, specifically-functional chunks.
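Here’s one way that vector-of-structs approach might look (the decay model is a stand-in; the cleanup uses C++20’s std::erase_if):

```cpp
#include <vector>

// A belief with a confidence that decays over time -- no generic
// key/value/confidence manager required.
struct Belief {
    int   targetId;
    float lastKnownX, lastKnownY;  // where we last saw it
    float confidence;              // 1.0 when fresh, decays toward 0
};

void DecayBeliefs(std::vector<Belief>& beliefs, float decayPerSecond, float dt) {
    for (auto& b : beliefs)
        b.confidence -= decayPerSecond * dt;
    // Drop beliefs we no longer trust at all (C++20).
    std::erase_if(beliefs, [](const Belief& b) { return b.confidence <= 0.0f; });
}
```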
Intelligent Blackboards
An interesting question led to something that I feel more at home with. Someone asked about blackboards collecting the info, processing some of it, and writing the results back to the blackboard to be read by the agent and/or other blackboard processors. Damián agreed that in a modular system, multiple small reasoners could certainly be touching the same data store… not only from a read standpoint, but a write one as well. This is very intuitive to me and, in a way, reflects some of the things that I am doing in Airline Traffic Manager (sorry, no details at this time).
Damián confessed that his search demo from the 2009 AI Summit did exactly this. The process that updated the occupancy map was a module hanging off the blackboard. The blackboard itself was solely concerned with grabbing the data of what was seen and unseen. The reasoner processed this data and wrote it back to the blackboard in the form of the probabilistic mapping on those areas. The agent, of course, looked at that mapping and selected its next search location accordingly. (BTW, influence mapping of all kinds is a great use for this method of updating information.)
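The rough shape of that pattern might be as follows (grid details invented): the sensing pass writes one layer of the blackboard, and the reasoner module reads it and writes back the probability map the agent actually consumes.

```cpp
#include <vector>

// Two layers on one blackboard: raw sensing in, reasoned mapping out.
// Both vectors are assumed to be width * height cells.
struct SearchBlackboard {
    int width = 0, height = 0;
    std::vector<bool>  seenThisSweep;      // written by the sensing pass
    std::vector<float> targetProbability;  // written back by the reasoner
};

// The reasoner hangs off the blackboard: read one layer, write another.
void UpdateOccupancy(SearchBlackboard& bb) {
    float remaining = 0.0f;
    for (size_t i = 0; i < bb.targetProbability.size(); ++i) {
        if (bb.seenThisSweep[i])
            bb.targetProbability[i] = 0.0f;  // seen and empty: clear it
        remaining += bb.targetProbability[i];
    }
    if (remaining > 0.0f)  // renormalize over the still-unseen cells
        for (float& p : bb.targetProbability) p /= remaining;
}
```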
Meta-Representation
Damián summed up that the overall goal of the blackboard (and, in my opinion, KR in general) is that of “meta-representation”. Not just that data exists, but what that data really means. What it means is entirely dependent on context. The blackboard simply stores these representations in a contextually significant way so that they can be accessed by agents and tools that need to use and respond to that information.
What This Means to Me
I really find this approach important – and am startled that I started using many of these concepts on my own without knowing what they were. One of the reasons that I very much support this work, however, is that it is key to something that I have unwittingly become a part of.
In his article entitled “Predictions in Retrospect, Trends, Key Moments and Controversies of 2009!”, Alex said the following:
Utility-based architectures describe a whole decision making system that chooses actions based on individual floating-point numbers that indicate value. (At least that’s as close to an official definition as I can come up with.) Utility in itself isn’t new, and you’ll no doubt remember using it as a voting or scoring system for specific problems like threat selection, target selection, etc. What’s new in 2009 is:

1. There’s now an agreed-upon name for this architecture: utility-based, which is much more reflective of how it works. Previous names, such as the “Goal-Based Architectures” that Kevin Dill used, were particularly overloaded already.

2. A group of developers advocate building entire architectures around utility, and not only sprinkling these old-school scoring-systems around your AI as you need them.

The second point is probably the most controversial. That said, there are entire middleware engines, such as motivational graphs, which have been effective in military training programs, and Spir.Ops’ drive-based engine, applied in other virtual simulations. The discussion about applicability to games is worth another article in itself, and the debate will certainly continue into 2010!

It’s obvious to many people that I am one of the members of that “group of developers” who “advocate building entire architectures around utility”. After all, my book dealt significantly with utility modeling. I am also playing the role of the “utility zealot” in a panel at the 2010 GDC AI Summit specifically geared toward helping people decide what architecture is the best for any given job.

While utility modeling has often been used as a way of helping to sculpt and prioritize decisions in other architectures such as FSMs, BTs, or the edge weights in planners, many people (like Alex) are skeptical of building entire reasoner-based architectures out of it. What Damián explained in this interview is a major part of the solution. Much of the utility-based processing can be done in the KR/blackboard level — even through multi-threading — to lighten the load on the decision structure. As more of the reasoning gets moved into the KR by simply prepping the numbers (such as how “visibility” represented a combination of factors), less has to happen as part of the decision structure itself.
Kevin Dill and I will be speaking more about this in our lecture at the AI Summit, Improving AI Decision Modeling Through Utility Theory. I hope to be writing more about it in the future as well. After all, according to Alex, it is one of the emerging trends of 2010!

AIGameDev Members Area

Wednesday, October 1st, 2008

For that thin slice of the industry that may visit this blog and doesn’t already know about AIGameDev’s new members area, you are definitely going to want to jump over there and check out what’s going on. Alex Champandard’s pad has been the best place for game AI info over the past year, and now he is really stepping it up a notch. As of today, he has started a new members area that will not only host a lot of papers and other research material; he has also been lining up a lot of live interviews and workshops with industry experts. Those are conducted online with audio, video and an interactive whiteboard. The few that he has conducted for free so far have been informative. Here’s the schedule of what’s going on for this fall.

Go check out what’s going on over there at the members area launch page. And tell ‘em Dave Mark sent you!

The Challenges of Destructible Cover

Friday, May 9th, 2008

Alex Champandard at AIGameDev has posted a nice video analysis detailing some of the complicating issues surrounding the inclusion of destructible cover in an FPS game. He uses video from a recent trailer for the upcoming Brothers in Arms 2. As always, Alex details things rather well. He offers an off-the-cuff solution without getting terribly technical. I can understand why he can’t “solve” the problem… it is usually something that is very game- and engine-specific. Regardless, it shows the issue itself very well.

This reminds me of a conversation at the AI Game Programmers Dinner at the 2008 GDC. There was a brief exchange where we were talking about points of visibility in the games that were represented in the room. Many games tend to use around 6 points… a rectangle representing shoulders and perhaps thighs, one for the center of the body, and one for the head. Others may add a few more here or there. I asked Christian Gyrling (Naughty Dog) how many they used in “Uncharted: Drake’s Fortune”… his answer? 20. That’s a LOT of ray casts. Admittedly, this was 20 points on the player’s body to determine if the enemy AIs could see him. However, the result is the same… 20 potential raycasts for each active enemy NPC. Ouch. (Welcome to the PS3, I suppose.)

I would like to think that specialized graphics hardware and simply more processing power will make this approach more cost-effective in the near future.

AIGameDev Column: Beyond Single Frame Decisions

Tuesday, March 18th, 2008

It’s Tuesday again and I have just finished my latest column in the Discussion series at AIGameDev.com.

In this week’s column, “Thinking Beyond Single Frame Decisions“, I wanted to provoke some thought about why it is that AI programmers have so often painted themselves into the corner of believing that AI decisions need to be made in a single frame – 20ms or so – even if we have to cut corners on accuracy or depth in order to do so.

As AI programmers, we are forced (or force ourselves) up against the invisible wall of framerates. Our agents must live their lives in 20 millisecond slices – perceiving, pondering, planning and performing must all be arranged in little easily-digestible bites. What’s more, they share their cramped temporal quarters with dozens, scores, or even hundreds of other cohorts – all clamoring for the leftovers that the art department has discarded… and all working under the same 20ms edict. If you can’t decide what to do in 20ms, it isn’t worth doing.

If you can spare the clock cycles, head on over to Alex Champandard’s excellent community, AIGameDev.com. Remember to tap the RSS feed to the discussion column and his many other blog feeds! While you are there, spawn another helper thread and jump into the AI forums as well. (Forum registration is required but is quick and painless. If you are an AI programmer, you’ve dealt with more traumatic experiences than that!)

I have to admit, I am really enjoying writing for Alex and his site. I’m honored that he asked me to be a part of his team.

If you haven’t already done so, make sure you subscribe to IA on AI to keep up with the latest news and notes on game AI!

Top 5 Trends and Predictions for Game AI in 2008

Monday, January 14th, 2008

Another gem from over at what is rapidly becoming our sister-site, AIGameDev.com – this is the result of a discussion that started a few weeks ago amongst the site regulars.

Top 5 Trends and Predictions for Game AI in 2008

Of the top 5, I’m most excited about an increase in sandbox games and emergent behaviors. Really, I see these two as almost interlinked. Sandbox games not only allow emergent behavior to proliferate – they almost require it in order to maintain immersion.

Likewise, interagent cooperation was another of the top 5 on the list. Again, this is something that I see as related to emergent behavior. If you leave your cooperation loosely defined rather than pre-scripted, you will see a lot of emergent behavior as a result.

I hope to get more of a feel for this very topic at the GDC roundtables and lectures next month. That is always a great way to take the pulse of the industry. Anyway, good stuff on the list.

Level Designers trumping AI Programmers

Sunday, January 6th, 2008

I hate glomming on to a blog chain, but I’m going to link to AIGameDev’s article on an article (which may very well be about an article). The title is Watching Level Designers Use Scripts to Disable Your Autonomous AI: Priceless – which just about covers it. Alex does a nice job of not just reporting on it, but explaining the mindset and even the things to watch out for.

Regular readers of my other blog, Post-Play’em, will know that I talked about the idea of scripts overriding AI behaviors in Call of Duty 2 in a post entitled Call of Duty 2: Omniscience and Invulnerability. Specifically, this was in reference to one of the behaviors mentioned in the other article where an AI agent takes on a temporary god-like quality of invulnerability until such time as he finishes a scripted event – at which time he is no longer important to the level designer’s wishes and is cast back into the pot of cannon fodder so that I can mow him down properly.

Getting back to the initial topic, my thought is that part of the issue between artists/level designers and programmers may very well be that the level designers don’t have trust in the capabilities of autonomous AI agents… or even an understanding of what could be done with them.

For example, with the use of goal-based agents such as those found in F.E.A.R. (related post), rather than a designer saying “I want the bot to do A, then B, then C on his way to doing the final action of D,” he could simply tell the goal-based agent that “D is a damn good goal to accomplish.” If constructed properly, the agent would then realize that a perfectly viable way of accomplishing D would be via A-B-C-D. The difference between these two methods is important. If C is no longer a viable (or intelligent-looking) option, then the scripted bot either gets stuck or looks very dumb in still trying to accomplish D through that pre-defined path. The very nature of planning agents, however, would allow the agent to try to find other ways of satisfying D. If one exists, he will find it. If not, perhaps another goal will suffice.
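To show the contrast with a fixed script, here is a toy flavor of that goal-driven idea (a real planner such as F.E.A.R.’s GOAP does a regressive A* search; this greedy forward sketch, with all names invented, is only meant to illustrate the principle): actions declare what they need and what they achieve, and the agent chains whatever sequence reaches the goal.

```cpp
#include <set>
#include <string>
#include <vector>

struct Action {
    std::string name;
    std::set<std::string> needs;  // preconditions
    std::set<std::string> gives;  // effects
};

bool Satisfied(const std::set<std::string>& state,
               const std::set<std::string>& facts) {
    for (const auto& f : facts)
        if (!state.count(f)) return false;
    return true;
}

// Keep applying any runnable action that adds something new until the goal
// fact appears (or we stall, in which case another goal may have to do).
std::vector<std::string> Plan(std::set<std::string> state,
                              const std::vector<Action>& actions,
                              const std::string& goal) {
    std::vector<std::string> plan;
    while (!state.count(goal)) {
        bool progressed = false;
        for (const auto& a : actions) {
            if (Satisfied(state, a.needs) && !Satisfied(state, a.gives)) {
                state.insert(a.gives.begin(), a.gives.end());
                plan.push_back(a.name);
                progressed = true;
                break;
            }
        }
        if (!progressed) return {};  // no viable route to the goal
    }
    return plan;
}
```

If C becomes impossible, the scripted A-B-C-D bot stalls; this kind of agent simply finds whatever other chain still reaches D, or reports that none exists.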

The problem is, while AI programmers understand this concept (especially if you are the one who wrote the planner for that game), level designers, and particularly artists, may not have an intuitive grasp of it. They are cut more from the cloth of writers – “and then this happened, and then this, and then it was really cool when I wrote this next thing because I wanted the agent to look smart, and then this…” That is being a writer – and is why many games continue to be largely linear in nature. You are being pulled through an experience on a string of scripted events. (See related post on Doom 3’s scripting vs. AI)

So, can the problem of designers trumping AI programmers be solved? It will always be there to some extent. But education and communication will certainly help the matter.

Behavior Trees

Friday, December 14th, 2007

Time for a taste of the Lyon, France Game Developers Conference!

Alex Champandard at AIGameDev.com posted part 1 of a presentation he gave on the use of behavior trees in game AI.

Seriously good stuff!

(note: there may be a problem viewing the videos with IE – they work fine in Firefox.)

Temporal Coherence and Planning

Tuesday, December 11th, 2007

Alex at AIGameDev has a great essay up entitled “Memento, Temporal Coherence and Debugging Planners“. In it, he talks about how planning algorithms have the problem of having their assumptions about the world quickly fall out of date as the world changes. One solution is to continually replan from scratch – which can become quite expensive to do for numerous agents.

He offers a couple of solutions – and the comments on the post have turned into a rather interesting discussion on the caveats and possibilities. Check it out!
