IA on AI

Posts Tagged ‘knowledge representation’

Damián Isla Interview on Blackboard Arch.

Thursday, February 11th, 2010
In preparation for the GDC AI Summit (and the inevitable stream of dinner conversations that will be associated with GDC), I have tried to catch up on playing some games and also getting current with papers and interviews. On the latter point, Alex Champandard at AIGameDev keeps me hopping. It seems he is almost constantly interviewing industry people on the latest and greatest stuff that is happening in the game AI realm.
A few weeks back, he interviewed Damián Isla about blackboard architectures and knowledge representation. Seeing as I always learn something from Damián, I figured that interview needed to go to the top of the list. Here are some of my notes as I listen through the interview.
Make it Bigger
Damián’s first point to Alex was that a major benefit of a blackboard architecture was scalability. That is, putting together a rule-based system that performs a single decision is simple… the hard part is when you have 10,000 of those rules going on at the same time.
In a similar vein, he said it was about shared computation. Many of the decisions that are made are used with the same information. Blackboards, to Damian, are an architectural feature that can decouple information gathering and storage from the decisions that are made with that information. If one part of a decision needs to know about a computed value regarding a target, that information could potentially be used by another portion of the decision engine entirely… even for a competing action. By calculating the requisite information once and storing it, the decision algorithms themselves can simply look up what they need.
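To make that decoupling concrete, here is a quick sketch of how I picture it (my own toy code, not anything from Damián's systems — the threat formula and key names are entirely made up): the blackboard computes a value the first time anyone asks, and every later decision routine just reads the cached result.

```python
class Blackboard:
    """Minimal blackboard: compute each entry once, serve cached reads after."""
    def __init__(self):
        self._data = {}
        self.compute_count = 0

    def get(self, key, compute):
        # Compute the value only on the first request; later readers reuse it.
        if key not in self._data:
            self._data[key] = compute()
            self.compute_count += 1
        return self._data[key]


def threat_level(weapon_damage, health, in_cover):
    # Hypothetical threat formula: stronger weapon and healthier enemy = more threat.
    score = weapon_damage * (health / 100.0)
    return score * 0.5 if in_cover else score


bb = Blackboard()
# Two unrelated decision routines need the same value; it is computed only once,
# even though one might be scoring "attack" and the other scoring "flee".
attack_score = bb.get("threat:enemy_1", lambda: threat_level(40, 80, False))
flee_score = bb.get("threat:enemy_1", lambda: threat_level(40, 80, False))
```

Even competing actions end up sharing the same cached number, which is exactly the point.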
This is similar to the approach that I espouse in my book. I don’t directly say it, but in a way it is implied. With my approach of compartmentalizing tiers of calculations, many of the individual layers of information are processed independently of the final decision. The information needs to be collected and tabulated only once, however. The various decision models can simply look up what they need. In a multi-agent system, different agents can use much of the same information as well. While distance to a target may be personal, the threat level of a target (based on weaponry, health, cover, etc.) may be the same for all the enemies. That threat level can be calculated once and saved for everyone to use.
Back to Damián…
He mentioned that at the Media Lab at MIT they used blackboards for many things like logging, testing, and debugging. That is something I hadn’t thought of. They also had observer systems running over a network that shared parts of the blackboard info, so the fully running games were doing everything but the thinking.
Alex reiterated that a blackboard is more of an architectural feature rather than a decision process. Damián confirmed that the history of blackboards involved planners but that we are now using them inside reactive systems as well.
Blackboards vs. Lots of Static Variables
At Alex’s prompting, Damián suggested that the blackboard is far more dynamic than having just many billions of values specified in a character.h file. In fact, he likened it much more to having one unified interface to all of your game data beyond just that of the character in question.
Do all agents need their own unique blackboard?
I like the fact that Damián’s initial answer was a refrain that I repeated throughout my book… “it totally depends on the game you’re making.” Unfortunately, that is a major stumbling block to answering any architectural or procedural question.
He went on to say something similar to what I mention above… that you have to mix them. There are some pieces of information that individuals track and others that are available to the group as a whole. Visibility of targets, for example, is individual. Goals for a squad, on the other hand, are something that can be shared and referenced.
Direct Query vs. Blackboard?
The most important point Damián made here had to do with “redundancy”. If you have a situation that you can guarantee you only need something once, then accessing it directly from a behavior is fine. If multiple behaviors might use the same data, access it once and store it on the blackboard.
The answer to avoiding the redundancy issue was “abstraction”. That’s what a blackboard represents. It gives that intermediate layer of aggregation and storage. He actually referred to it as “sub-contracting” the information gathering out to the blackboard system. The difference is that the blackboard isn’t simply passing on the request for information, it is actually storing the information as a data cache so that it doesn’t need to be re-accessed.
One very important point that he made was that there was some intelligence to the blackboard in deciding how often to ask for updates from the world. This is a huge advantage in that the process of information gathering for decisions can be one of the major bottlenecks in the AI process. LOS checks, for example, are horribly time-consuming. If your system must ask for all the LOS checks and other information every time a BT is run (or multiple times in the same BT), there can be a significant drain on the system. However, if you are able to time-slice the information gathering in the background, the only thing the BT needs to do is access what is on the blackboard at that moment.
Incidentally, this is a wonderful way of implementing threading in your games. If your information gathering can continue on its own timeline and the behaviors need only grab the information that is current on the blackboard, those two processes can be on separate threads with only the occasional lock as the blackboard is being updated with new info.
This goes back to my interview with Alex about a year ago where he asked about the scalability of the techniques I write about in my book. My point was that you don’t have to access all the information every time. By separating the two processes out with this abstraction layer as the hand-off point, it keeps the actual decision system from getting bogged down.
In order to also help facilitate this, Damián spoke of the way that the information gathering can be prioritized. Using LOS checks as his example, he talked about how the Halo engine would update LOS checks on active, armed, enemies every 2 or 3 frames, but update LOS checks on “interesting but not urgent” things like dead bodies every 50 frames. Sure, it is nice for the AI to react to coming around a corner and seeing the body, but we don’t need to constantly check for it.
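Here's a rough sketch of what that prioritized time-slicing could look like (my own illustration, with made-up intervals and key names — I have no idea how the Halo engine actually structures this): each blackboard entry carries its own update interval, so the expensive checks for urgent things run often while the "interesting but not urgent" ones run rarely.

```python
class ScheduledBlackboard:
    """Updates each entry on its own frame interval, spreading expensive
    checks (e.g. LOS raycasts) across frames instead of running them all
    every tick. Behaviors only ever read the latest cached value."""
    def __init__(self):
        self._entries = {}  # key -> (interval_in_frames, update_fn)
        self._values = {}
        self._frame = 0

    def register(self, key, interval, update_fn):
        self._entries[key] = (interval, update_fn)
        self._values[key] = update_fn()  # seed an initial value

    def tick(self):
        self._frame += 1
        for key, (interval, update_fn) in self._entries.items():
            if self._frame % interval == 0:
                self._values[key] = update_fn()

    def read(self, key):
        return self._values[key]


calls = {"enemy": 0, "body": 0}

def los_to_enemy():      # stand-in for an expensive raycast
    calls["enemy"] += 1
    return True

def los_to_dead_body():  # same cost, far lower urgency
    calls["body"] += 1
    return False

bb = ScheduledBlackboard()
bb.register("los:armed_enemy", 3, los_to_enemy)    # urgent: every 3 frames
bb.register("los:dead_body", 50, los_to_dead_body) # interesting, not urgent
for _ in range(150):
    bb.tick()
```

Over 150 frames the enemy check runs about 17 times as often as the dead-body check, yet from the BT's point of view both are just instant blackboard reads.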
Compare this to a BT where a node “react to dead body” would be checked along with everything else or (with a different design) only after all combat has ceased and the BT falls through the combat nodes to the “react…” one. At that point, the BT is deciding how often to check simply by its design. In the blackboard architecture, the blackboard handles the updates on what the agent knows and the BT handles if and how it reacts to the information.
Chicken or Egg?
Damián talked about how the KR module and the decision module do, indeed, need to be built in concert since information and decisions are mutually dependent. However, he talked about how the iterative process is inherently a “needs-based” design. That is, he would only write the KR modules necessary to feed the decisions that you are going to be using the information for. This is, of course, iterative design at its very core (and how I have always preferred to work anyway). While you might first identify a decision that needs to be coded, you then need to put much of that on hold until you have put together all of the KR implementation that will feed the decision. If you then add future decision processes that use that same blackboard, more power to you. (None of this trumps the idea that you need to plan ahead so that you don’t end up with a mish-mash of stuff.)
As mentioned before, what you put into the KR blackboard is very dependent on the game design. It goes beyond just what knowledge you are storing, however. Damián specifically mentioned that he tries to put as much “smarts” into the KR level as possible. This has the effect of lifting that burden from the decision process, of course.
Are There Exceptions?
Alex asked the question if there would be cases that a behavior engine (such as the BT) would directly access something in the game data rather than looking it up in the KR/blackboard level. Damián cautioned that while you could make the case for doing that occasionally, you would really have to have a good reason to do so. Alex’s example was LOS checks which, unfortunately, is also the wrong time to step outside of the blackboard since LOS checks are such a bottleneck. Damián’s emphasis was that these sorts of exceptions step outside the “smarts” of the KR system… in this case how the KR was spreading out the LOS checks to avoid spikes.
Another example was pathfinding. He said a developer might be tempted to write a behavior that kicks off its own pathfind routine. That’s generally a bad idea for the same bottleneck reasons.
More than Information
I really liked Damián’s exposition on one example of how Halo used more than simple LOS checks. He explained the concept of “visibility” as defined in Halo where the algorithm that fed the blackboard took into account ideas such as distance, the agent’s perceptual abilities, the amount of fog in the space at any time. This was so much more than a LOS check. All the behaviors in the BT then could use “visibility” as a decision-making criteria. I haven’t seen a copy of the Halo 3 BT, but I can imagine that there were many different nodes that used visibility as an input. It sure is nice to do all of this (including the expensive LOS checks) one time per n frames and simply store it for later use as needed. Again, this is very similar to what I espouse in my book and in utility-based reasoners in general.
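Just to illustrate the shape of such a composite value, here is a toy "visibility" function of my own invention (the factors match the ones Damián mentioned, but the formula, ranges, and names are pure guesswork on my part): LOS gates the result, and range, senses, and fog attenuate it into a single 0-to-1 number the behaviors can compare against thresholds.

```python
def visibility(has_line_of_sight, distance, max_range, perception, fog_density):
    """Hypothetical composite visibility in [0, 1]: the expensive LOS result
    gates everything, then distance, the agent's perceptual ability, and
    fog scale it down. Computed once, stored, read by many behaviors."""
    if not has_line_of_sight or distance >= max_range:
        return 0.0
    range_factor = 1.0 - (distance / max_range)  # closer = more visible
    fog_factor = 1.0 - fog_density               # fog_density in [0, 1]
    return range_factor * perception * fog_factor


# A behavior never redoes this math; it just compares the cached number.
v = visibility(True, distance=25.0, max_range=100.0, perception=0.8,
               fog_density=0.5)
can_engage = v > 0.2
```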
Dividing up the KR
He described something interesting about how you could have a manager that assigns how many LOS checks each AI gets and then, once the AI knows how many it will get, the AI then prioritizes its potential uses and divvies them up according to its own needs. Rather than having one manager get requests of priorities from all the AIs at once, the first cut would be to simply give each of them a few (which could also involve some interesting prioritization) and then let them decide what to do with the ones they get. I thought that was a very novel way of doing things.
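A sketch of that two-stage idea, as I understood it (again my own toy code with invented numbers and names, not how Bungie actually did it): a manager deals out a raycast budget, and each agent spends its share on its own highest-priority targets.

```python
def allocate_los_budget(total_checks, agent_ids):
    """Manager's first cut: deal each agent an equal share of the frame's
    LOS budget; any remainder goes to the agents at the front of the list."""
    base, extra = divmod(total_checks, len(agent_ids))
    return {aid: base + (1 if i < extra else 0)
            for i, aid in enumerate(agent_ids)}


def spend_budget(budget, candidates):
    """Agent's second cut: spend its checks on its highest-priority targets.
    Candidates are (name, priority) pairs the agent ranks itself."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    return [name for name, _priority in ranked[:budget]]


shares = allocate_los_budget(10, ["a", "b", "c"])
chosen = spend_budget(shares["a"], [("enemy", 0.9), ("body", 0.1),
                                    ("ally", 0.5), ("door", 0.3),
                                    ("crate", 0.2)])
```

The manager never has to hear about every agent's internal priorities, which is what makes the division of labor appealing.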
What Does it Look Like?
In response to a question about what the blackboard data structure looks like, Damián acknowledged that people think about blackboards in two different ways. One is just based on a place to scribble shared data of some sort. The other, more formal notion is based on the idea of key/value pairs. He prefers this method because you can do easy logging, etc. For more high-performance stuff (e.g. FPS) there really isn’t much of a need for key/value pairs so there may be more efficient methods such as looking up the information in a struct.
He went on to point out that the size and speed trade-off is likely one of the more important considerations. If an agent at any one time may only care about 5-10 pieces of data, why set aside a whole 500-item struct in memory? Also, key/value pairs and hash tables aren’t necessarily more expressive than a hard-coded struct. I would tend to agree with this. So much of what the data says is in what it is associated with (i.e. the other elements of the struct) and the code around it.
In Halo, they were on the hard-coded side of things because there wasn’t too much data that they needed to store and access. In general, the KR of what you need to access will tend to stabilize long before the behavior.
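The two flavors side by side, as a quick sketch (field names are my own inventions): a generic key/value store that is trivially loggable and extensible, versus a fixed record whose fields are known up front, which is the "hard-coded side of things" that suits the high-performance case.

```python
from dataclasses import dataclass

# Flavor 1: generic key/value blackboard. Flexible, easy to dump to a log,
# easy to extend at runtime -- the formal notion Damián prefers for tooling.
kv_blackboard = {}
kv_blackboard["target_visible"] = True
kv_blackboard["target_distance"] = 42.0

# Flavor 2: hard-coded "struct". Fixed, typed fields known at compile/design
# time; cheaper lookups and self-documenting via the surrounding code.
@dataclass
class CombatKnowledge:
    target_visible: bool = False
    target_distance: float = float("inf")

struct_blackboard = CombatKnowledge(target_visible=True, target_distance=42.0)
```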
He also explained the typical genesis of a new KR routine. Often, it happens through refactoring after you find yourself using a particular algorithm in many locations. If this happens, it can often be abstracted into the KR layer. This is the same thing I have found in my own work.
One caveat he added was extending key/value pairs with a confidence rating in case you wanted to do more probabilistic computations. You could iterate over the information, for example, and apply decay rates. Of course, you could also do that in a hard-coded struct. I was thinking this before he said it. To me, adding the manager to deal with the semantics of key/value/confidence sets might introduce more trouble than it is worth. Why not put together a vector of structs that process the same information? To me, this goes to a point of how you can divide your KR into smaller, specifically-functional chunks.
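For what it's worth, the decay idea is simple to sketch either way; here is a key/value/confidence version of my own devising (decay rate, threshold, and fact names are all invented for illustration) where stale facts fade and eventually get forgotten.

```python
def decay_facts(facts, dt, rate=0.1, threshold=0.2):
    """Apply linear confidence decay to (value, confidence) facts, and
    forget any fact the agent no longer believes strongly enough."""
    out = {}
    for key, (value, conf) in facts.items():
        conf = max(0.0, conf - rate * dt)
        if conf >= threshold:
            out[key] = (value, conf)   # still believed; keep with lower confidence
    return out


facts = {
    "last_seen_pos": ((10, 4), 1.0),   # fresh sighting
    "heard_footsteps": (True, 0.25),   # stale sound cue, about to fade out
}
facts = decay_facts(facts, dt=2.0)
```

The same loop would work over a vector of structs, per my point above; the iteration-and-decay logic doesn't care how the facts are stored.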
Intelligent Blackboards
An interesting question led to something that I feel more at home with. Someone asked about blackboards collecting the info, processing some info, and writing that info back to the blackboard to be read by the agent and/or other blackboard processors. Damián agreed that a modular system where multiple small reasoners could certainly be touching the same data store… not only from a read standpoint, but a write one as well. This is very intuitive to me and, in a way, echoes some of the things that I am doing in Airline Traffic Manager (sorry, no details at this time).
Damián confessed that his search demo from the 2009 AI Summit did exactly this. The process that updated the occupancy map was a module hanging off the blackboard. The blackboard itself was solely concerned with grabbing the data of what was seen and unseen. The reasoner processed this data and wrote it back to the blackboard in the form of the probabilistic mapping on those areas. The agent, of course, looked at that mapping and selected its next search location accordingly. (BTW, influence mapping of all kinds is a great use for this method of updating information.)
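A stripped-down sketch of that read-process-write loop (my own toy occupancy map, not Damián's demo code — cell names and the uniform prior are invented): the reasoner module reads the raw "seen empty" observations off the blackboard, zeroes those cells, renormalizes the probabilities, and writes the map back for the agent to consume.

```python
def update_occupancy(blackboard):
    """Reasoner module hanging off the blackboard: reads which cells were
    just observed empty, renormalizes the target-probability map, and
    writes the result back for agents (or other modules) to read."""
    prob = dict(blackboard["occupancy"])
    for cell in blackboard["seen_empty"]:
        prob[cell] = 0.0                    # target definitely not here
    total = sum(prob.values())
    if total > 0:
        prob = {c: p / total for c, p in prob.items()}
    blackboard["occupancy"] = prob          # write back to the shared store


bb = {
    "occupancy": {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25},  # uniform prior
    "seen_empty": ["A", "B"],               # agent just looked at A and B
}
update_occupancy(bb)
# The agent just reads the processed map and heads for the likeliest cell.
best = max(bb["occupancy"], key=bb["occupancy"].get)
```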
Damián summed up that the overall goal of the blackboard (and, in my opinion KR in general) is that of “meta-representation”. Not that data exists but what that data really means. What it means is entirely dependent on context. The blackboard simply stores these representations in a contextually significant way so that they can be accessed by agents and tools that need to use and respond to that information.
What This Means to Me
I really find this approach important – and am startled that I started using many of these concepts on my own without knowing what they were. One of the reasons that I very much support this work, however, is because it is key to something that I have unwittingly become a part of.
In his article entitled Predictions in Retrospect, Trends, Key Moments and Controversies of 2009! Alex said the following:

Utility-based architectures describe a whole decision making system that chooses actions based on individual floating-point numbers that indicate value. (At least that’s as close to an official definition I can come up with.) Utility in itself isn’t new, and you’ll no doubt remember using it as a voting or scoring system for specific problems like threat selection, target selection, etc. What’s new in 2009 is:

1. There’s now an agreed-upon name for this architecture: utility-based, which is much more reflective of how it works. Previous names, such as the “Goal-Based Architectures” that Kevin Dill used, were particularly overloaded already.

2. A group of developers advocate building entire architectures around utility, and not only sprinkling these old-school scoring-systems around your AI as you need them.

The second point is probably the most controversial. That said, there are entire middleware engines, such as motivational graphs which have been effective in military training programs, and Spir.Ops’ drive-based engine applied in other virtual simulations. The discussion about applicability to games is worth another article in itself, and the debate will certainly continue into 2010!

It’s obvious to many people that I am one of the members of that “group of developers” who “advocate building entire architectures around utility”. After all, my book dealt significantly with utility modeling. I am also playing the role of the “utility zealot” in a panel at the 2010 GDC AI Summit specifically geared toward helping people decide what architecture is the best for any given job.

While utility modeling has often been used as a way of helping to sculpt and prioritize decisions in other architectures such as FSMs, BTs, or the edge weights in planners, many people (like Alex) are skeptical of building entire reasoner-based architectures out of them. What Damián explained in this interview is a major part of the solution. Much of the utility-based processing can be done in the KR/blackboard level — even through multi-threading — to lighten the load on the decision structure. As more of the reasoning gets moved into the KR by simply prepping the numbers (such as how “visibility” represented a combination of factors), less has to happen as part of the decision structure itself.
Kevin Dill and I will be speaking more about this in our lecture at the AI Summit, Improving AI Decision Modeling Through Utility Theory. I hope to be writing more about it in the future as well. After all, according to Alex, it is one of the emerging trends of 2010!

Damian on Halo at the Develop Conference

Sunday, August 10th, 2008

Damian Isla of Bungie spoke at the recent Develop conference in the UK. He covered a lot of the history of Halo and some of the design decisions that were made in the franchise. Here’s a story from Gamasutra that covers a lot of good stuff.

Specifically, there’s a couple of things I want to touch on.

Halo’s designers wanted the title’s gameplay to explore mankind’s “primal games” such as hide and seek, tag, and king of the hill, and the game’s encounters were created with them in mind.

“It’s evolution that taught us these primal games,” said Isla. “They’re the ones that are played with our reptilian brains. The idea was for the AI [to] play them back with you.”

That’s kind of interesting from a design standpoint. I guarantee that no one is sitting there thinking “hey, this is like King of the Hill” but they all recognize the concept on a subconscious level.

Isla pointed out that the importance of territory in Halo’s encounter design is closely connected to the recharging shield mechanic that has appeared since the original game.

“Part of that recipe demands that at some point you have a safe zone,” he explained. “In a sense we needed to make the AI territorial. Once you have this idea, you have to think about the problem of encounter progression as the player expands their safe zone. That itself is a pretty fun process. It gives the player a sense of progress, and is extremely plannable.”

This makes a heckuva lot more sense than the “arena + safe corridor + arena…” model. What Halo did was break it up theoretically rather than physically (i.e. with walls). However, there still was the knowledge that the dudes – while still in their territory – were still going to try to take pot shots at you. You could take cover and they weren’t necessarily going to come get you, but it wasn’t completely safe.

Isla made special mention of AI misperception — “the most interesting form” of good AI mistakes. If the player moves stealthily, the AI will assume the player is still sitting where the AI last knew him to be.
“Each AI has an internal model of each target, and that model can be wrong,” Isla summarized. “This allows the AI to be surprised by you, and this is very fun.”

Amen, brother! This is something that I love seeing. I remember reading some of Damian’s papers in the AI Wisdom series on exactly this concept of unknown location and search. Good stuff, man!

Still, Isla stressed, enemies shouldn’t be dumb. “It’s more fun to kill an enemy that’s cunning, formidable, and tactical,” he said, pointing out that that goal is not just an AI problem but also related to convincing animation and game fiction.

Dude… have I told you I loved you? I’m so sick of the mantra of “AI shouldn’t be smart, it should be fun!” As if those two are mutually exclusive of each other.

“In Halo 2, if an AI tips over his vehicle, he walks off and forgets completely he was ever in one,” said Isla. “In Halo 3, if he tips it, he remains in its vicinity fighting until there is a point where he can right it again.”

According to Isla, the latter approach is “the way things should be going” — as he puts it, “behavior should be a very thin layer on top of a world of concepts.”

I would argue that behavior is more than a thin layer. Otherwise, I agree. Which really brings the concept of knowledge representation to the forefront. Not just world representation (e.g. geometry), but a general concept of how agents perceive and conceptualize things (i.e. psychology). Again, I’ve read some of Damian’s papers on the subject. To me, he is someone who “gets it”.


Content 2002-2018 by Intrinsic Algorithm L.L.C.