Posts Tagged ‘utility modeling’

Getting More Behavior out of Numbers (GDMag Article)

Friday, December 2nd, 2011

(This column originally appeared in the January 2011 edition of Game Developer Magazine.)

We have long been used to numbers in games. Thousands of years ago, when people first started playing games, the simple act of keeping score was dealt with in terms of numbers. Even before games, when people had to simply scrape by for food, numbers were an integral part of life. From how many rabbits the hunter killed to how many sling stones he had left in his pouch, numbers have been a part of competitive activity for all of history.

“He’s only mostly dead.”

Coming back to the present, numbers are everywhere for the game player. Some are concrete values that have analogs in real life: How much ammo do we have? How many resources will this take to build? What is the range of that weapon? Some are a little more nebulous—if not contrived altogether: What level am I? What condition are my gloves in? What armor bonus does this magic ring afford? And, although the medical community might be a bit startled by the simplicity, games often parade a single number in front of us to tell us how much “life” we have. (“He’s only mostly dead.”) Suffice it to say that we have educated our gaming clientele to forgive this intentional shredding of the coveted suspension of disbelief. Even though some games have attempted to obscure this relationship by removing some of the observable references to this numeric fixation, gamers recognize that the numbers are still there, churning away behind the Wizard’s curtain.

Numbers in Games

As programmers, our fixation with numbers is not coincidental. After all, along with logic, mathematics is the language of our medium. Computers excel in their capacity to crunch all these numbers. They’re in their element, so to speak. This capacity is a primary reason that the pen-and-paper RPGs of the 70s and 80s so energetically made the transition to the computer RPGs of the 80s and beyond. Resolving combat in Ultima (1980) and Wizardry (1981) was far swifter than shuffling through charts, tables, and scribbles on scratch paper in D&D.

So numbers in games aren’t going away any time soon—whether in overt use or under the hood. The interesting, and sometimes even disconcerting, thing is that they aren’t used more often. Even with all of the advances and slick architectures that artificial intelligence programmers use, we too often fall back on the simplest of logical constructs to build our decision code. The most obvious example is the venerable if/then statement. Testing for the existence of a criterion is one of the core building blocks of computational logic. “If I see the player, attack the player.” Certainly, this sort of binary simplicity has its place. Left to itself, however, it can fall woefully short of even being adequate.

The answer lies not in the existence of numbers in our world, but what those numbers represent and, ultimately, how we use them.

We could extend the above example by putting a number in the equation such as “If the player is < 30 [units of distance] away from me, attack him.” But really what have we gained? We are still testing for the existence of a criterion – albeit one that is defined elsewhere in the statement or in the program. After all, “see the player” and “player < 30” are simply functions. There is still a razor edge separating the two potential states of “idle” and “attack”. All subtlety is lost.

So how might we do things differently? The answer lies not in the existence of numbers in our world, but what those numbers represent and, ultimately, how we use them.

Looking Inward

Stop for a moment and do a self-inventory. Right now, as you sit reading this column, are you hungry? Sure, for the sake of simplicity, you may answer “yes” or “no”. However, there is usually more to it than that. When my daughter was younger, she tended to stop being “hungry” the moment she was no longer empty. (This usually meant that she ate two or three bites only to come back wanting more about 20 minutes later.) I, on the other hand, could easily see myself defining “hungry” as “no longer full”. My wife has a penchant for answering somewhat cryptically, “I could be.” (This is usually less convenient than it sounds.)

All of this makes sense to us on an intuitive level. “Hunger” is a continuum. We don’t just transition from “completely full” to “famished”—it is a gradient descent. What we do with that information may change, however, depending on where we are in that scale. For example, we can make judgment decisions such as “I’m not hungry enough to bother eating right now… I want to finish writing this column.” We can also make comparative judgments such as “I’m a little hungry, but not as much as I am tired.” We can even go so far as use this information to make estimates on our future state: “I’m only a little hungry now, but if I don’t eat before I get on this flight, my abdomen will implode somewhere over Wyoming. Maybe I better grab a snack while I can.”

The subtlety of the differences in value seems to be lost on game characters.

Compare this to how the AI for game characters is often written. The subtlety of the differences in value seems to be lost on them. Soldiers may only reload when they are completely out of ammunition in their gun despite being in the proverbial calm before the storm. Sidekicks may elect to waste a large, valuable health kit on something that amounts to a cosmetically unfortunate skin abrasion. The coded rules that would guide the behaviors above are easy for us to infer:

if (MyAmmo == 0)
{
    Reload();
}

if (MyHealth < 100)
{
    UseHealthKit();
}

Certainly we could have changed the threshold for reloading to MyAmmo <= 5, but that only kicks the can down the road a bit. We could just as easily have found our agent in a situation where he had 6 remaining bullets and, to co-opt a movie title, all was quiet on the western front. Dude… seriously—you’re not doing anything else right now, might as well shove some bullets into that gun. However, an agent built to only pay homage to the Single Guiding Rule of Reloading would stubbornly wait until he had 5 before reaching for his ammo belt.

Additionally, there are other times when a rule like the above could backfire (so to speak) with the agent reloading too soon. If you are faced with one final enemy who needs one final shot to be dispatched, you don’t automatically reach for your reload when you have 5 bullets. You finish off your aggressor so as to get some peace and quiet for a change.

Very rarely do we humans make a decision based solely on a single criterion.

Needless to say, these are extraordinarily simplistic examples, and yet most of us have seen behavior similar to this in games. The fault doesn’t rest in a lack of information—as we discussed, often the information that we need is already in the game engine. The problem is that AI developers don’t leverage this information in creative ways that are more indicative of the way real people make decisions. Very rarely do we humans make a decision based solely on a single criterion. As reasonable facsimiles of the hypothetical Homo economicus, we are wired to compare and contrast the inputs from our environment in a complex dance of multivariate assessments leading us to conclusions and, ultimately, to the decisions we make. The trick, then, is to endow our AI creations with the ability to make these comparisons of relative merit on their own.

Leveling the Field

So how do we do this? The first step is to homogenize our data in such a way as to make comparisons not only possible, but simple. Even when dealing with concrete numbers, it is difficult to align disparate scales.

Consider a sample agent that may have maximums of 138 health points and 40 shots in a fully-loaded gun. If, at a particular moment, he had 51 health and 23 bullets in the gun, we wouldn’t necessarily know at first glance which of the two conditions is more dire. Most of us would instinctively convert this information to a percentage—even at a simple level. E.g. “He has less than half health but more than half ammo.” Therein lies our first solution… normalization of data.

My gentle readers should be familiar with the term normalization in principle, if not the exact usage in this case. Put simply, it is restating data as a percentage of its maximum—a value from 0 to 1. In the case above, our agent’s health was 0.369 and his ammo status 0.575. Not only does viewing the data this way allow for more direct comparisons—e.g. 0.575 > 0.369—but it has the built-in flexibility to handle changing conditions. If our agent levels up, for example, and now has 147 health points, we do not have to take this change into consideration in our comparison formula. Our 51 health above is now 0.347 (51 ÷ 147) rather than 0.369. We have detached the comparison code from any tweaking we do with the actual construction of the values themselves.
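To make that concrete, here is a minimal sketch of the normalization step in code. The struct, field, and function names are mine, purely for illustration:

// A minimal sketch of the normalization described above. All names are illustrative.
struct AgentStats
{
    float Health,  MaxHealth;   // e.g. 51 of 138
    float Ammo,    MaxAmmo;     // e.g. 23 of a 40-round magazine
};

float Normalize(float value, float maxValue)
{
    // Restate a raw value as a percentage of its maximum (0.0 to 1.0).
    return (maxValue > 0.0f) ? (value / maxValue) : 0.0f;
}

Normalize(51, 138) gives roughly 0.369 and Normalize(23, 40) gives 0.575, so the two statuses can be compared directly. If the agent later has a maximum of 147 health, only MaxHealth changes; the comparison code itself is untouched.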

But What Does It Mean?

Value expresses a concrete number. Utility expresses a concept.

Normalization, however, only sets the stage for the actual fun stuff. Simply comparing percentages between statuses like “health” and “ammo” usually isn’t sufficient to determine the relative importance of their values. For example, I posit that being at 1% health is measurably more urgent than being down to 1% ammo. Enter the concept of utility.

Utility is generally a different measure than simply value. Value, such as our normalized examples above, expresses a concrete number. Utility, on the other hand, expresses a concept. In this case, the concept we are concerned with is “urgency”. While it is related to the concrete values of health and ammo, urgency is its own animal.

“What does this value mean to me?”

The easiest way of doing this is by creating a “response curve”. Think of passing the actual numbers through a filter of “what does this value mean to me?” That is what converting value to utility is like. This filter is usually some sort of formula that we use to massage the raw data. Unlike a lookup table of ranges (such as “ammo ≤ 5”), we have the benefit of continuous conversion of data. We will see how this benefits us later.

The selection of the formula needs to take into consideration the specific contour of the translation from value to utility. There are innumerable functions that we can use, but they are all built out of a few simple building blocks. Each of these blocks can be stretched and squished on its own, and combining them results in myriad variations.

The first filter that we can run our numbers through is simply a linear conversion. (For these examples, I will simply use the standard x and y axes. I’ll occasionally throw in an example of what they could represent.) Consider the formula:

y = 1.25x − 0.25

This results in a line that runs from our co-maximum point of (1.0, 1.0) down to y = 0 at x = 0.2. (See Figure 1.) Put another way, we get a steady descent in our utility (y) at a somewhat quicker rate than the descent of the actual value (x). We could have done something similar by changing the formula to:

y = 0.8x

At this point, the line extends from (1.0, 0.8) to (0, 0).

Figure 1

Obviously changing the slope of the line—in this case, 0.8—would change the rate that the utility changes along with the value (x). If we were to change it to 1.2, for example, the rate of descent would increase significantly. (See Figure 2.)

y = 1.2x − 0.2

It’s worth noting here that these formulas are best served by being combined with a clamping function that ensures that 0.0 ≤ y ≤ 1.0. When we take that into consideration, we have another feature to identify here: when x < 0.2, y is always equal to 0.

On the other hand, consider the similar formula:

y = 1.2x

This exhibits the same effect with the exception that now the “no effect” zone is when x > 0.8. That is, the utility doesn’t start changing until our value < 0.8.

These effects are useful for expressing situations where we simply do not care about changes in the value at that point; the utility stays flat.

Figure 2
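As a rough sketch of how these linear curves might be implemented (the function name, parameters, and use of a clamping call are my own choices rather than a prescribed form):

#include <algorithm>   // std::clamp (C++17)

// A clamped linear response curve: utility = slope * value + intercept,
// held to the 0.0 to 1.0 range as discussed above.
float LinearUtility(float value, float slope, float intercept)
{
    return std::clamp(slope * value + intercept, 0.0f, 1.0f);
}

LinearUtility(x, 1.25f, -0.25f) and LinearUtility(x, 0.8f, 0.0f) reproduce the two lines of Figure 1, while LinearUtility(x, 1.2f, -0.2f) and LinearUtility(x, 1.2f, 0.0f) give the clamped variants of Figure 2.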

Enter the Exponent
The simple formulas above merely set the stage for more advanced manipulations. For example, imagine a scenario where the meaning of something starts out as “no big deal”, yet becomes important at an increasing rate. The state of the ammo in our gun that we illustrated above makes an excellent example. In this case, the value is simply the number of shots remaining whereas the utility value is our urgency to reload.


Analyzing this outside the realm of the math—that is, how would we behave—gives us clues as to how we should approach this. Imagine that our gun is full (let’s assume 40 shots for convenience)… and we fire a single shot. All other things being equal, we aren’t likely to get too twitchy about reloading. However, firing the last shot in our gun is pretty alarming. After all, even having 1 shot left was a heckuva lot better than having none at all. At this point, it is helpful to start from those two endpoints and move toward the center. How would we feel about having 35 shots compared to 5? 30 compared to 10? Eventually, we will start to see that we only really become concerned with reloading when our ammo drops to around 20 shots—and below that, things get urgent very quickly!

In a simple manner, this can be represented by the following formula:

y = (x − 1)²

As we use up the ammo in our gun (x), there is still an increase in the utility of reloading, but the rate that the utility increases is accelerating. (See Figure 3.) This is even more apparent when we change the exponent to higher values such as 3 or 4. This serves to deepen the curve significantly. Note that a version of the formula with odd exponents would require an absolute value function so as to avoid negative values.

Figure 3
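A sketch of that reload-urgency curve, assuming x is the normalized ammo remaining (1.0 is a full magazine, 0.0 is empty); the function name and the exponent parameter are mine:

#include <cmath>   // std::pow, std::fabs

// Accelerating urgency: y = (x - 1)^2 when exponent is 2, matching Figure 3.
// Higher exponents deepen the curve; fabs() guards odd exponents, as noted above.
float ReloadUrgency(float normalizedAmmo, float exponent)
{
    return std::pow(std::fabs(normalizedAmmo - 1.0f), exponent);
}

With an exponent of 2, firing one shot from a 40-round gun (x = 0.975) yields an urgency of about 0.0006, half a magazine yields 0.25, and one shot remaining yields about 0.95.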

Another quick note about manipulating these formulas. We could turn the above curves “upside down” by arranging it as follows:

y = 1 − x²

Looking at the chart (Figure 4) shows that this version provides a significantly different behavior—an agent who has a very low tolerance for having an empty gun, for example!

Figure 4

By manipulating how the function is arranged, we can achieve many different arrangements to suit our needs. We can shift the function on either axis much as we did the linear equations above, for example. (See Figure 5.)

Figure 5

We can specify where we want the maximum utility to occur—it doesn’t have to be at either end of the scale. For example, we might want to express a utility for the optimal distance to be away from an enemy based on our weapon choice. (See Figure 6.)

y = (1 − |x − 0.3|)²

Figure 6
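A sketch of one such peaked curve, with the location of the peak exposed as a parameter (the function name and signature are mine, purely illustrative):

#include <cmath>   // std::fabs

// Utility peaks at 'peak' (e.g. an ideal normalized engagement distance of 0.3)
// and falls away on either side. Assumes value and peak are both in the 0.0 to 1.0 range.
float PeakedUtility(float value, float peak)
{
    float base = 1.0f - std::fabs(value - peak);
    return base * base;
}

PeakedUtility(distance, 0.3f) returns 1.0 at a normalized distance of 0.3, about 0.49 at point-blank range, and about 0.09 at maximum distance.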

Soft Thresholds
While we can certainly get a lot of mileage out of simple linear and exponential equations, one final class of formulas is very useful. Sigmoid functions, particularly the logistic function, can be used to define “soft thresholds” between values. In fact, logistic functions are often used as activation functions in neural networks. Their use here, however, is much less esoteric.

The base logistic function is:

y = 1 / (1 + e^(−x))

While the base of the natural logarithm, e, is conspicuous in the denominator of the fraction, it is really optional. We can certainly use the approximation 2.718 in that space, or 2, 3, or any other number. In fact, by changing the value we use in place of e, we can achieve a variety of different slopes in the center portion of the resulting curve. As stated, however, the formula graphs out as shown in Figure 7.

Figure 7

Notice that, unfortunately, this curve does not naturally fit the 0–1 window we have been using for our other examples. Its domain is infinite, and it only asymptotically approaches y = 0 and y = 1 at the extremes. We can apply some scaling and shifting inside the exponent, however, so that the transition falls within the 0–1 range and we can use it with normalized values of x. We can also change where the threshold occurs by changing the constant we add to the exponent.

y = 1 / (1 + e^(−(10x − 5)))
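In code, this rescaled soft threshold might be sketched as follows (the parameter names are simply illustrative):

#include <cmath>   // std::exp

// Logistic "soft threshold" rescaled so the transition happens within the
// 0.0 to 1.0 input range. With steepness 10 and offset 5 the threshold sits
// at x = 0.5, matching the formula above; changing the offset slides it.
float LogisticUtility(float value, float steepness, float offset)
{
    return 1.0f / (1.0f + std::exp(-(steepness * value - offset)));
}

LogisticUtility(0.2f, 10.0f, 5.0f) is roughly 0.05, LogisticUtility(0.5f, 10.0f, 5.0f) is exactly 0.5, and LogisticUtility(0.8f, 10.0f, 5.0f) is roughly 0.95, giving the soft on/off behavior described above.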

Comparing and Contrasting

We can line up dozens — or even hundreds — of utilities for various feelings or potential actions.

The end result of all of this is that we can create very sophisticated response curves that translate our raw values into meaningful utility values. Also, because these end products are normalized, we can now easily compare and contrast them with other results. Going back to the examples I cited early on, we can decide how hungry we are in relation to other feelings such as tired (or too busy finishing a last-minute column for a magazine). In fact, we can line up dozens—or even hundreds—of utilities for various feelings or potential actions and select from among them using techniques as simple as “pick the highest” or as nuanced as weight-based random selection.

Compare this to what we would have to do were we not to use the normalized utility values. In our hungry/tired/busy example, we normally would have had to construct a multi-part condition to define each portion of our decision. For example:

if ( (Hungry > 5) && (Tired < 3) && (Busy < 7) )
{
    Eat();
}

if ( (Hungry < 4) && (Tired > 6) && (Busy < 3) )
{
    Sleep();
}

if (…

Ya know what? Never mind…

Even if the above values were normalized (i.e. between 0 and 1), the complexity explosion in simply comparing the different possible ranges and mapping them to the appropriate outcome would get out of hand quickly. And that’s just with 3 inputs and 3 outputs! By converting from value to utility, we massage what the data “means to us” inside each response curve. We now can feel comfortable that a direct comparison of the utilities will yield which item is truly the most important to us.

The system is extensible to grand lengths as well. If we want to include a new piece of information or a new action to take into account, we simply need to add it to the list. Because all the potential actions are scored and sorted by their relative benefit, we will automatically take newcomers in stride without much (if any) adjustment to the existing items.
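A rough sketch of what that selection step might look like once each potential action has been scored by its response curves (the structures and function names here are hypothetical, not a prescribed architecture):

#include <algorithm>
#include <random>
#include <string>
#include <vector>

struct ScoredAction
{
    std::string Name;
    float       Utility;   // normalized 0.0 to 1.0, produced by the response curves
};

// "Pick the highest": take the single best-scoring action.
const ScoredAction& PickHighest(const std::vector<ScoredAction>& actions)
{
    return *std::max_element(actions.begin(), actions.end(),
        [](const ScoredAction& a, const ScoredAction& b) { return a.Utility < b.Utility; });
}

// Weight-based random: pick with probability proportional to utility
// (assumes at least one action has a non-zero score).
const ScoredAction& PickWeightedRandom(const std::vector<ScoredAction>& actions, std::mt19937& rng)
{
    std::vector<float> weights;
    for (const ScoredAction& a : actions)
        weights.push_back(a.Utility);
    std::discrete_distribution<> pick(weights.begin(), weights.end());
    return actions[pick(rng)];
}

Adding a new action is just a matter of adding another scored entry to the list; nothing about the selection code itself changes.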

The Sims is an excellent example of how complex utility-based functions can be used.

If calculating and measuring all of these feelings and desires is starting to sound a lot like The Sims, it is not a coincidence. The Sims is an excellent (but not the only) example of how complex utility-based functions can be used to simulate fairly reasonable, context-dependent, decision processes in agents. Richard Evans has spoken numerous times at GDC on this very subject. I wholeheartedly recommend reading his papers and viewing his slides on the subject.

The uses of these methods aren’t limited to that genre, however. Strategy games, in particular, lend themselves to more nuanced calculation. Even in modern shooters and RPGs, agents are expected to make increasingly believable decisions in environments that contain significantly more information. Our AI no longer has the luxury of simply leaning on “if I see the player, shoot him!” as its sole guideline, and building static rulesets that address all the possible permutations of world state gets brittle at an exponential pace.

However, as I’ve illustrated (ever so briefly), the inclusion of some very simple techniques lets us step away from these complicated, often contrived, and sometimes even contradictory rulesets. It also allows us, as AI designers, to think in the familiar terms of “how much”—the same terms that we often use when we think of our own (human) states. The numbers we need are there already. The magic is in how we use them.

You can find all of the above and more in my book, Behavioral Mathematics for Game AI.

Damián Isla Interview on Blackboard Arch.

Thursday, February 11th, 2010
In preparation for the GDC AI Summit (and the inevitable stream of dinner conversations that will be associated with GDC), I have tried to catch up on playing some games and also on getting current with papers and interviews. On the latter point, Alex Champandard at AIGameDev keeps me hopping. It seems he is almost constantly interviewing industry people on the latest and greatest stuff that is happening in the game AI realm.
A few weeks back, he interviewed Damián Isla about blackboard architectures and knowledge representation. Seeing as I always learn something from Damián, I figured that interview needed to go to the top of the list. Here are some of my notes as I listen through the interview.
Make it Bigger
Damián’s first point to Alex was that a major benefit of a blackboard architecture was scalability. That is, putting together a rule-based system that performs a single decision is simple… the hard part is when you have 10,000 of those rules going on at the same time.
In a similar vein, he said it was about shared computation. Many of the decisions that are made draw on the same information. Blackboards, to Damián, are an architectural feature that can decouple information gathering and storage from the decisions that are made with that information. If one part of a decision needs to know about a computed value regarding a target, that information could potentially be used by another portion of the decision engine entirely… even for a competing action. By calculating the requisite information once and storing it, the decision algorithms themselves can simply look up what they need.
This is similar to the approach that I espouse in my book. I don’t directly say it, but in a way it is implied. With my approach of compartmentalizing tiers of calculations, many of the individual layers of information that need to be processed are handled independently of the final decision. The information need only be collected and tabulated one time, however. The various decision models can simply look up what they need. In a multi-agent system, different agents can use much of the same information as well. While distance to a target may be personal, the threat level of a target (based on weaponry, health, cover, etc.) may be the same for all the enemies. That threat level can be calculated once and saved for everyone to use.
Back to Damián…
He mentioned that at the Media Lab at MIT they used blackboards for many things like logging, testing, and debugging. That is something I haven’t necessarily thought of. They also had observer systems running over a network. They shared parts of the blackboard info so that the fully running games were doing everything but thinking.
Alex reiterated that a blackboard is more of an architectural feature rather than a decision process. Damián confirmed that the history of blackboards involved planners but that we are now using them inside reactive systems as well.
Blackboards vs. Lots of Static Variables
At Alex’s prompting, Damián suggested that the blackboard is far more dynamic than having just many billions of values specified in a character.h file. In fact, he likened it much more to having one unified interface to all of your game data beyond just that of the character in question.
Do all agents need their own unique blackboard?
I like the fact that Damián’s initial answer was a refrain that I repeated throughout my book… “it totally depends on the game you’re making.” Unfortunately, that is a major stumbling block to answering any architectural or procedural question.
He went on to say something similar to what I mention above… that you have to mix them. There are some pieces of information that individuals track and others that are available to the group as a whole. Visibility of targets, for example, is individual. Goals for a squad, on the other hand, are something that can be shared and referenced.
Direct Query vs. Blackboard?
The most important point Damián made here had to do with “redundancy”. If you have a situation that you can guarantee you only need something once, then accessing it directly from a behavior is fine. If multiple behaviors might use the same data, access it once and store it on the blackboard.
The answer to avoiding the redundancy issue was “abstraction”. That’s what a blackboard represents. It gives that intermediate layer of aggregation and storage. He actually referred to it as “sub-contracting” the information gathering out to the blackboard system. The difference is that the blackboard isn’t simply passing on the request for information, it is actually storing the information as a data cache so that it doesn’t need to be re-accessed.
One very important point that he made was that there was some intelligence to the blackboard in deciding how often to ask for updates from the world. This is a huge advantage in that the process of information gathering for decisions can be one of the major bottlenecks in the AI process. LOS checks, for example, are horribly time-consuming. If your system must ask for all the LOS checks and other information every time a BT is run (or multiple times in the same BT), there can be a significant drain on the system. However, if you are able to time-slice the information gathering in the background, the only thing the BT needs to do is access what is on the blackboard at that moment.
Incidentally, this is a wonderful way of implementing threading in your games. If your information gathering can continue on its own timeline and the behaviors need only grab the information that is current on the blackboard, those two processes can be on separate threads with only the occasional lock as the blackboard is being updated with new info.
This goes back to my interview with Alex about a year ago where he asked about the scalability of the techniques I write about in my book. My point was that you don’t have to access all the information every time. By separating the two processes out with this abstraction layer as the hand-off point, it keeps the actual decision system from getting bogged down.
In order to also help facilitate this, Damián spoke of the way that the information gathering can be prioritized. Using LOS checks as his example, he talked about how the Halo engine would update LOS checks on active, armed, enemies every 2 or 3 frames, but update LOS checks on “interesting but not urgent” things like dead bodies every 50 frames. Sure, it is nice for the AI to react to coming around a corner and seeing the body, but we don’t need to constantly check for it.
Compare this to a BT where a node “react to dead body” would be checked along with everything else or (with a different design) only after all combat has ceased and the BT falls through the combat nodes to the “react…” one. At that point, the BT is deciding how often to check simply by its design. In the blackboard architecture, the blackboard handles the updates on what the agent knows and the BT handles if and how it reacts to the information.
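As a very small, hypothetical sketch of that division of labor (my own illustration, not Halo’s or anyone else’s actual implementation), a blackboard entry might carry its own refresh interval so that expensive queries like LOS checks are re-run on their own schedule while behaviors just read the cached result:

#include <functional>
#include <string>
#include <unordered_map>

class Blackboard
{
public:
    // Each entry owns the function that recomputes it and how often (in frames)
    // it should be refreshed. All names here are illustrative.
    void Register(const std::string& key, std::function<float()> compute, int refreshInterval)
    {
        mEntries[key] = Entry{ std::move(compute), refreshInterval, -refreshInterval, 0.0f };
    }

    // Called once per frame, on its own timeline, independent of any BT evaluation.
    void Update(int frame)
    {
        for (auto& kv : mEntries)
        {
            Entry& e = kv.second;
            if (frame - e.LastUpdateFrame >= e.RefreshInterval)
            {
                e.Value = e.Compute();
                e.LastUpdateFrame = frame;
            }
        }
    }

    // Behaviors only read the cached value; they never trigger the query themselves.
    float Get(const std::string& key) const { return mEntries.at(key).Value; }

private:
    struct Entry
    {
        std::function<float()> Compute;
        int   RefreshInterval = 1;
        int   LastUpdateFrame = 0;
        float Value           = 0.0f;
    };
    std::unordered_map<std::string, Entry> mEntries;
};

With something like this, an armed, active enemy’s visibility could be registered with an interval of 2 or 3 frames while a check for nearby dead bodies uses 50, mirroring the prioritization Damián described.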
Chicken or Egg?
Damián talked about how the KR module and the decision module do, indeed, need to be built in concert since information and decisions are mutually dependent. However, he talked about how the iterative process is inherently a “needs-based” design. That is, he would only write the KR modules necessary to feed the decisions that you are going to be using the information for. This is, of course, iterative design at its very core (and how I have always preferred to work anyway). While you might first identify a decision that needs to be coded, you need to then put much of that on hold until you have put together all of the KR implementation that will feed the decision. If you then add future decision processes that use that same blackboard, more power to you. (None of this trumps the idea that you need to plan ahead so that you don’t end up with a mish-mash of stuff.)
As mentioned before, what you put into the KR blackboard is very dependent on the game design. It goes beyond just what knowledge you are storing, however. Damián specifically mentioned that he tries to put as much “smarts” into the KR level as possible. This has the effect of lifting that burden from the decision process, of course.
Are There Exceptions?
Alex asked the question if there would be cases that a behavior engine (such as the BT) would directly access something in the game data rather than looking it up in the KR/blackboard level. Damián cautioned that while you could make the case for doing that occasionally, you would really have to have a good reason to do so. Alex’s example was LOS checks which, unfortunately, is also the wrong time to step outside of the blackboard since LOS checks are such a bottleneck. Damián’s emphasis was that these sorts of exceptions step outside the “smarts” of the KR system… in this case how the KR was spreading out the LOS checks to avoid spikes.
Another example was pathfinding. He said a developer might be tempted to write a behavior that kicks off its own pathfind routine. That’s generally a bad idea for the same bottleneck reasons.
More than Information
I really liked Damián’s exposition on one example of how Halo used more than simple LOS checks. He explained the concept of “visibility” as defined in Halo, where the algorithm that fed the blackboard took into account ideas such as distance, the agent’s perceptual abilities, and the amount of fog in the space at any given time. This was so much more than a LOS check. All the behaviors in the BT could then use “visibility” as a decision-making criterion. I haven’t seen a copy of the Halo 3 BT, but I can imagine that there were many different nodes that used visibility as an input. It sure is nice to do all of this (including the expensive LOS checks) once per n frames and simply store it for later use as needed. Again, this is very similar to what I espouse in my book and in utility-based reasoners in general.
Dividing up the KR
He described something interesting about how you could have a manager that assigns how many LOS checks each AI gets and then, once the AI knows how many it will get, the AI then prioritizes its potential uses and divvies them up according to its own needs. Rather than having one manager get requests of priorities from all the AIs at once, the first cut would be to simply give each of them a few (which could also involve some interesting prioritization) and then let them decide what to do with the ones they get. I thought that was a very novel way of doing things.
What Does it Look Like?
In response to a question about what the blackboard data structure looks like, Damián acknowledged that people think about blackboards in two different ways. One is just based on a place to scribble shared data of some sort. The other, more formal notion is based on the idea of key/value pairs. He prefers this method because you can do easy logging, etc. For more high-performance stuff (e.g. an FPS) there really isn’t much of a need for key/value pairs, so there may be more efficient methods such as looking up the information in a struct.
He went on to point out that the size and speed trade-off is likely one of the more important considerations. If an agent at any one time may only care about 5-10 pieces of data, why set aside a whole 500-item struct in memory? Also, key/value pairs and hash tables aren’t necessarily more expressive than a hard-coded struct. I would tend to agree with this. So much of what the data says is in what it is associated with (i.e. the other elements of the struct) and the code around it.
In Halo, they were on the hard-coded side of things because there wasn’t too much data that they needed to store and access. In general, the KR of what you need to access will tend to stabilize long before the behavior.
He also explained the typical genesis of a new KR routine. Often, it happens through refactoring after you find yourself using a particular algorithm in many locations. If this happens, it can often be abstracted into the KR layer. This is the same thing I have found in my own work.
One caveat he added was extending key/value pairs with a confidence rating in case you wanted to do more probabilistic computations. You could iterate over the information, for example, and apply decay rates. Of course, you could also do that in a hard-coded struct. I was thinking this before he said it. To me, adding the manager to deal with the semantics of key/value/confidence sets might introduce more trouble than it is worth. Why not put together a vector of structs that process the same information? To me, this goes to a point of how you can divide your KR into smaller, specifically-functional chunks.
Intelligent Blackboards
An interesting question led to something that I feel more at home with. Someone asked about blackboards collecting the info, processing some of it, and writing that info back to the blackboard to be read by the agent and/or other blackboard processors. Damián agreed that in a modular system, multiple small reasoners could certainly be touching the same data store… not only from a read standpoint, but a write one as well. This is very intuitive to me and, in a way, is some of what I am doing in Airline Traffic Manager (sorry, no details at this time).
Damián confessed that his search demo from the 2009 AI Summit did exactly this. The process that updated the occupancy map was a module hanging off the blackboard. The blackboard itself was solely concerned with grabbing the data of what was seen and unseen. The reasoner processed this data and wrote it back to the blackboard in the form of the probabilistic mapping on those areas. The agent, of course, looked at that mapping and selected its next search location accordingly. (BTW, influence mapping of all kinds is a great use for this method of updating information.)
Meta-Representation
Damián summed up that the overall goal of the blackboard (and, in my opinion KR in general) is that of “meta-representation”. Not that data exists but what that data really means. What it means is entirely dependent on context. The blackboard simply stores these representations in a contextually significant way so that they can be accessed by agents and tools that need to use and respond to that information.
What This Means to Me
I really find this approach important – and am startled that I started using many of these concepts on my own without knowing what they were. One of the reasons that I very much support this work, however, is because it is key to something that I have unwittingly become a part of.
In his article entitled Predictions in Retrospect, Trends, Key Moments and Controversies of 2009! Alex said the following:
Utility-based architectures describe a whole decision making system that chooses actions based on individual floating-point numbers that indicate value. (At least that’s as close to an official definition as I can come up with.) Utility in itself isn’t new, and you’ll no doubt remember using it as a voting or scoring system for specific problems like threat selection, target selection, etc. What’s new for 2009 is:

1. There’s now an agreed-upon name for this architecture: utility-based, which is much more reflective of how it works. Previous names, such as “Goal-Based Architectures” that Kevin Dill used were particularly overloaded already.

2. A group of developers advocate building entire architectures around utility, and not only sprinkling these old-school scoring-systems around your AI as you need them.

The second point is probably the most controversial. That said, there are entire middleware engines, such as motivational graphs which have been effective in military training programs, and Spir.Ops’ drive-based engine applied in other virtual simulations. The discussion about applicability to games is worth another article in itself, and the debate will certainly continue into 2010!

It’s obvious to many people that I am one of the members of that “group of developers” who “advocate building entire architectures around utility”. After all, my book dealt significantly with utility modeling. I am also playing the role of the “utility zealot” in a panel at the 2010 GDC AI Summit specifically geared toward helping people decide what architecture is the best for any given job.

While utility modeling has often been used as a way of helping to sculpt and prioritize decisions in other architectures such as FSMs, BTs, or the edge weights in planners, many people (like Alex) are skeptical of building entire reasoner-based architectures out of it. What Damián explained in this interview is a major part of this solution. Much of the utility-based processing can be done in the KR/blackboard level — even through multi-threading — to lighten the load on the decision structure. As more of the reasoning gets moved into the KR by simply prepping the numbers (such as how “visibility” represented a combination of factors), less has to happen as part of the decision structure itself.
Kevin Dill and I will be speaking more about this in our lecture at the AI Summit, Improving AI Decision Modeling Through Utility Theory. I hope to be writing more about it in the future as well. After all, according to Alex, it is one of the emerging trends of 2010!
