IA on AI


Posts Tagged ‘Behavioral Mathematics for Game AI’

Art vs. AI: Why Artists Have It Easier

Friday, August 24th, 2012

The topic of rendering vs. AI came up in a discussion in the AI Game Programmers Guild. A point made in that discussion was that rendering programmers have it easier since they know what it is they are trying to accomplish and have a pretty good handle on the inner workings of what is going on inside the computer to produce that. We, as AI programmers, do not have that luxury since we don’t really have a guide in that respect.

This reminded me of how I had addressed it in the opening chapter of my book. Rather than bludgeon the AIGPG list with this, I figured I would paste it here in the open for everyone to see. What follows is an excerpt from Chapter 1 of Behavioral Mathematics for Game AI.

-----
Going Beyond Looks

For purposes of full disclosure, I have to admit that I have very little talent when it comes to art. I understand some of the concepts – I can draw perspective of blocky objects using a vanishing point, for example. I can even copy an existing drawing to some extent – to this day, I have a fantastic painting of a pig that I made in 7th grade that was copied from a picture in a magazine. However, that about exhausts my talent for the medium.

Looking like a Pig

Despite my dearth of ability in that particular discipline, I would still feel secure in making the claim that artists in the game industry have it a bit easier than AI programmers do. After all, they can see what it is that they are supposed to be accomplishing. Before they begin to draw a pig, they can look at a pig first. They can make changes to parts of their pig that are less than accurate – in effect, fine-tuning their pig-replication abilities. They can show their picture of a pig to anyone else who wanders by and ask “What does this look like?” Unless the artist subscribes to a more Picasso-esque approach, the reaction should be “Hey! It’s a pig!” (Unfortunately, my art teacher didn’t buy my claim of being a disciple of Picasso.) People know what a pig looks like. It can be rigidly defined as a collection of expected parts. For example, everyone knows that a pig has four feet. If your pig has five feet, one of which is located in front of an inexplicably present dorsal fin, viewers will be mildly disturbed.

Artists are also comfortable with recreating the environment. A light on one side of an object makes a corresponding shadow on the other. We’ve all seen it; we all know that it would be incorrect otherwise. The ability to observe and criticize doesn’t simply lead to the realization that an error needs to be addressed – it often leads to the solution itself, e.g. “Make the floor darker on the side of the object opposite the light.” Though I may lack the ability to fix it properly myself, even I, as a non-artist, can often suggest the solution.

From a technical standpoint, the solutions are often fairly intuitive as well. For example, to make the color of the blue floor darker in the shadows, use a darker color of blue. To make the buildings in the distance look smaller, draw them smaller. Truly, the models of how to accomplish many of the core features of art have been somewhat solved for hundreds of years.

Acting like a Pig

In contrast, game AI provides some challenges in a number of respects. First we can’t just show our AI-enabled pig to someone and say “does this act like a pig?” The answer can’t be rattled off as easily as one regarding the look of a pig. Certainly, there are some obvious failure scenarios such as said pig flying about the room in proverbially unlikely fashion. That should tip some of us off that something is amiss. However, it is far more difficult to state for certain while watching Mr. Swine’s behavior unfold on-screen that it is, indeed, acting the way a proper pig should. There is a layer of abstraction involved that is not easy to translate through.

With the artwork, we see real life and then we see the representation of it. There is an implicit equality there. If what we see in real life doesn’t match what we see in the representation, we can determine that it is wrong. Even if equality is not reached, we are cognizant of a point of diminishing returns. We are accepting of a representation that is “pretty darn close” to what we see in reality.

When watching behavior, however, we have to pass our understanding of that behavior through a filter of judgment. “Is that the correct thing for the pig to do?” In order to answer this question of equality, we would have to have an established belief of what the real-life pig would have been doing in the first place. While we can give generalized answers to that question, none of us can state for certain that every pig will act in a certain way every time that a situation occurs.

Moving beyond pigs to behaviorally more complicated life forms (such as humans – although there may be some exceptions), the solution set gets significantly larger. As that happens, our confidence in what we believe the entity in question “should be” doing slips dramatically. While we may be more comfortable in thinking of possibilities of human behavior than those of a pig (seeing that we are, indeed, human), the very fact that there are so many subtle shades of those behaviors makes it statistically less likely that any one of them is the “right thing” to be doing.

Just as our ability to define what it is they should be doing wanes, we are ever more hard-pressed to judge an artificial representation of an entity’s behavior. In Tic-Tac-Toe, it was obvious when the opponent was playing right or wrong – the ramifications were immediately apparent. In Poker, even looking over a player’s shoulder at his cards, it is often difficult to judge what his behavior “should be”. The combination of the possibility space of the game with the range of thought processes of different players makes for a staggering array of choices. The best we can come up with is “that may be a decent choice, but this is what I would do if I were him.” And that statement itself needs to be taken with a grain of salt, since we may not be taking the correct – or more correct – approach ourselves.

Making Pigs Act like Pigs

What this means is that AI programmers have it tough. Unlike the artist, who can see his subject and gauge the relative accuracy of his work against it, AI programmers don’t necessarily know where they are headed. Certainly, we can have ideas and wishes and goals – especially in the short run. (“I want my pig to eat at this trough.”) We are also well aware that those can tend to backfire on us. (“Why is my stupid pig eating at that trough when it is on fire?”) However, as the complexity of our world grows, we have to realize that there may not be a goal of perfection such as the goal of photo-realism in art. Behavior is too vague and ephemeral to explain… therefore making it impossible to accurately mimic. Often the best we can do is to embrace methods that give us a good shot at coming close to something that looks reasonable.

But how do we do that without going the route of the complete randomness of the Rock-Paper-Scissors player, the monotonous predictability of the Tic-Tac-Toe opponent, or the rigid mindlessness of the rule-bound Blackjack dealer? Somehow we have to be able to create the mind of the Poker player. We have to approach the game from inside that Poker player’s psyche.

We have to embody that soul with the ability to perceive the world in terms of relevant, not relevant, interesting, dangerous. We have to give him a way to conceptualize more than just “right or wrong” – but rather shades of differentiation. Better. Worse. Not quite as bad as. We have to create for him a translation mechanism to our own thought processes. And it is this last part that is the most difficult… in order to do that, we have to do so in a language that both of us can understand – yet one that is robust enough to convey all that we perceive and ponder. And that language is thankfully one that computers know best – in fact, the only one they know – that of mathematics.

The trick is, how do we convert behavior into mathematics? It isn’t quite as simple as the WYSIWYG model of the artists. (“Make the blue floor in the shadow darker by making the blue darker.”) There is no simple translation from behaviors into numbers and formulas. (For what it’s worth, I already checked Altavista’s™ translation tool, Babel Fish. No help there!) Certainly researchers have been taking notes, conducting surveys and accumulating data for ages. That doesn’t help us to model some of the behaviors that, as game developers, we find necessary but that are so second-nature to us that no researcher has bothered to measure them.

So we are on our own… left to our own devices, as it were. The best we can do is to observe the world around us, make our own notes and conduct our own surveys. Then, using tools to make expression, calculation and selection simpler, we can attempt to create our own interface into our hat-wearing Poker player so that he can proceed to make “interesting choices” as our proxy.

It is my hope that, through this book, I will be able to suggest to you how to observe the world and construct those mathematical models and interfaces for decision making. Will your AI pigs become model swine in all areas of their lives? I don’t know. That’s not entirely up to me. However, that result is hopefully going to be far better than if I were to attempt to paint one for you.

What Real Wolves Can Teach Us about Our AI

Thursday, October 27th, 2011

An article was recently brought to my attention. The version I saw first was Wolves May Not Need to be Smart to Hunt in Packs from Discover Magazine, although it seems to have originated at New Scientist. Both of them cite a couple of other papers via links in their respective articles, but you can get the gist of it from the text of the article itself.

The point is, they have discovered that the complex(-looking) pack hunting behaviors of wolves are actually not as complex and coordinated as we thought. With just a few very simple autonomous rules, they have duplicated this style of attack behavior in simulations. Specifically,

Using a computer model, researchers had each virtual “wolf” follow two rules: (1) move towards the prey until a certain distance is reached, and (2) when other wolves are close to the prey, move away from them. These rules cause the pack members to behave in a way that resembles real wolves, circling up around the animal, and when the prey tries to make a break for it, one wolf sometimes circles around and sets up an ambush, no communication required.
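Those two rules are simple enough to sketch in a few lines of code. The following is my own minimal illustration of them, not the researchers’ actual model; the distance thresholds and speed are made-up values:

```python
import math

def step_wolf(wolf, prey, pack, approach_dist=2.0, near_prey=3.0, speed=0.5):
    """Apply the two rules to one wolf (a mutable [x, y] position):
    (1) move toward the prey until approach_dist is reached;
    (2) move away from packmates that are themselves near the prey."""
    dx, dy = prey[0] - wolf[0], prey[1] - wolf[1]
    dist = math.hypot(dx, dy)
    mx = my = 0.0
    if dist > approach_dist:                       # rule 1: approach the prey
        mx, my = dx / dist, dy / dist
    for other in pack:                             # rule 2: repel from others
        if other is wolf:
            continue
        if math.hypot(prey[0] - other[0], prey[1] - other[1]) < near_prey:
            ox, oy = wolf[0] - other[0], wolf[1] - other[1]
            d = math.hypot(ox, oy)
            if d > 0:
                mx, my = mx + ox / d, my + oy / d
    mag = math.hypot(mx, my)
    if mag > 0:                                    # take a fixed-size step
        wolf[0] += speed * mx / mag
        wolf[1] += speed * my / mag
```

Iterating this over a pack produces the closing-in and spacing-out behavior the article describes, with no wolf ever “knowing” anything about a pack-level plan.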

The comment that brought it to my attention was that biologists had “discovered” something that AI programmers have known for decades — the idea of flocking. Going back to Craig Reynolds’ seminal Boids research from the 1980s, we as AI programmers have known not only that simple rules can generate the look of complex behavior, but that much of the complex behavior that exists in the world is actually the result of the same “simple rule” model. Even down to the cellular level in the human body — namely the human immune system — autonomous cellular behavior is driven by this mentality.

The key takeaway from this “revelation” about the wolves is not so much that wolves are not as clever as we thought, but rather that there is now legitimacy to using simpler AI techniques to generate emergent behavior. We aren’t “cheating” or cutting corners by using a simple rule-based flocking-like system to do our animal AI… we are, indeed, actually replicating what those animals are doing in the first place.

We could likely get far more mileage out of these techniques in the game space were it not for one major block — the trepidation that many developers feel about emergent behavior. For designers in particular, emergent behavior stemming from autonomous agents means giving up a level of authorial control. While authorial control is necessary and desired in some aspects of game design, there are plenty of places where it is not. By swearing off emergent AI techniques, we may be unnecessarily limiting ourselves and preventing a level of organic depth to our characters and, indeed, our game world.

Incidentally, emergent AI is not simply limited to the simple flocking-style rule-based systems that we are familiar with and that are discussed with regards to the wolves. Full-blown utility-based systems such as those that I talk about in my book are just an extension of this. The point being, we aren’t specifically scripting the behavior but rather defining meanings and relationships. The behavior naturally falls out of those rules. The Sims franchise is known for this. As a result, many people are simply fascinated to sit back and watch things unfold without intervention. The characters not only look like they are “doing their own thing” but also look like they are operating together in a community… just like the wolves are acting on their own and working as part of a team.

So take heart, my AI programmer friends and colleagues. Academic biologists may only now be getting the idea — but we’ve been heading down the right track for quite some time now. We just need to feel better about doing it!

AIIDE 2009 – AI Challenges in Sims 3 – Richard Evans

Thursday, October 29th, 2009

This is the rough dump of my notes from Richard Evans’ AIIDE 2009 invited talk on the AI challenges they faced in developing The Sims 3. Some of it was familiar to me as being exactly what he presented as part of our joint lecture at the GDC AI Summit in 2009. Other portions of it were new.

Specifically, I enjoyed seeing more about how they handled some of the LOD options. For example, rather than parsing all the available actions, a sim would decide what lot to go to, then what sim to interact with, and then how to interact. Therefore, the branching factor was significantly more manageable.

Another way they dealt with LOD was in the non-played Sims. Rather than modeling exactly what they were doing when (while off-screen), they made some general assumptions about their need for food, rest, taking a leak, etc. These were modeled as “auto-satisfy” functions. For example, if you met a sim close to dinner time, he would likely be hungry. If you met him a little later, he would present as being full.
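An auto-satisfy function like that can be as simple as a curve over the time of day. This sketch is my own illustration of the idea; the meal times and six-hour decay rate are invented, not values from the game:

```python
def auto_satisfied_fullness(hour):
    """How full an off-screen sim 'presents' as, from 0.0 (starving) to
    1.0 (just ate), assuming meals at 7:00, 12:00, and 18:00 and fullness
    draining linearly over the six hours after each meal."""
    meal_times = (7.0, 12.0, 18.0)
    # hours elapsed since the most recent assumed meal (wraps at midnight)
    since_last_meal = min((hour - m) % 24.0 for m in meal_times)
    return max(0.0, 1.0 - since_last_meal / 6.0)
```

Meet a sim at 17:30 and he reads as hungry; meet him at 18:30 and he reads as full, without the game ever having simulated the meal.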
Additionally, as you will see below, the entire town had underlying simulation mechanics that balanced how many people were dying and being born, what gender they were (on average), and even where they were moving to and from. They modeled much of this with a very simple geometric interface early on so that they could test their mathematical models. Same with the simple behaviors. He showed video demos of these models in action. This also allowed them to speed up time to ridiculous levels and let the sim run overnight to test for situations that would tip the sim out of balance. Lots of fun!
He also mentioned how the behavior selection was done. This was important to me in that he showed how they used some of the same techniques that I talk about in my book. Specifically, he uses a utility-based method and selects from the behaviors using weighted randoms of the top n selections. Excellent work, sir!
The following are my raw notes.
AI Challenges in Sims 3
Richard Evans
He mentioned the website dedicated to Alice and Kev. The author simply sat back and watched the Sims do their autonomous behavior and wrote about it.
1. Hierarchical Planning
2. Commodity-Interaction Maps
3. Auto-satisfy curves
4. Story progression
Instead of nesting decisions about which act to perform on which person in which lot, you choose a lot first, then choose a person, then choose an action.
O(P + Q + N) instead of O(P * Q * N)
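That branching-factor saving can be sketched as three small argmaxes instead of one huge one. The data shapes and scoring functions below are my invention for illustration, not The Sims 3’s actual structures:

```python
def choose_hierarchically(lots, score_lot, score_person, score_action):
    """Pick a lot, then a person at that lot, then an action on that
    person: O(P + Q + N) score evaluations instead of scoring every
    (lot, person, action) triple, which would be O(P * Q * N)."""
    lot = max(lots, key=score_lot)
    person = max(lot["people"], key=score_person)
    action = max(person["actions"], key=score_action)
    return lot["name"], person["name"], action
```

With, say, 30 lots, 10 sims per lot, and 100 interactions each, that is on the order of 140 evaluations instead of 30,000.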
Data-driven approach so that the venues populate appropriately (e.g. restaurants)
Optimization
If you are full, don’t even consider eating as a possible selection of what to do.
Auto-satisfy curves for LOD. That way you don’t have to simulate the off-screen Sims. Assume that they have eaten at the right times, etc.
Other Sims need to progress through life the same way that your Sim does. Age, marriage, children, career, move, etc. Long-term life-actions are simulated at LOD.

The town has various meta-level desires: gender balance (so that we don’t end up with all males or all females) and an overall employment rate (some people will be unemployed; it peaks at ~80-90% employed).
High-level prototype showing the major life actions (not smaller actions). Simulating the town without simulating the people.
Making Sims look after themselves

Utility modeling
Highest-scoring
Randomly from the n highest-scoring actions
Randomly using the score distribution as the probability distribution (weighted randoms)
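The weighted-random-from-top-n variant might look something like this. This is my own sketch of the general technique, not code from the talk; the action names and scores are placeholders:

```python
import random

def weighted_random_top_n(scored_actions, n=3, rng=random):
    """Keep the n highest-scoring (action, score) pairs and pick among
    them with probability proportional to score: good actions dominate,
    but the agent doesn't robotically repeat the single best one."""
    top = sorted(scored_actions, key=lambda sa: sa[1], reverse=True)[:n]
    r = rng.uniform(0.0, sum(score for _, score in top))
    for action, score in top:
        r -= score
        if r <= 0.0:
            return action
    return top[-1][0]   # guard against floating-point leftovers
```

Setting n=1 degenerates to highest-scoring; setting n to the full list gives the pure score-distribution selection.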
Personality and Traits and Motives
Same that he talked about at AI Summit
Traits -> Actions = massively data-driven system to minimize hard-coded systems.
Kant’s categorical imperatives?!?
Emily Short: “The conversation is an end in itself.”
Take Home Actionable items
Data-drive everything!
Take the time to make good in-game visualization tools.
Prove out all simulation ideas using prototypes as soon as possible.
Richard shows excellent 2-D prototype that runs sims without the realistic world!

Silicon Prairie News Interviews Dave Mark

Tuesday, August 4th, 2009

About a month ago, I was invited to sit down with Adam Templeton of Silicon Prairie News. He wanted to talk to me about Intrinsic Algorithm, my book, the state of game AI, and – more relevant to the purpose of their site – the Omaha Game Developers Association. We had a great chat for about an hour in the library on the campus of the University of Nebraska at Omaha. What resulted was a 20-minute video interview.

I was very pleased to have spoken to SPN about not only my work, but about the possibility of a professional game development community here in Nebraska.
My thanks to Adam for an excellent interview and a pleasant afternoon.

Interview on AIGameDev

Wednesday, April 15th, 2009

Back on April 5th, Alex Champandard of AIGameDev interviewed me for about 90 minutes for the Members portion of his site. Our topic was how to use behavioral mathematics (such as I cover in my book) to improve the bots in Left 4 Dead. We cover a lot of interesting information in the interview. Some of the examples refer to things I covered in my Post-Play’em columns on the AI in the game.

He has it posted in audio and video formats (although with me rocking back and forth in my office chair, I look like I’m autistic!). I seriously advise that you check it out. (You will need to have access to the members area to view it.) If you are already a member of AIGameDev, you can find the interview here:

Gamasutra Article on Intelligent Mistakes

Wednesday, March 18th, 2009

There’s a nice, if incomplete, article on Gamasutra today by Neversoft’s Mick West titled Intelligent Mistakes: How to Incorporate Stupidity Into Your AI Code. It’s not a new subject, certainly, but what caught my eye was the fact that he used the game of Poker as one of his examples.

In the first chapter of my book, “Behavioral Mathematics for Game AI,” I actually use Poker as a sort of “jumping off point” for a discussion on why decision-making in AI is an important part of the game design. I compare it to games such as Tic-Tac-Toe where the only decision is “if you want to not lose, move here,” Rock-Paper-Scissors where (for most people) the decision is essentially random, and Blackjack where your opponent (the dealer) has specific rules that he has to play by (i.e. hit below 17, stand on 17 or above).

Poker, on the other hand (<- that's a joke), is interesting because your intelligent decisions have to incorporate the simple fact that the other player(s) is making intelligent decisions as well. It's not enough to look at your own hand and your own odds and make a decision from that. You have to take into account what the other player is doing and attempt to ferret out what he might be thinking. Conversely, your opponent must have the same perceptions of you and what it is you might be thinking and, therefore, holding in your hand.

Thankfully, there is no “perfect solution” that a Poker player can follow due to the imperfect information. However, in other games, there are “best options” that can be selected. If our agents always select the best options, not only do we run the risk of making them too difficult (“Boom! Head shot!”) but they also all tend to look exactly the same. After all, put 100 people in the exact same complex situation and there will be plenty of different reactions. In fact, very few of them will choose what might be “the best” option. (I cover this extensively in Chapter 6 of my book, “Rational vs. Irrational Behavior.”)

Our computerized AI algorithms, however, excel greatly at determining “the best” something, whether it be an angle for a snooker shot (the author’s other example), a head shot, or the “shortest path” as determined by A*. After all, that is A*’s selling point… “it is guaranteed to return the shortest path if one exists.” Congratulations… you’re inhumanly perfect. Therefore, as the author points out in his article, generating intelligent-looking mistakes is a necessary challenge. Thankfully, in later chapters of “Behavioral Mathematics…” I propose a number of solutions to this problem that can be easily implemented.
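One simple way to get such intelligent-looking mistakes (my own illustration here, not a method taken from West’s article) is to perturb the evaluation before taking the argmax, so the agent still prefers good options but no longer finds the very best one with inhuman reliability:

```python
import random

def pick_with_error(options, score, skill=1.0, rng=random):
    """Choose the apparently best option after adding Gaussian noise to
    each score. skill=1.0 gives perfect play; lower skill widens the
    noise and produces plausible, near-best mistakes."""
    sigma = 1.0 - skill
    return max(options, key=lambda o: score(o) + rng.gauss(0.0, sigma))
```

Because the noise is added to the scores rather than the choice itself, the mistakes stay proportionate: the agent blunders into second-best far more often than into worst.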

Anyway, I find it an interesting quandary that we have to approach behavior from both sides. That is, how do we make our AI more intelligent and, at the same time, how do we make our AI less accurate? Kind of an odd position to be in, isn’t it?

Rebalancing Scrabble (and other games)

Tuesday, March 17th, 2009

It’s sometimes interesting the places you can find nuggets that can be applied to game development. The Wall Street Journal has a regular column called “The Numbers Guy”. I have often been interested or amused at what appears there. After all, having just finished a book entitled, “Behavioral Mathematics for Game AI”, I am obviously sort of a “numbers guy” myself.

Anyway, today’s column was titled Scrabble and Other Games — on Boards, Fields, Courts and Ice — Have Overvalued Points; Vermont Avenue Is a Steal. For the most part, the author is talking about how, with the addition of more words containing z, x, and q to the Scrabble Dictionary, it has thrown the carefully crafted point balance out of whack. Anyone who has played Scrabble has noticed that there are more of the common letters (e, a…) and only one each of the rare letters. Additionally, the point values for the rare letters are significantly higher, to reward you for the difficulty of finding words that use them. However, there are now more words using those ostensibly rare letters than there were originally… making it easier to play those letters… thereby making it easier to seriously cash in on those high-value tiles. The contention by some players is that it has thrown the balance of the game off.

Or has it?

There are others that have pointed out that anyone can use those new words. Therefore, the balance isn’t off at all. My follow-up comment would be that the scale of the scoring is different. When I played fairly high-level Scrabble for a while a number of years ago, a good round of 1-on-1 would be up in the 400s. We were using those words such as xi and suq and qat. After all, they are in the Scrabble Dictionary so why not? Now, if we had not been using those words and simply been using… well… simple words. Netting those 400+ points would have been marginally more difficult. The rounds may have been in the 350+ range instead. For both players.

However, this is when we view games as an aggregate. In a single game, reaching into the bag and pulling out the only letter x or the only letter z is, for all intents and purposes, a random event. If you pull out one of those singletons, by definition your opponent cannot. Therefore, the simple (random) act of pulling out a high-value letter gives you a significant advantage.

And that’s the rub… novice players who don’t know those letters look at the z and q and x as a handicap. (“What am I going to do with this damn thing?!?”) Experienced people who fall asleep with the Scrabble Dictionary look at those letters as a fortuitous occurrence… and possibly a clinching one as well. The response is going to be more along the lines of “Ha! I got it and he didn’t!”

So… while one could make the claim that it is only the scale of the scores that has increased, the actual result is that it has put more weight on the random factor of what is otherwise not supposed to be a random-centric game. And that I do have a problem with.

Football and Basketball

In other parts of the article, he mentions how other scores and probabilities have changed. Football field goals. Basketball 3-point shots, etc. Looking at the latter, if you typically hit 50% of your 2-point shots, then all you need to do is hit a third or better on 3-point shots to justify always going for them. After all, making 1 in 3 3-pointers yields the same points as making 1 in 2 2-point shots. His comment was that, in college basketball especially, where the 3-point range was shorter, many players could do that.
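The break-even arithmetic is worth writing out:

```python
def points_per_attempt(make_pct, point_value):
    """Expected points from a single shot attempt."""
    return make_pct * point_value

# A 50% two-point shooter nets 1.0 points per attempt; a three-point
# shooter passes that the moment he makes better than 1 in 3.
```

So a 40% three-point shooter (1.2 points per attempt) is, in raw expected value, strictly better off shooting threes than a 50% two-point shooter.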

This is an interesting item to note as a game developer. If you were designing a game and deciding how far back to place the 3-point arc, you would specifically want to look at those very statistics. That is, at what distance does the probability of success merit increasing the payoff? If it is too close, most people will shoot from beyond it and it will take away the inside game. If it is too far away, no one will bother and will, instead, try to get as close as possible. It’s a delicate balance.

Never Mind Boardwalk…

Another quick mention in the article is Monopoly. This one amused me. Back in about 1993, I whipped up a monster spreadsheet that determined that the best bet for buying and building was the light blue properties… the 2nd set on the 1st leg. This article confirmed that some dude did the same math and determined the same thing. Big deal. The difference was, I based mine on the probabilities of landing on various squares, various monopolies, etc., the cost to buy, the cost to build, and the income that you would receive. Apparently this other guy used a Monte Carlo method (32 billion random rolls) to come up with his answer. Wimp. Anyone can run a Monte Carlo simulation and get the right answer. However, you learn a lot more about the “why” of the situation by looking at the relationships of the various components. This is a form of multi-attribute utility theory that I cover extensively in my book.
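The relationship-based version of that analysis boils down to expected income per opponent roll. The numbers below are placeholders chosen to show the shape of the calculation, not real Monopoly figures:

```python
def rolls_to_recoup(landing_prob, rent, total_cost):
    """Average opponent rolls before a developed property pays for
    itself: each roll yields landing_prob * rent in expected income,
    so cost / (landing_prob * rent) rolls reach break-even."""
    return total_cost / (landing_prob * rent)

# Cheaper, frequently-landed-on properties can out-earn the glamour
# squares on a per-dollar-invested basis.
```

Comparing this quantity across property groups is exactly the kind of component-by-component relationship that a Monte Carlo run hides.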

Anyway, many of us think of game balance in the realm of RTS, TBS, or RPG games. However, even games where every unit is the same (e.g. checkers) can be an exercise in mathematical balancing. How much weight do I put on a given option? How important is it? Especially compared to other types of factors that might not be easily comparable? Again, this is something that I address in great detail in my book. It has always been something that is of interest to me and, thankfully, I seem to have a knack for it.

Go buy the book and you can have a knack for it too! :-)

Look Inside "Behavioral Mathematics"

Saturday, March 14th, 2009


Well, I guess my book has finally arrived on Amazon’s loading dock. They have managed to scan it so that it shows the “look inside” logo and stuff. So those of you who would like to peruse the table of contents, the index and the first 5 or 6 pages may do so. Of course, if you really want to have some fun, you can do the whole “surprise me” bit. Keep at it long enough and you could read the entire book… although it might not make quite as much sense all jumbled up like that!

I also like that you can now see the cover art much better. (I did the cover artwork myself.) Incidentally, the faces on the blocks are, from left to right, Blaise Pascal, John von Neumann, and Jeremy Bentham… all of whom are referenced in the book. A big thanks goes out to my colleague, Richard Evans, for his review blurb that appears on the front cover as well. On the back cover is a blurb by Alex Champandard of AIGameDev.com who was nice enough to provide a review as well.

All in all, I’m quite pleased with how the sales have been going. I’ve been averaging anywhere between 5th and 15th on the “Hot New Releases of Game Programming” list and have actually been in the top 5 a few times in the past 6 weeks. I’ve also made it as high as #2 in game programming books overall — although most of the time I’m in the middle of that very large pack. (e.g. 34th right now)

I’m looking forward to meeting many of you at the Charles River Media booth at GDC next week. (Remember, I charge 25 cents for autographs!)

"Behavioral Mathematics" Sales Rank Jump!

Thursday, February 26th, 2009

Alex Champandard over at AIGameDev.com posted a preview of my book last night. He had been waiting for Amazon to get the cover displayed before he did so. Well, AIGameDev has about 3600+ subscribers so it is understandable that a write-up would affect my Amazon sales rank. However, I was not expecting this when I woke up this morning!

As of this hour (9:30AM CST), I jumped up to 16,384 in the overall books category. When you figure that my previous high was 74,604 (after Alex posted a small blurb in his weekly link digest) and that most of my time is spent between 200,000 and 600,000, being around 16k doesn’t suck at all!

More importantly, I jumped up to #14 in Game Programming. I had only appeared on that list (100 books) 4 times in the past 2 weeks, no higher than 57th, and usually only for a few hours. Maybe this time I have some staying power.

I am also #2 in the “Hot New Releases of Game Programming” list. I had been #3 or #4 a few times, but this is a pretty fluid list. Again, however, it will be nice to see how high this climbs.

I figure the next few days should look pretty good as Alex’s post is only about 12 hours old. And it probably takes a while for 3600 people to get around to their RSS feeds. Either way, I want to thank Alex for his support and the shout-out on the book.

"Behavioral Mathematics" Cover Finally Uploaded!

Wednesday, February 25th, 2009

Well, the cover for my book “Behavioral Mathematics for Game AI” is finally uploaded to Amazon (and I assume other retailers as well). It was odd having that mysterious question mark there for so long. But I suppose when the book is on sale before you have even finished writing it, that’s the price you pay. Anyway, it’s nice to have it show up properly on the page, the best-sellers lists, and the Amazon ads.

Speaking of rankings lists, I have been floating anywhere between 3 and 20 on the “Hot new releases in ‘Game Programming’” list. I’ve even made a few appearances on the top 100 Game Programming books in the past few weeks. Once, I was as high as 57. It doesn’t last long, however. They update those sales figures and rankings every hour so it is very susceptible to fluctuation. I figure that I will do a little better now that I have a pretty cover picture on the sites. (I guess people would be skeptical of a book with no cover.)

For those of you who bounce out to the site, the book description and the bio are still not updated exactly as they are supposed to be. From what I understand, they will be changed in the next week or so. They have been pushed out to Amazon, but it takes a while for those sorts of changes to be updated. *shrug*

I do want to give a shout out to my colleague, Brian Schwab, who just released his second edition to AI Game Engine Programming in the past few months. I’m looking forward to picking that up soon. He deserves props for being the only new AI book ahead of me (for now!).

Anyway, I need to finish getting the downloads section ready for the book. There will be all sorts of nifty stuff in there. I hope you all find it valuable!

Content © 2002-2010 by Intrinsic Algorithm L.L.C.
