IA on AI

Posts Tagged ‘Craig Reynolds’

What Real Wolves Can Teach Us about Our AI

Thursday, October 27th, 2011

An article was recently brought to my attention: Wolves May Not Need to be Smart to Hunt in Packs, from Discover Magazine (though it seems to have originated at New Scientist). Both versions cite a couple of other papers via links in their respective articles, but you can get the gist of what they are talking about from the article text itself.

The point is, researchers have discovered that the complex(-looking) pack hunting behaviors of wolves are actually not as complex and coordinated as we thought. With just a few very simple autonomous rules, they have duplicated this style of attack behavior in simulations. Specifically,

Using a computer model, researchers had each virtual “wolf” follow two rules: (1) move towards the prey until a certain distance is reached, and (2) when other wolves are close to the prey, move away from them. These rules cause the pack members to behave in a way that resembles real wolves, circling up around the animal, and when the prey tries to make a break for it, one wolf sometimes circles around and sets up an ambush, no communication required.
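The two rules above can be sketched in just a few lines of code. This is a minimal illustration, not the researchers' actual model; the attraction and repulsion distances and the speed are made-up values.

```python
import math

# Illustrative constants -- the paper's actual parameters are not given here.
ATTRACT_DIST = 5.0   # rule 1: stop closing in once this close to the prey
REPEL_DIST = 3.0     # rule 2: back away from packmates nearer than this

def step_wolf(wolf, prey, pack, speed=0.1):
    """Advance one wolf by the two rules; returns its new (x, y) position."""
    wx, wy = wolf
    px, py = prey
    dx, dy = px - wx, py - wy
    dist = math.hypot(dx, dy)

    # Rule 1: move toward the prey until a certain distance is reached.
    if dist > ATTRACT_DIST:
        wx += speed * dx / dist
        wy += speed * dy / dist

    # Rule 2: when other wolves are close to the prey, move away from them.
    for ox, oy in pack:
        if (ox, oy) == wolf:
            continue
        if math.hypot(ox - px, oy - py) < ATTRACT_DIST + REPEL_DIST:
            away_x, away_y = wx - ox, wy - oy
            d = math.hypot(away_x, away_y)
            if 0 < d < REPEL_DIST:
                wx += speed * away_x / d
                wy += speed * away_y / d
    return (wx, wy)
```

Run that in a loop over every wolf each tick and the circling and "ambush" patterns fall out on their own, with no wolf ever talking to another.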

The comment that brought it to my attention was that biologists “discover” something that AI programmers have known for decades — the idea of flocking. Going back to Craig Reynolds’ seminal Boids research from the 1980s, we as AI programmers have known that simple rules can not only generate the look of complex behavior but that much of the complex behavior that exists in the world is actually the result of the same “simple rule” model. Even down to the cellular level in the human body — namely the human immune system — autonomous cellular behavior is driven by this mentality.
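For reference, the three classic Boids rules (separation, alignment, cohesion) boil down to something like the sketch below. The radii and weights are illustrative, not Reynolds' published values.

```python
import math

def boid_steering(boid, neighbors):
    """Combine separation, alignment, and cohesion into one steering vector.

    Each boid is a dict with 'pos' and 'vel' as (x, y) tuples.
    """
    if not neighbors:
        return (0.0, 0.0)
    n = len(neighbors)
    # Cohesion: steer toward the average position of nearby flockmates.
    cx = sum(b["pos"][0] for b in neighbors) / n - boid["pos"][0]
    cy = sum(b["pos"][1] for b in neighbors) / n - boid["pos"][1]
    # Alignment: steer toward the average heading of nearby flockmates.
    ax = sum(b["vel"][0] for b in neighbors) / n - boid["vel"][0]
    ay = sum(b["vel"][1] for b in neighbors) / n - boid["vel"][1]
    # Separation: steer away from flockmates that are crowding in.
    sx = sy = 0.0
    for b in neighbors:
        dx = boid["pos"][0] - b["pos"][0]
        dy = boid["pos"][1] - b["pos"][1]
        d = math.hypot(dx, dy)
        if 0 < d < 2.0:          # crowding radius (illustrative)
            sx += dx / d
            sy += dy / d
    # Weighted sum; the weights are tuning knobs, not canonical values.
    return (0.01 * cx + 0.1 * ax + 0.5 * sx,
            0.01 * cy + 0.1 * ay + 0.5 * sy)
```

Three rules, no leader, no communication — and out comes a flock.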

The key takeaway from this “revelation” about the wolves is not so much that wolves are not as clever as we thought, but rather that there is now legitimacy to using simpler AI techniques to generate emergent behavior. We aren’t “cheating” or cutting corners by using a simple rule-based flocking-like system to do our animal AI… we are, indeed, actually replicating what those animals are doing in the first place.

We could likely get far more mileage out of these techniques in the game space were it not for one major block — the trepidation that many developers feel about emergent behavior. For designers in particular, emergent behavior stemming from autonomous agents means giving up a level of authorial control. While authorial control is necessary and desirable in some aspects of game design, there are plenty of places where it is not. By swearing off emergent AI techniques, we may be unnecessarily limiting ourselves and denying our characters — and, indeed, our game world — a level of organic depth.

Incidentally, emergent AI is not simply limited to the simple flocking-style rule-based systems that we are familiar with and that are discussed with regards to the wolves. Full-blown utility-based systems such as those that I talk about in my book are just an extension of this. The point being, we aren’t specifically scripting the behavior but rather defining meanings and relationships. The behavior naturally falls out of those rules. The Sims franchise is known for this. As a result, many people are simply fascinated to sit back and watch things unfold without intervention. The characters not only look like they are “doing their own thing” but also look like they are operating together in a community… just like the wolves are acting on their own and working as part of a team.
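As a toy illustration of that "define meanings and relationships, let behavior fall out" idea, here is a bare-bones utility-based action picker. The needs, actions, and numbers are all hypothetical — this is the shape of the technique, not any shipping game's system.

```python
def choose_action(needs, actions):
    """Pick the action with the highest utility for the current needs.

    needs:   dict mapping need name -> urgency in 0..1
    actions: dict mapping action name -> {need: satisfaction amount}
    """
    def utility(effects):
        # Score an action by how well it addresses the urgent needs.
        return sum(needs.get(need, 0.0) * amount
                   for need, amount in effects.items())
    return max(actions, key=lambda name: utility(actions[name]))

# Hypothetical Sims-flavored example data:
sim_needs = {"hunger": 0.9, "energy": 0.2, "fun": 0.4}
sim_actions = {
    "eat":   {"hunger": 1.0},
    "sleep": {"energy": 1.0},
    "play":  {"fun": 0.8},
}
```

Nothing here scripts "go eat at 6 pm" — the agent eats because hunger is the most urgent need, and as needs drift over time the behavior shifts on its own.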

So take heart, my AI programmer friends and colleagues. Academic biologists may only now be getting the idea — but we’ve been heading down the right track for quite some time now. We just need to feel better about doing it!

Parasites and Tri-life

Monday, December 10th, 2007

I was Stumbling through AI sites on the web when I ran across this little creation called “Parasites” by Michael Battle. It’s an app demonstrating a very simple “a-life” mechanism. It’s almost reminiscent of John Conway’s “Game of Life” in that a collection of very simple rules turns into seemingly complex behavior. (In fact, see below for another entry from the same site that is based on Conway’s Life.)

The parasites follow these rules:

  • Continue forward until you discover another parasite of a different colour and of equal or lesser size.
  • Once a suitable target parasite is found, follow it.
  • If you manage to get close enough, take a bite out of it.
  • When eating others, grow in size.
  • If size gets too big, dissolve into two child parasites of the same colour.
  • If someone is eating you, shrink in size.
  • If size gets too small, die.
  • The bigger you are, the faster you can move.
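A rough sketch of those rules in code might look like the following. The thresholds and bite size are guesses of mine, not values taken from Battle's app, and I've left out the speed-scales-with-size rule for brevity.

```python
import math

# Illustrative thresholds -- not the actual values from the app.
BITE_RANGE = 1.0     # close enough to take a bite
SPLIT_SIZE = 10.0    # too big: dissolve into two children
DEATH_SIZE = 1.0     # too small: die

def _dist(a, b):
    return math.hypot(a["pos"][0] - b["pos"][0], a["pos"][1] - b["pos"][1])

def update_parasite(me, others, bite=0.5):
    """Apply one tick of the rules to a parasite.

    Parasites are dicts with 'pos', 'size', and 'colour'. Returns a list
    of resulting parasites: empty if it died, two if it split, else [me].
    """
    # Chase the nearest different-coloured parasite of equal or lesser
    # size; if close enough, take a bite (it shrinks, I grow).
    targets = [o for o in others
               if o["colour"] != me["colour"] and o["size"] <= me["size"]]
    if targets:
        target = min(targets, key=lambda o: _dist(me, o))
        if _dist(me, target) < BITE_RANGE:
            target["size"] -= bite
            me["size"] += bite
    # Too small: die.
    if me["size"] < DEATH_SIZE:
        return []
    # Too big: dissolve into two child parasites of the same colour.
    if me["size"] > SPLIT_SIZE:
        half = dict(me, size=me["size"] / 2)
        return [half, dict(half)]
    return [me]
```

Run that over every parasite each frame and you get the chasing, eating, splitting, and dying from the list above — again, each agent acting purely on its own.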

(I recommend looking at it a few times before continuing with reading this. Remember to click on the app to reset it; otherwise you will end up with parasites that are all the same.)

While I like looking at it and it is mesmerizing for a while, there are some things I would love to see added to it. Most of them come from Craig Reynolds’ work with steering behaviors.

For example, I would like to not see the parasites overlap but rather avoid each other. Also, while the parasites will turn toward and follow potential food, there doesn’t seem to be a mechanism in place to have smaller parasites run away from predators at all. Lastly, it would be nice to see some group behaviors – perhaps a certain color of parasites could tend to be in a group using flocking algorithms.
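The "run away from predators" piece is a textbook Reynolds flee behavior — a sketch of what I mean, with a made-up panic radius and max speed:

```python
import math

def flee(pos, vel, predator_pos, max_speed=2.0, panic_radius=5.0):
    """Return a steering force directing an agent away from a predator.

    Classic Reynolds flee: desired velocity points full-speed away from
    the threat, and the steering force is desired minus current velocity.
    Returns (0, 0) when the predator is outside the panic radius.
    """
    dx = pos[0] - predator_pos[0]
    dy = pos[1] - predator_pos[1]
    d = math.hypot(dx, dy)
    if d == 0 or d > panic_radius:
        return (0.0, 0.0)
    # Desired velocity: full speed directly away from the predator.
    desired = (max_speed * dx / d, max_speed * dy / d)
    return (desired[0] - vel[0], desired[1] - vel[1])
```

A small parasite would simply run this against any nearby bigger parasite of a different colour; separation (for the overlap problem) and flocking (for the grouping) slot into the same steering framework.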

I like what Michael has done but it would be nice to see more out of this very basic simulation.


On a slightly different note, Battle has done another project that IS very much like Conway’s Life. This one, called “Tri-life”, is a little less satisfying. In it, he uses the RGB color components of the neighboring triangles in a sort of “Rock-Paper-Scissors” battle. The winner gets to propagate its color to the loser. The full rules are:

To get your head around the Combat phase, you’ll first need to know the overarching rule that Red conquers Green, Green conquers Blue and Blue conquers Red.

… Red > Green > Blue > Red …

The pseudocode for the process looks like this:

  • Figure out if I’m predominantly Red, Green or Blue by looking at my hex components.
  • Find the average colour of my three adjacent Triangles and find their predominant colour.
  • If we both have the same predominant colour, set my colour to the average between me and them.
  • If I win the fight, set my most dominant colour component to max (255) and my least dominant colour component to zero.
  • If I lose the fight, set my least dominant colour component to max and my most dominant to zero.
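That pseudocode translates into Python along these lines. Colours are (r, g, b) tuples in 0..255; channel 0 beats 1, 1 beats 2, and 2 beats 0, mirroring Red > Green > Blue > Red. Note the losing branch assumes the most dominant channel goes to zero — the original rule text reads like a typo there — so treat this as my interpretation rather than Battle's exact code.

```python
BEATS = {0: 1, 1: 2, 2: 0}  # red beats green, green beats blue, blue beats red

def combat(me, neighbours):
    """Return my colour after one Combat phase against my neighbours."""
    # Average colour of the adjacent triangles.
    avg = tuple(sum(c[i] for c in neighbours) // len(neighbours)
                for i in range(3))
    order = sorted(range(3), key=lambda i: me[i])
    lo, hi = order[0], order[2]          # my least / most dominant channel
    theirs = max(range(3), key=lambda i: avg[i])
    if hi == theirs:
        # Same predominant colour: blend toward the neighbours' average.
        return tuple((me[i] + avg[i]) // 2 for i in range(3))
    new = list(me)
    if BEATS[hi] == theirs:
        new[hi], new[lo] = 255, 0        # win: saturate my dominant channel
    else:
        new[lo], new[hi] = 255, 0        # lose: the conquering colour takes over
    return tuple(new)
```

Because losing flips a triangle's dominant channel toward the conqueror, colour sweeps across the grid exactly the way the "propagate to the loser" rule describes.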

I like the attempt to add more than just “alive or dead” to the possibility space and almost incorporating a sort of genetic component. I do find it fascinating to watch the relatively stabilized blobs of color with all the fighting going on at the edges. It really reminds me of a sort of influence map.

All in all, good work and fun stuff to watch!