IA on AI


Posts Tagged ‘A-Life’

What Real Wolves Can Teach Us about Our AI

Thursday, October 27th, 2011

An article was recently brought to my attention. The version I read first was Wolves May Not Need to be Smart to Hunt in Packs from Discover Magazine, although it seems to have originated at New Scientist. Both versions cite a couple of other papers via links in their respective articles, but you can get the gist of the findings from the article text alone.

The point is, the researchers have discovered that the complex(-looking) pack hunting behaviors of wolves are actually not as complex and coordinated as we thought. With just a couple of very simple autonomous rules, they have duplicated this style of attack behavior in simulation. Specifically:

Using a computer model, researchers had each virtual “wolf” follow two rules: (1) move towards the prey until a certain distance is reached, and (2) when other wolves are close to the prey, move away from them. These rules cause the pack members to behave in a way that resembles real wolves, circling up around the animal, and when the prey tries to make a break for it, one wolf sometimes circles around and sets up an ambush, no communication required.
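For the curious, here is a minimal sketch of what those two rules might look like in code. The class, the distance thresholds, and the update loop are my own illustration, not the researchers' actual model.

    import math
    import random

    class Wolf:
        def __init__(self, x, y):
            self.x, self.y = x, y

        def step(self, prey, pack, attack_dist=2.0, crowd_dist=4.0, speed=0.5):
            # Rule 1: move toward the prey until a minimum distance is reached.
            dx, dy = prey[0] - self.x, prey[1] - self.y
            dist = math.hypot(dx, dy) or 1e-6
            if dist > attack_dist:
                self.x += speed * dx / dist
                self.y += speed * dy / dist
            # Rule 2: when other wolves are close to the prey, move away from them.
            for other in pack:
                if other is self:
                    continue
                if math.hypot(other.x - prey[0], other.y - prey[1]) < crowd_dist:
                    ox, oy = self.x - other.x, self.y - other.y
                    od = math.hypot(ox, oy) or 1e-6
                    self.x += speed * ox / od
                    self.y += speed * oy / od

    # Tiny driver: the circling and "ambush" positioning fall out of nothing but these rules.
    prey = (0.0, 0.0)
    pack = [Wolf(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(5)]
    for _ in range(100):
        for wolf in pack:
            wolf.step(prey, pack)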

The comment that brought it to my attention was that biologists had "discovered" something that AI programmers have known about for decades: flocking. Going back to Craig Reynolds' seminal Boids research in the 1980s, we as AI programmers have known that simple rules can not only generate the appearance of complex behavior, but that much of the complex behavior that exists in the natural world is actually the result of the same "simple rule" model. This holds even down to the cellular level in the human body; the human immune system, for example, is driven by exactly this sort of autonomous cellular behavior.

The key takeaway from this “revelation” about the wolves is not so much that wolves are not as clever as we thought, but rather that there is now legitimacy to using simpler AI techniques to generate emergent behavior. We aren’t “cheating” or cutting corners by using a simple rule-based flocking-like system to do our animal AI… we are, indeed, actually replicating what those animals are doing in the first place.

We could likely get far more mileage out of these techniques in the game space were it not for one major block: the trepidation that many developers feel about emergent behavior. For designers in particular, emergent behavior stemming from autonomous agents means giving up a level of authorial control. While authorial control is necessary and desirable in some aspects of game design, there are plenty of places where it is not. By swearing off emergent AI techniques, we may be unnecessarily limiting ourselves and denying our characters, and indeed our game worlds, a level of organic depth.

Incidentally, emergent AI is not limited to the simple flocking-style rule-based systems that we are familiar with and that are discussed with regard to the wolves. Full-blown utility-based systems such as those I talk about in my book are just an extension of this. The point is, we aren't specifically scripting the behavior but rather defining meanings and relationships; the behavior naturally falls out of those rules. The Sims franchise is known for this, and as a result many people are simply fascinated to sit back and watch things unfold without intervention. The characters not only look like they are "doing their own thing" but also look like they are operating together as a community… just like the wolves acting on their own while working as part of a team.
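To make the contrast with scripting concrete, here is a bare-bones illustration of "defining meanings and relationships": score a handful of possible actions against the agent's current needs and simply do whichever scores highest. The needs, actions, and weights here are invented for the example and not taken from any particular game.

    # Needs run from 0 (satisfied) to 1 (desperate); each action scores itself off those needs.
    needs = {"hunger": 0.8, "energy": 0.3, "social": 0.6}

    actions = {
        "eat":    lambda n: n["hunger"],
        "sleep":  lambda n: n["energy"] * 0.9,
        "chat":   lambda n: n["social"] * 0.7,
        "wander": lambda n: 0.2,               # low constant fallback so someone always "wins"
    }

    best = max(actions, key=lambda name: actions[name](needs))
    print(best)   # "eat" with these numbers; nudge the needs and the behavior shifts on its own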

So take heart, my AI programmer friends and colleagues. Academic biologists may only now be getting the idea — but we’ve been heading down the right track for quite some time now. We just need to feel better about doing it!

Far Cry 2 – a little deeper look

Monday, March 24th, 2008

Back in January, I posted a snippet of a review regarding Ubisoft's claim that the AI in Far Cry 2 was going to be seriously cutting edge. Well, apparently Far Cry 2 was a hit at the recent Play.com LIVE event at Wembley Stadium. According to this review at 411mania.com, the AI was impressive. Of course, many things can be made to look impressive in a tightly controlled demo, but I would like to at least give them the benefit of the doubt. Here are some comments from the review:

Different factions and gangs are spread across the 50 square km environment, and it’s up to you who you befriend and who you turn into mortal enemies. Walking around the wasteland area, the guards patrolling the area didn’t seem very phased by the player’s presence, carrying on with their everyday duties, and it was at this point that the specifics of the AI became obvious. It’s evidently a very sophisticated system, as we were assured that the AI is totally non-scripted. Redding joked that the enemy AI is like “The Sims with guns,” and it was clear where he was coming from. In the small preview we were shown, guards stopped to chat amongst themselves, went off for smoke breaks, and generally busied themselves about the area. They didn’t look too happy when the player tried to steal one of their cars though! One guard in plain sight of the truck shouted out, and his comrades came running, forcing the player to hightail it out of there. Interestingly, the vehicles in the game have a detailed damage system, and you’ll have to physically get out and repair them if they take too many hits. In the demo we saw, the player tried to hijack an abandoned vehicle, which was plainly kaput, so the player had to open the bonnet and use a wrench to fix various parts. We couldn’t see LJ physically controlling this part, so I don’t know exactly how interactive this portion will be, but it’s a novel idea.

Just observing a few things here. It sounds like they are using a similar sort of A-life approach to the one used in S.T.A.L.K.E.R. Although much maligned for some poor behavior, Crysis also had a lot of the very engaging "idle behaviors" that are mentioned here. They also mention The Sims, and I'm sure that's a nod to the "smart objects" model that was pioneered to such great effect by that franchise.

The idea of multiple factions also lends a lot of potential depth to the character behaviors. It will be interesting to see how this is handled. One question that I would have is how the alliance system is implemented. That is, if you make an enemy of one person in a faction, do they now ALL hate you immediately? If you quietly kill one guy, do they all know or can you continue to deal with them all happy-smiley-like?
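Just to make the question concrete, here is one entirely hypothetical way such an alliance system could be wired up: reputation is tracked per faction, and an offense only propagates to the whole faction if somebody actually witnessed it. None of this is based on anything Ubisoft has said; it's purely a sketch.

    class Faction:
        def __init__(self, name):
            self.name = name
            self.reputation = 0          # negative = hostile toward the player

    class NPC:
        def __init__(self, faction):
            self.faction = faction
            self.alive = True

    def kill(victim, witnesses):
        # The player kills an NPC; the faction only turns hostile if the act was witnessed.
        victim.alive = False
        if any(w.alive and w.faction is victim.faction for w in witnesses):
            victim.faction.reputation -= 10   # the whole faction now knows
        # Otherwise the faction stays blissfully "happy-smiley" toward the player.

    bandits = Faction("bandits")
    victim, buddy = NPC(bandits), NPC(bandits)
    kill(victim, witnesses=[buddy])
    print(bandits.reputation)   # -10: the quiet approach failed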

Going back to the matter of the AI, the intense gunfight around the straw huts revealed some interesting snippets about how enemies react to being shot. Location-specific damage is nothing new, but Far Cry 2 takes it one step further, as the goals and objectives of enemy AI may change according to where they’ve been hit. Shoot them in the foot, and sure, they’ll hop around in pain, but they’ll still be in good enough condition to keep plugging you with hot lead. But we were shown a scenario whereby an enemy had taken a bullet in the upper thigh/groin region and was obviously in a bad condition. His primary objective therefore changed to finding safety, and he could be seen limping off to hide in a building. Naturally, the player finished him off for good measure, but it’s refreshing to see enemies that don’t act like mere moving targets, rooted to the spot until either you die or they do. FPS fans often have to deal with this kind of AI logic, so it’s good to see enemies acting more like actual humans with real thought processes and the like.

It looks like they have not only hooked the animations up to the agent damage models, but also made a point of having the behaviors change as a result. I have seen a lot of location-damage animations lately, and occasionally we see behaviors change as a result of being under fire, but it is nice to see that people are trying to blend the two together. It is perfectly natural to expect an injured warrior to go off and try to lick his wounds so he can "fight another day". How does this affect the overall gameplay, however? Will they give up if injured? Are they still a factor, or do you still need to clean up all the riff-raff that is hiding around? That makes for an interesting dynamic if you now have to root out all the cowering wounded.
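For what it's worth, here is a tiny sketch of how hit location might be allowed to re-prioritize an agent's goals. The thresholds and goal names are mine and purely illustrative; I have no idea how Ubisoft actually implemented it.

    from dataclasses import dataclass

    @dataclass
    class Agent:
        health: float         # 0.0 (dead) to 1.0 (healthy)
        hit_location: str     # last body part hit, "" if unharmed

    def choose_goal(agent):
        # The injury feeds into goal selection instead of only driving a flinch animation.
        if agent.health < 0.3 or agent.hit_location in ("thigh", "groin"):
            return "find_cover_and_hide"   # limp off to "fight another day"
        if agent.hit_location == "foot":
            return "keep_fighting"         # hopping around in pain, but still shooting
        return "attack_player"

    print(choose_goal(Agent(health=0.25, hit_location="thigh")))   # find_cover_and_hide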

Whether or not Far Cry 2 will be plagued by the same overachievement issues that its predecessor and its cousin (Crysis) suffered from remains to be seen. Prima facie, they both looked pretty good at first, only to be exposed later on. I, for one, would like to see these boundary-pushing experiments work. The AI world needs some successes with new techniques in order to open up people's minds to even more experimentation in the future.

Good luck, Ubisoft.

AI researchers think Rascals can pass Turing test

Sunday, March 16th, 2008

According to this article at EETimes.com, there is an AI research group at Rensselaer Polytechnic Institute that believes it is in the process of creating an AI entity that will finally be able to pass the legendary Turing Test. Their target date is this fall. They need the world's fastest supercomputer, IBM's Blue Gene, in order to get the necessary real-time results, however. Interestingly, they are partnering with a multimedia group that is designing a holodeck… yes, as in Star Trek.

“We are building a knowledge base that corresponds to all of the relevant background for our synthetic character–where he went to school, what his family is like, and so on,” said Selmer Bringsjord, head of Rensselaer’s Cognitive Science Department and leader of the research project.

In order to come up with the complete personality and history, they are taking a novel approach: one of Bringsjord's graduate students is providing his own life as the model. They are in the process of putting all of that data into the knowledge base. Facts, figures, family trivia, and even the student's personal beliefs will make up the synthetic character.

“This synthetic person based on our mathematical theory will carry on a conversation about himself, including his own mental states and the mental states of others,” said Bringsjord.

However, before you game AI programmers get all excited about this as some sort of potential middleware product…

“Our artificial intelligence algorithm is now making this possible, but we need a supercomputer to get real-time performance.”

It looks like they are doing more than just facts and figures on the project, however. They are going to great lengths to add psychology and even a form of empathy (my word, not theirs).

The key to the realism of RPI’s synthetic characters, according to Bringsjord, is that RPI is modeling the mental states of others–in particular, one’s beliefs about others’ mental states. “Our synthetic characters have correlates of the mental states experienced by all humans,” said Bringsjord. “That’s how we plan to pass this limited version of the Turing test.”

The difference between this and standard, observable facts is "second-order beliefs". To model those, you have to be able to get outside of your own collection of perceptions, memories, and beliefs and into the mind of others. In a demo that they put together in Second Life (which I will not bother embedding here since it is unexplained and boring), they show that they have been working on first-order and second-order beliefs.

For example, if something changes after a person leaves the room, you observe the change but they don't. You must realize that the absent person has no knowledge of that change even though you do; their belief is that the world is actually unchanged. You have to be able to look at the world through their eyes… not just in the present tense, but by replaying the recent history and knowing that they could have no knowledge of the change that occurred.
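To make the distinction concrete, here is a toy version of that room scenario. The structure (a per-agent snapshot of the world that only updates while the agent is present to observe) is my own illustration of the general idea, not RPI's system.

    class Observer:
        def __init__(self, name):
            self.name = name
            self.present = True
            self.belief = {}       # first-order: what I believe about the world
            self.model = {}        # second-order: what I believe *others* believe

        def observe(self, world, others):
            if not self.present:
                return
            self.belief = dict(world)
            # I also update my model of anyone I can see: they saw what I saw.
            for other in others:
                if other.present:
                    self.model[other.name] = dict(world)

    world = {"ball": "box"}
    alice, bob = Observer("alice"), Observer("bob")
    alice.observe(world, [bob])
    bob.observe(world, [alice])

    bob.present = False            # Bob leaves the room...
    world["ball"] = "basket"       # ...and the ball is moved while he is gone
    alice.observe(world, [bob])

    print(alice.belief["ball"])          # basket (what Alice believes)
    print(alice.model["bob"]["ball"])    # box (what Alice believes Bob believes)
    print(bob.belief["ball"])            # box (Bob really does still believe this)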

Hell… as Soren Johnson pointed out in his GDC lecture, we can’t even afford to do individual “fog of war” for 8 or 10 computer-controlled enemies. At least not on a typical PC. Imagine trying to keep all that activity straight in a running buffer of some sort… for everything in the environment. *sigh*

I keep having to go back to what Ray Kurzweil was predicting in his GDC keynote… that there is still exponential growth happening in technological capability. Given his figures, putting this sort of depth into a computer game will definitely happen in my lifetime, and perhaps even in my career. Now that will be scary.

AI on Second Life

Tuesday, March 11th, 2008

Well, I knew it would happen soon enough. People have started using Second Life as an AI platform. There is an article on KurzweilAI.net about a project by Rensselaer Polytechnic Institute researchers doing exactly that.

Rensselaer Polytechnic Institute researchers unveiled “Eddie,” a 4-year-old virtual child in Second Life who can reason about his own beliefs to draw conclusions in a manner that matches human children his age.

I thought the following bit was amusing, however.

This research is supported by IBM and other outside sponsors, and the team hopes to engineer a version of the Star Trek holodeck.

If you are going to be shooting for a holodeck experience, you have a long way to go from Second Life.

Here’s a link to the original article from PhysOrg.com. There’s far more information there on the nuts and bolts of it.

Peter Molyneux’s Next Game?

Monday, March 10th, 2008

OK… anyone who has ever followed Peter Molyneux and Lionhead Studios knows that he is the master of hype. He can drag a significant portion of the gaming community along behind him on his words alone. This is especially relevant to AI programmers, since it always seems like Peter is on some sort of cutting edge. Well, here's an interview(-let) where Peter talks about some mysterious project (based on the Dimitri Project) that will apparently blow the doors off of anything we have seen from him so far. The only info he gave on it in a prior interview was that you would be able to "replay your life".

Considering that he is still incubating the egg that is Fable 2, I’m not entirely sure what to think of this. (To be fair, the previews of Fable 2 that I saw at GDC were kinda cool. More on that when I can get around to typing up my notes.)

Here’s an excerpt from the interview:

Since Black & White, we’ve been thinking a lot about AI, Lionhead was founded with that thought of AI in mind. In terms of the core or the theory of the AI, we’ve moved from Black&White onto a project called Dimitri, which I’ve been tantalizing you about for a long time. And that team kept on researching. Dimitri was always an experimental thing, which is why I never showed it.

And then it moved from that experiment to a moment in time that happened six months ago when a discovery was made, and this discovery has been so exciting that it has lead to Lionhead focusing on it and sculpting a game around that. I think that discovery is so significant… This discovery has lead us to start a game and that game will be on the front cover of Nature magazines and Science magazines.

What the heck does that mean? The blog author/interviewer points out that we are likely talking about some sort of artificial life simulation. That makes for interesting conjecture since Spore is (allegedly) right around the corner.

From another interview:

If you think back to the one thing about Black & White that was most fascinating, you’d have to say it was that creature that learned behavior and seemed, for a certain glimpse, to be alive. Imagine if you could take that and multiply it by a billion…

Oh well. I know that I will be following whatever it is that they do there. I’m sure many others will be warily but curiously watching what comes out of the king lion’s mouth (so to speak).

A-Life, Emergent AI and S.T.A.L.K.E.R

Wednesday, March 5th, 2008

AIGameDev.com has a great, in-depth interview with Dmitriy Iassenev, the AI mastermind behind S.T.A.L.K.E.R. The game has a very extensive A-life system that lends it a lot of depth. In Dmitriy's words:

The gist of the A-life is that the characters in the game live their own lives and exist all the time, not only when they are in the player’s field of view. It eventually runs counter to the customary optimization processes used in games development (why perform operations invisible to the player?). Thus, such a scheme is reasonable to be used only when you know exactly what you want to have in the end. We had the game designers’ requirements to have the characters that could not only live inside a certain level, but move between the levels, memorizing the information they obtained during their existence. Consequently, we have decided that each character should come with only one logical essence regardless of the level he is at; whereas we could try to implement that with various tricks involved.
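For what it's worth, a common way to make that affordable (and I am only guessing that GSC's implementation is in the same spirit) is to keep a single persistent record for each character and simulate it at a coarse level of detail whenever it is away from the player. A rough sketch:

    from dataclasses import dataclass, field

    @dataclass
    class Character:
        level: str                 # which map/level the character is currently on
        destination: str           # where his current goal is taking him
        speed: float = 0.1
        travel_progress: float = 0.0
        memory: list = field(default_factory=list)   # what he learned while "living"

    def update_full_ai(character, dt):
        pass   # full perception/combat/animation update; only runs near the player

    def update_character(character, player_level, dt):
        # One persistent logical entity per character, regardless of which level he is on.
        if character.level == player_level:
            update_full_ai(character, dt)
        else:
            # Coarse offline simulation: just advance his goals and move him between levels.
            character.travel_progress += dt * character.speed
            if character.travel_progress >= 1.0:
                character.level, character.travel_progress = character.destination, 0.0
                character.memory.append(("arrived", character.level))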

Read more of this very detailed interview over at AIGameDev.com – the place for the killer AI stuff on killer games!

Parasites and Tri-life

Monday, December 10th, 2007

I was Stumbling through AI sites on the web when I ran across this little creation called "Parasites" by Michael Battle. It's a small app built around a very simple a-life mechanism. It's almost reminiscent of John Conway's "Game of Life" in that a collection of very simple rules turns into seemingly complex-looking behavior. (In fact, see below for another entry from the same site that is based on Conway's Life.)

The parasites follow these rules (a rough code sketch of them appears after the list):

  • Continue forward until you discover another parasite of a different colour and of equal or lesser size.
  • Once a suitable target parasite is found, follow it.
  • If you manage to get close enough, take a bite out of it.
  • When eating others, grow in size.
  • If your size gets too big, dissolve into two child parasites of the same colour.
  • If someone is eating you, shrink in size.
  • If your size gets too small, die.
  • The bigger you are, the faster you can move.
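Here is a rough sketch of how those rules might look in code. The class layout, thresholds, and constants are all mine; Battle's actual implementation surely differs.

    import math
    import random

    class Parasite:
        def __init__(self, x, y, colour, size=5.0):
            self.x, self.y, self.colour, self.size = x, y, colour, size
            self.heading = random.uniform(0, 2 * math.pi)

        def step(self, swarm):
            speed = 0.1 * self.size   # the bigger you are, the faster you move
            # Look for another parasite of a different colour and equal or lesser size.
            prey = [p for p in swarm if p is not self
                    and p.colour != self.colour and p.size <= self.size]
            if prey:
                target = min(prey, key=lambda p: math.hypot(p.x - self.x, p.y - self.y))
                self.heading = math.atan2(target.y - self.y, target.x - self.x)
                # Close enough?  Take a bite: I grow, it shrinks.
                if math.hypot(target.x - self.x, target.y - self.y) < self.size:
                    self.size += 0.1
                    target.size -= 0.1
            self.x += speed * math.cos(self.heading)
            self.y += speed * math.sin(self.heading)

    def resolve(swarm):
        # Too small: die.  Too big: dissolve into two children of the same colour.
        survivors = []
        for p in swarm:
            if p.size < 1.0:
                continue
            if p.size > 10.0:
                survivors.append(Parasite(p.x - 1, p.y, p.colour, p.size / 2))
                survivors.append(Parasite(p.x + 1, p.y, p.colour, p.size / 2))
            else:
                survivors.append(p)
        return survivors

    swarm = [Parasite(random.uniform(0, 100), random.uniform(0, 100),
                      random.choice(["red", "green", "blue"])) for _ in range(20)]
    for _ in range(200):
        for p in swarm:
            p.step(swarm)
        swarm = resolve(swarm)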

(I recommend watching it a few times before reading on. Remember to click on the app to reset it; otherwise you will end up with parasites that are all the same.)

While I like looking at it and it is mesmerizing for a while, there are some things I would love to see added to it. Most of them come from Craig Reynolds' work with steering behaviors.

For example, I would like to see the parasites avoid each other rather than overlap. Also, while the parasites will turn toward and follow potential food, there doesn't seem to be any mechanism for smaller parasites to run away from predators. Lastly, it would be nice to see some group behaviors; perhaps parasites of the same colour could tend to cluster together using flocking algorithms.
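For reference, those missing pieces map directly onto a couple of Reynolds' classic steering behaviors. A minimal sketch (my own, not Battle's code):

    import math

    def seek(pos, target, max_speed):
        # Steer toward a target (roughly what the parasites already do).
        dx, dy = target[0] - pos[0], target[1] - pos[1]
        d = math.hypot(dx, dy) or 1e-6
        return (max_speed * dx / d, max_speed * dy / d)

    def flee(pos, threat, max_speed):
        # The missing behavior: small parasites steering away from bigger predators.
        vx, vy = seek(pos, threat, max_speed)
        return (-vx, -vy)

    def separate(pos, neighbours, radius, max_speed):
        # Push away from any neighbour inside the radius so bodies stop overlapping.
        sx, sy = 0.0, 0.0
        for n in neighbours:
            dx, dy = pos[0] - n[0], pos[1] - n[1]
            d = math.hypot(dx, dy) or 1e-6
            if d < radius:
                sx += dx / d
                sy += dy / d
        mag = math.hypot(sx, sy)
        return (0.0, 0.0) if mag == 0 else (max_speed * sx / mag, max_speed * sy / mag)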

I like what Michael has done but it would be nice to see more out of this very basic simulation.

Tri-Life

On a slightly different note, Battle has done another project that IS very much like Conway's Life. This one, called "Tri-life", is a little less satisfying. In it, he uses the RGB color components of the neighboring triangles in a sort of "Rock-Paper-Scissors" battle. The winner gets to propagate its color to the loser. The full rules are:

To get your head around the Combat phase, you’ll first need to know the overarching rule that Red conquers Green, Green conquers Blue and Blue conquers Red.

… Red > Green > Blue > Red …

The pseudocode for the process looks like this (a rough translation into working code follows the list):

  • Figure out if I'm predominantly Red, Green or Blue by looking at my hex components.
  • Find the average colour of my three adjacent triangles and find their predominant colour.
  • If we both have the same predominant colour, set my colour to the average of mine and theirs.
  • If I win the fight, set my most dominant colour component to max (255) and my least dominant colour component to zero.
  • If I lose the fight, set my least dominant colour component to max and my most dominant to max.
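Here is a rough translation of that pseudocode into working code. It is my own reading, and I have taken the quoted rules at face value, including the losing rule exactly as written above.

    def predominant(colour):
        # colour is an (r, g, b) tuple; return the index of the strongest channel.
        return max(range(3), key=lambda i: colour[i])

    def beats(a, b):
        # Red (0) conquers Green (1), Green conquers Blue (2), Blue conquers Red.
        return (a + 1) % 3 == b

    def combat(me, neighbours):
        # Average the three adjacent triangles and compare predominant channels.
        avg = tuple(sum(n[i] for n in neighbours) / len(neighbours) for i in range(3))
        mine, theirs = predominant(me), predominant(avg)
        if mine == theirs:
            # Same predominant colour: blend toward the neighbourhood average.
            return tuple((me[i] + avg[i]) / 2 for i in range(3))
        new, weakest = list(me), min(range(3), key=lambda i: me[i])
        if beats(mine, theirs):
            new[mine], new[weakest] = 255, 0      # win: saturate my colour
        else:
            new[weakest], new[mine] = 255, 255    # lose: as quoted above, both go to max
        return tuple(new)

    print(combat((200, 40, 10), [(10, 220, 30)] * 3))   # a red cell beats its green neighbours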

I like the attempt to add more than just "alive or dead" to the possibility space, and the way it almost incorporates a sort of genetic component. I do find it fascinating to watch the relatively stabilized blobs of color with all the fighting going on at the edges. It really reminds me of a sort of influence map.

All in all, good work and fun stuff to watch!
