Fritz Heckel’s Reactive Teaming

Fritz Heckel, a PhD student in the Games + Learning Group at UNC Charlotte, posted a video (below) on the research he has been doing under the supervision of G. Michael Youngblood. He has been working on using subsumption architectures to create coordination among multiple game agents.

When the video first started, I was a bit confused: he seemed to be simply explaining an FSM. When the first character shared a state with the second one, however, I was a little more interested. Still, that wasn't necessarily the highlight of the video. As more characters were added, they divided the goal of looking for a single item amongst themselves by partitioning the search space.

This behavior certainly could be used in games… for example, with guards searching for the player. However, the same problem can be solved with other architectures. Even something as simple as influence mapping could handle it. In fact, Damián Isla's occupancy maps could be tweaked accordingly to allow for multiple agents in a very life-like way. I don't know what Fritz is using under the hood, but I have to wonder if it isn't more complicated than it needs to be.
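To make the comparison concrete, here is a minimal sketch of the occupancy-map idea in Python. It is not Isla's actual implementation, and every name in it is made up; the point is simply that several agents sharing one probability grid end up partitioning the search space for free:

    # Minimal sketch of the occupancy-map idea (hypothetical names throughout;
    # not Isla's actual implementation). A shared probability grid tracks where
    # the target might be; agents clear cells as they search them.

    GRID_W, GRID_H = 10, 10

    class OccupancyMap:
        def __init__(self):
            p = 1.0 / (GRID_W * GRID_H)
            self.prob = {(x, y): p for x in range(GRID_W) for y in range(GRID_H)}

        def clear(self, cell):
            # An agent looked at `cell` and saw nothing: zero it and renormalize.
            self.prob[cell] = 0.0
            total = sum(self.prob.values())
            if total > 0:
                for c in self.prob:
                    self.prob[c] /= total

        def best_cell_for(self, pos):
            # Head for the most probable cell, breaking ties by distance.
            return max(self.prob, key=lambda c: (self.prob[c],
                       -abs(c[0] - pos[0]) - abs(c[1] - pos[1])))

    # Three agents sharing one map naturally split up the search: every
    # cleared cell pushes the others toward still-unsearched areas.
    shared = OccupancyMap()
    agents = [(0, 0), (9, 9), (0, 9)]
    for step in range(5):
        for i, pos in enumerate(agents):
            target = shared.best_cell_for(pos)
            shared.clear(target)  # pretend the agent searched that cell
            agents[i] = target
            print(f"step {step}: agent {i} searches {target}")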

Obviously, his searching example was only just a simple one. He wasn’t setting out to design something that allowed people to share a searching goal, per se. He was creating an architecture for cooperation. This, too, has been done in a variety of ways. Notably, Jeff Orkin’s GOAP architecture from F.E.A.R. did a lot of squad coordination that was very robust. Many sports simulations do cooperation — but that tends to be more playbook-driven. Fritz seems to be doing it on the fly without any sort of pre-conceived plan or even pre-known methods by the eventual participants.

In a way, it seems that the goal itself is somewhat viral from one agent to the next. That is, one agent in effect explains what it needs the others to do and then parcels the work out accordingly. From a game standpoint, this seems like an unnecessary complication. Since most of the game agents would be built on the same codebase, they would already have the knowledge of how to do a task. At that point, it would simply be a matter of having one agent tell the other "I need this done," so that the appropriate behavior gets switched on. And now we're back to Orkin's cooperative GOAP system.
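As a rough sketch of that alternative (all names here are hypothetical), the message between agents can be as thin as a task identifier, because the behavior code already lives in every agent:

    # Hypothetical sketch of "I need this done": since the agents share a
    # codebase, the requester sends only a task name, not the behavior itself.

    class Agent:
        def __init__(self, name):
            self.name = name
            # Every agent ships with the same behavior library.
            self.behaviors = {
                "search_sector": lambda sector: print(f"{self.name} searches {sector}"),
                "guard_door": lambda door: print(f"{self.name} guards {door}"),
            }

        def request(self, other, task, arg):
            # "I need this done" -- switch on the behavior in the other agent.
            other.behaviors[task](arg)

    leader, grunt = Agent("leader"), Agent("grunt")
    leader.request(grunt, "search_sector", "warehouse east")  # grunt searches warehouse east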

On the whole, a subsumption architecture is an odd choice. Alex Champandard of AIGameDev pointed out via Twitter:

@fwph Who uses subsumption for games these days though? Did anyone use it in the past for that matter?

That's an interesting point. I have to wonder if, as is sometimes the case with academic research, this is a matter of picking a tool first and then inventing a problem to solve with it. To me, a subsumption architecture seems like the layered approach of an HFSM married with the modularity of a planner. In fact, there has been a lot of buzz in recent years about hierarchical planning anyway. What are the differences… or the similarities, for that matter?
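For reference, the layered idea itself is tiny. A toy version (nothing like Fritz's actual system, and far simpler than Brooks' original formulation) might look like this: higher-priority layers fire first and suppress everything below them.

    # Toy subsumption layering: check layers from highest priority down;
    # the first layer that produces an action suppresses ("subsumes") the rest.

    def avoid_danger(world):
        return "flee" if world.get("threat") else None

    def pursue_goal(world):
        return "search" if world.get("goal") else None

    def wander(world):
        return "wander"  # the lowest layer always has something to do

    LAYERS = [avoid_danger, pursue_goal, wander]  # highest priority first

    def act(world):
        for layer in LAYERS:
            action = layer(world)
            if action is not None:
                return action

    print(act({"goal": "find key"}))                   # -> search
    print(act({"goal": "find key", "threat": True}))   # -> flee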

Regardless, it is an interesting, if short, demo. If this is what he submitted to present at AIIDE this fall, I will be interested in seeing more of it.




5 Responses to “Fritz Heckel’s Reactive Teaming”

  1. Fritz Heckel says:

    Just to be clear, the video does play a little loose with technical details, primarily because it’s actually a submission to the AAAI video competition.

    By “explain” I mean that the transferring agent actually makes a copy of the behavior and passes the new object to the receiving agent (they’re all running in the same process; if they were running in different processes, I could pass just enough information to build the behavior, but it’s not necessary here). Every agent could just be tossed into the world with a complete specification, but I think we’ll get more interesting behavior by using a small number of fully specified agents and a lot of under-specified agents. I’m still playing with the idea.
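
    A toy version of that copy-and-hand-over step (with made-up names; the real system is richer) might look like:

        # Hypothetical sketch: the teaching agent deep-copies a behavior object
        # and hands it to an under-specified agent living in the same process.
        import copy

        class Behavior:
            def __init__(self, name, params):
                self.name, self.params = name, params
            def run(self, agent_name):
                print(f"{agent_name} runs {self.name} with {self.params}")

        class Agent:
            def __init__(self, name, behaviors=None):
                self.name = name
                self.behaviors = behaviors or {}  # under-specified agents start empty
            def teach(self, other, key):
                # Same process, so a copied object is enough; across processes
                # you would serialize just enough information to rebuild it.
                other.behaviors[key] = copy.deepcopy(self.behaviors[key])

        teacher = Agent("teacher", {"search": Behavior("search", {"area": "north"})})
        student = Agent("student")
        teacher.teach(student, "search")
        student.behaviors["search"].run(student.name)  # student runs search with {'area': 'north'}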

    I will give two citations from AI Wisdom for who has been using subsumption:

    Loew, H., and Hinkle, C. "Enabling Actions of Opportunity with a Light-Weight Subsumption Architecture." AI Game Programming Wisdom 4, Charles River Media, 2008, ch. 5.5, pp. 493–497.

    Yiskis, E. "A Subsumption Architecture for Character-Based Games." AI Game Programming Wisdom 2, Charles River Media, 2004, ch. 6.1, pp. 329–337.

    They each use subsumption very differently, and the Yiskis approach is closer to what I do. In the end, because our implementation is hierarchical and behavior-based, the agent representations look very similar to those of behavior trees, but they are executed differently.

    Anyway, I’m still working on the idea, and by the time AIIDE rolls around, I’ll have more interesting demos, which hopefully y’all will find compelling.

  2. Dave Mark says:

    Thanks for the clarifications.

    As for other games that may have used subsumption, Igor Borovikov wrote an article in AI Wisdom 3 that was based on subsumption. Also, Kevin Dill and Denis Papp spoke at AIIDE 2005 on a goal-based architecture, similar in spirit, that they used in Kohan 2.

    http://www.aaai.org/Papers/AIIDE/2005/AIIDE05-006.pdf

  3. Fritz Heckel says:

    Thanks for the pointers! I had missed the Borovikov article, but I’ve got it bookmarked now.

  4. Victor Lajide says:

    It's interesting that you mentioned sports games, because one area that still plagues these games is dynamic obstacle avoidance, which you can't possibly plan for (in a continuous, dynamic game space ruled by schizophrenic gamers). Even in American football, downfield blocking for the ball carrier is far from consistent.

    Apart from that, it could work well for social interactions within games.

  5. Sander van Rossen says:

    It looks interesting, but the first problem that popped into my head was: how do you handle situations where multiple characters are doing different tasks?
    For instance, how do you decide which task is more important? Will a task of one character be overwritten, or remembered? If you have 3 characters and 2 tasks, will you get oscillation of tasks between the characters?
    I suppose some sort of priority mechanism could help here.
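
    A toy version of what I mean (purely hypothetical): tasks carry priorities, and an agent that has committed to a task is never reassigned, which would rule out the ping-ponging:

        # Hypothetical priority mechanism: highest-priority tasks grab free
        # agents first; committed agents stay put, so tasks cannot oscillate.
        tasks = [("rescue", 10), ("patrol", 3)]     # (name, priority): 2 tasks
        agents = {"a": None, "b": None, "c": None}  # 3 characters, none assigned

        for task, _prio in sorted(tasks, key=lambda t: -t[1]):
            free = next((n for n, t in agents.items() if t is None), None)
            if free is not None:
                agents[free] = task

        print(agents)  # {'a': 'rescue', 'b': 'patrol', 'c': None}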
