IA on AI


Posts Tagged ‘adaptive AI’

Sun Tzu as a Design Feature?

Saturday, June 5th, 2010

The Creative Assembly, creator of the Total War franchise, has announced the development of Shogun 2, the latest in its line of acclaimed RTS games. While the franchise has a 10-year history and is fairly well known for its AI, this blurb from their web site has spread through the web like an overturned ink well:

Featuring a brand new AI system inspired by the scriptures that influenced Japanese warfare, the millennia old Chinese “Art of War”, the Creative Assembly brings the wisdom of Master Sun Tsu to Shogun 2: Total War. Analysing this ancient text enabled the Creative Assembly to implement easy to understand yet deep strategical gameplay.

Sun Tzu‘s “The Art of War” has been a staple reference tome since he penned it (or brushed it… or whatever) in the 6th century B.C. It’s hard to find many legends that have made it for over 20 centuries. Its applications have been adapted in various ways to go beyond war to arenas such as business and politics. Suffice to say that “The Art of War” lives on as “things that just make sense”.

The problem I have here is that this seems to be more of a marketing gimmick than anything. After all, most of what Sun Tzu wrote should, in various forms, already be in game AI anyway.  To say Sun Tzu’s ideas are unique to him and would never have been considered without his wisdom is similar to saying that no one thought that killing was a bad idea until Moses wandered down the hill with “Thou Shalt Not Kill” on a big ol’ rock. No one stood around saying, “Gee… ya think?” Likewise, Sun Tzu’s advice about “knowing your enemy” is hardly an earth-shattering revelation.

Certainly, there is plenty of game AI out there that could have benefited from a quick read of a summary of Art of War. Things like “staying in cover and waiting for the enemy to attack you” come to mind. Of course, in the game world, we call that “camping” (as an individual) or “turtling” (as a group). I can imagine a spirited argument as to whether a camping/turtling AI is necessarily What Our Players Want™, however. It certainly beats the old “Doom model” of “walk straight towards the enemy”.

And what about the Sun Tzu concept of letting your two enemies beat the snot out of each other before you jump in? (I believe there are translations that yielded “dog shit” rather than “snot” but the meaning is still clear.) If you are in an RTS and one enemy just sits and waits for the other one to whack you around a little bit, it’s going to look broken. On the other hand, I admit to doing that in free-for-all Starcraft matches… because it is a brutal tactic!

The problem I have with their claim, however, is that there are many concepts in the Art of War that we already do use in game AI. By looking at Sun Tzu’s chapter headings (or whatever he called them), we can see some of his general ideas:

For ease of reference, I pillage the following list from Wikipedia:

  1. Laying Plans/The Calculations
  2. Waging War/The Challenge
  3. Attack by Stratagem/The Plan of Attack
  4. Tactical Dispositions/Positioning
  5. Energy/Directing
  6. Weak Points & Strong/Illusion and Reality
  7. Maneuvering/Engaging The Force
  8. Variation in Tactics/The Nine Variations
  9. The Army on the March/Moving The Force
  10. The Attack by Fire/Fiery Attack
  11. The Use of Spies/The Use of Intelligence

Going into more detail on each of them, we can find many analogues to existing AI practices:

Laying Plans/The Calculations explores the five fundamental factors (and seven elements) that define a successful outcome (the Way, seasons, terrain, leadership, and management). By thinking, assessing and comparing these points you can calculate a victory, deviation from them will ensure failure. Remember that war is a very grave matter of state.

It almost seems too easy to cite planning techniques here because “plans” is in the title. I’ll go a step further, then, and point out that by collecting information and assessing the relative merits of each option, you can determine potential outcomes or select correct courses of action. This is a common technique in AI decision-making calculations. Even the lowly min/max procedure is, in essence, simply comparing various potential paths through the state space.
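That path comparison can be sketched in a few lines. The game here is a hypothetical toy (each move adds 1, 2, or 3 to a running total, and the total itself is the leaf evaluation), not any shipping title’s evaluation function; the point is only the mechanical comparison of paths through the state space:

```python
def minimax(state, depth, maximizing):
    # Toy game: each move adds 1, 2, or 3 to a running total.
    # The maximizer wants a high total; the minimizer wants it low.
    if depth == 0:
        return state  # leaf evaluation: the total itself
    children = [state + move for move in (1, 2, 3)]
    if maximizing:
        return max(minimax(child, depth - 1, False) for child in children)
    return min(minimax(child, depth - 1, True) for child in children)

# The maximizer takes +3; the minimizer answers with +1, netting 4.
print(minimax(0, 2, True))
```

Real game trees just swap in a richer state, move generator, and evaluation function; the comparison skeleton stays the same.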

Waging War/The Challenge explains how to understand the economy of war and how success requires making the winning play, which in turn, requires limiting the cost of competition and conflict.

This one speaks even more to the min/max approach. The phrase “limiting the cost of competition and conflict” expresses the inherent economic calculations that min/max is based on. That is, I need to get the most bang for my buck.

Attack by Stratagem/The Plan of Attack defines the source of strength as unity, not size, and the five ingredients that you need to succeed in any war. In order of importance attack: Strategy, Alliances, Army, lastly Cities.

Any coordinating aspect of the AI forces falls under this category. For example, the hierarchical structure of units into squads and ultimately armies is part of that “unity” aspect. Very few RTS games send units into battle as soon as they are created. They also don’t go off and do their own thing. If you have 100 units going to 100 places, you aren’t going to have the strength of 100 units working as a collection. This has been a staple of RTS games since their inception.
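A minimal sketch of that “strength through unity” idea, with made-up unit names and an arbitrary strength threshold, is to rally newly produced units and only commit them once the group is worth fielding:

```python
class Squad:
    """Rally newly produced units and only commit them once the squad
    has enough strength to fight as a collection, not a trickle."""
    def __init__(self, min_strength):
        self.units = []
        self.min_strength = min_strength

    def add(self, unit):
        self.units.append(unit)

    def ready_to_attack(self):
        return len(self.units) >= self.min_strength

squad = Squad(min_strength=3)
squad.add("spearman_1")
squad.add("spearman_2")
print(squad.ready_to_attack())  # still rallying
squad.add("archer_1")
print(squad.ready_to_attack())  # commit the squad together
```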

Tactical Dispositions/Positioning explains the importance of defending existing positions until you can advance them and how you must recognize opportunities, not try to create them.

Even simply including cover points in a shooter game can be thought of as “defending existing positions”. More importantly, individual or squad tactics that do leapfrogging, cover-to-cover movement have been addressed in various ways for a number of years. We see this not only in FPS games (e.g. F.E.A.R.), but also in some of the work that Chris Jurney did originally in Company of Heroes. Simply telling a squad to advance to a point didn’t mean they would continue on mindless of their peril. Even while not under fire, they would do a general cover-to-cover movement. When engaged in combat, however, there was a very obvious and concerted effort to move up only when the opportunity presented itself.

This point can be worked in reverse as well. The enemies in Halo 3, as explained by Damián Isla in his various lectures on the subject, defend a point until they can no longer reasonably do so and then fall back to the next defensible point. This is a similar concept to the “advance” model above.

Suffice to say, whether it be advancing opportunistically or retreating prudently, this is something that game AI is already doing.
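The hold-until-untenable behavior described above can be sketched with a hypothetical threat score per position; the position names, scores, and threshold below are invented for illustration, not taken from Halo 3 or anything else:

```python
def choose_position(current, fallbacks, threat_at, hold_threshold=0.6):
    """Hold the current point while the threat there is reasonable;
    otherwise retreat to the nearest acceptable fallback."""
    if threat_at(current) < hold_threshold:
        return current
    for point in fallbacks:  # assumed ordered nearest-first
        if threat_at(point) < hold_threshold:
            return point
    return fallbacks[-1]  # nowhere is safe; retreat as far as possible

threat = {"gate": 0.9, "courtyard": 0.7, "keep": 0.2}
# The gate is overrun and the courtyard is contested, so fall back to the keep.
print(choose_position("gate", ["courtyard", "keep"], threat.get))
```

Run in reverse (advance to the nearest position whose threat has dropped), the same loop gives you the opportunistic advance.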

Energy/Directing explains the use of creativity and timing in building your momentum.

This one is a little more vague simply because of the brevity of the summary on Wikipedia. However, we are all well aware of how some games have diverged from the simple and stale “aggro” models that were the norm 10-15 years ago.

Weak Points & Strong/Illusion and Reality explains how your opportunities come from the openings in the environment caused by the relative weakness of your enemy in a given area.

Identifying the disposition of the enemy screams of influence mapping—something that we have been using in RTS games for quite some time. Even some FPS and RPG titles have begun using it. Influence maps have been around for a long time and their construction and usage are well documented in books and papers. Not only do they use the disposition of forces as suggested above, but many of them have been constructed to incorporate environmental features as Mr. Tzu (Mr. Sun?) entreats us to do.
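A bare-bones influence map along those lines sums each unit’s strength into a grid with distance falloff: friendly units contribute positive values, enemies negative, so the sign of a cell says who controls it and weakly held enemy cells mark the “openings”. This is a simplified sketch (Manhattan distance, linear falloff), not any particular engine’s implementation:

```python
def influence_map(width, height, units):
    """units: (x, y, strength) triples; strength spreads over the grid
    with a simple linear falloff by Manhattan distance."""
    grid = [[0.0] * width for _ in range(height)]
    for ux, uy, strength in units:
        for y in range(height):
            for x in range(width):
                distance = abs(x - ux) + abs(y - uy)
                grid[y][x] += strength / (1 + distance)
    return grid

# One friendly army at (0, 0), one enemy army at (4, 4).
grid = influence_map(5, 5, [(0, 0, 4.0), (4, 4, -4.0)])
print(grid[0][0] > 0)  # friendly-controlled corner
print(grid[4][4] < 0)  # enemy-controlled corner
```

Environmental features (chokepoints, cover, resources) can be folded in the same way, as additional layers summed into the grid.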

Maneuvering/Engaging The Force explains the dangers of direct conflict and how to win those confrontations when they are forced upon you.

Again, this one is a bit vague. Not sure where to go there.

Variation in Tactics/The Nine Variations focuses on the need for flexibility in your responses. It explains how to respond to shifting circumstances successfully.

This is an issue that game AI has not dealt with well in the past. If you managed to disrupt a build order for an RTS opponent, for example, it might get confused. Also, AI was not always terribly adaptive to changing circumstances. To put it in simple rock-paper-scissors terms, if you kept playing rock over and over, the AI wouldn’t catch on and play paper exclusively. In fact, it might still occasionally play scissors despite the guaranteed loss to your rock.

Lately, however, game AI has been far more adaptive to situations. The use of planners, behavior trees, and robust rule-based systems, for example, has allowed for far more flexibility than the more brittle FSMs allowed for. It is much harder to paint an AI into a corner from which it doesn’t know how to extricate itself. (Often, with the FSM architecture, the AI wouldn’t even realize it was painted into a corner at all and continue on blissfully unaware.)
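The rock-paper-scissors case takes only a frequency count to sketch. A real opponent model would be far more sophisticated than this, but even this much catches the player who keeps playing rock over and over:

```python
from collections import Counter

# move -> the move that beats it
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class AdaptiveRPS:
    """Track the player's move frequencies and counter the most common one."""
    def __init__(self):
        self.history = Counter()

    def observe(self, player_move):
        self.history[player_move] += 1

    def respond(self):
        if not self.history:
            return "rock"  # arbitrary opening move
        favorite = self.history.most_common(1)[0][0]
        return BEATS[favorite]

ai = AdaptiveRPS()
for _ in range(5):
    ai.observe("rock")
print(ai.respond())  # the AI catches on and plays paper
```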

The Army on the March/Moving The Force describes the different situations inf them.

[editorial comment on the above bullet point: WTF?]

I’m not sure to what the above refers, but there has been a long history of movement-based algorithms. Whether it be solo pathfinding, group movement, group formations, or local steering rules, this is an area that is constantly being polished.

The Attack by Fire/Fiery Attack explains the use of weapons generally and the use of the environment as a weapon specifically. It examines the five targets for attack, the five types of environmental attack, and the appropriate responses to such attack.

For all intents and purposes, fire was the only “special attack” that they had in 600 BC. It was their BFG, I suppose. Extrapolated out, this is merely a way of describing when and how to go beyond the typical melee and missile attacks. While not perfect, actions like spell-casting decisions in an RPG are not terribly complicated to make. Also, by tagging environmental objects, we can allow the AI to reason about their uses. One excellent example is how the agents in F.E.A.R. would toss over a couch to create a cover point. That’s using the environment to your advantage through a special (not typical) action.

The Use of Spies/The Use of Intelligence focuses on the importance of developing good information sources, specifically the five types of sources and how to manage them.

The interesting point here is that, given that our AI already has the game world at its e-fingertips, we haven’t had to accurately simulate the gathering of intelligence information. That has changed in recent years as the technology has allowed us to burn more resources on the problem. We now regularly simulate the AI piercing the Fog of War through scouts, etc. It is only a matter of time and tech before we get even more detailed in this area. Additionally, we will soon be able to model the AI’s belief of what we, the player, know of its disposition. This allows for intentional misdirection and subterfuge on the part of the AI. Now that will be fun!
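A minimal sketch of that simulated intelligence-gathering: the AI keeps a “last seen” record written only when its own units observe a tile, so its beliefs go stale when the world changes unobserved. The tile coordinates and contents here are hypothetical:

```python
class ScoutedKnowledge:
    """The AI reasons only from what its own units have actually seen,
    instead of reading the true game state directly."""
    def __init__(self):
        self.last_seen = {}  # tile -> contents at last observation

    def observe(self, tile, contents):
        self.last_seen[tile] = contents

    def believed_contents(self, tile):
        return self.last_seen.get(tile, "unknown")

world = {(3, 2): "enemy base"}
ai = ScoutedKnowledge()
ai.observe((3, 2), world[(3, 2)])    # a scout flies over the enemy base
world[(3, 2)] = "empty"              # the base is later torn down...
print(ai.believed_contents((3, 2)))  # ...but the AI's belief is stale
print(ai.believed_contents((0, 0)))  # never scouted: unknown
```

Modeling what the player believes about the AI would be a second instance of the same structure, keyed the other way around.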

Anyway, the point of all of this is that, while claiming to use Sun Tzu’s “Art of War” makes for good “back of the box” reading, much of what he wrote about is what we as game AI programmers do already. Is there merit in reading his work to garner a new appreciation of how to think? Sure. Is it the miraculous godsend that it seems to be? Not likely.

In the meantime, marketing fluff aside, I look forward to seeing how it all plays out (so to speak) in the latest Total War installment. (Looks like I might get a peek at E3 next week anyway.)

Rubber-banding as a Design Requirement

Monday, May 31st, 2010

I’ve written about rubber-banding before over on Post-Play’em where I talked about my observations of how it is used in Mario Kart: Double Dash. Rubber-banding is hardly new. It is often a subtle mechanism designed to keep games interesting and competitive. It is especially prevalent in racing games.

For those that aren’t familiar, the concept is simple. If you are doing well, the competition starts doing well. If you are sucking badly, so do they. That way, you always have a race on your hands—regardless of whether you’re in first, the middle of the pack, or in back.

Sometimes, it can be more apparent than others. If competitors are teleporting to keep up, that’s a bit egregious. If you come to a dead stop in last place, and so do the other racers, that’s way too obvious. The interesting thing is that sometimes it can be about more than just fairness and running a good race. Sometimes it can be used because it is inherently tied to a design mechanic other than racing.
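The basic mechanism can be sketched as a speed multiplier driven by the gap to the player. The strength and cap values below are arbitrary illustrations; the cap is what keeps the adjustment from sliding into the teleporting-to-keep-up variety:

```python
def rubber_band_speed(base_speed, gap_behind_player, strength=0.05, cap=1.3):
    """Scale an AI racer's speed by its gap to the player: a trailing AI
    (positive gap) speeds up, a leading AI (negative gap) slows down,
    clamped so the adjustment never becomes blatant."""
    factor = 1.0 + strength * gap_behind_player
    factor = max(1.0 / cap, min(cap, factor))
    return base_speed * factor

print(rubber_band_speed(100.0, 4.0))    # trailing: 20% boost
print(rubber_band_speed(100.0, -10.0))  # leading: clamped slowdown
```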

I saw this review on Thunderbolt of the new game Split/Second where it explains this phenomenon. The game is, on the surface, a racing game. However, in a mechanism borrowed from the aforementioned Mario Kart, the gameplay heavily revolves around “power plays”. These involve triggering things like exploding cars alongside the road, crumbling buildings, and helicopters dispensing explosives. You can even trigger massive changes for your foes like changing the route entirely. Needless to say, that has the effect of annoying the piss outta your opponents. The problem comes when those opponents are not human ones, but AI.

As the Thunderbolt reviewer puts it,

Split/Secondisn’t too difficult until some of the latter stages of the career, but unfair AI is a common problem throughout. It’s testament to the game’s focus on power plays that this unfair AI often occurs, since being in the lead isn’t a particularly fun experience when you can’t trigger the game’s main selling point. As a result, you’ll often find the following pack extremely close behind, often catching up six second gaps within two. Even when you know your car is much faster and you’re driving the race of your life, the AI finds a way to pass you with relative ease, performing impossibly good drifts and respawning from wrecks in the blink of an eye. Dropping from first place to fifth is such a common occurrence it would actually be quite comical if it weren’t for the frustration involved. That’s not to say Split/Second is a hard game – it’s usually pretty easy to wreck opponents with a decent power play, and you’ll normally be given ample opportunities to pass them – but the rubber band AI does cause some unwieldy races where the AI will pull ahead rather than keeping at a more realistic, surmountable distance.

As you can see above (emphasis mine), the rubber-banding is about more than keeping the pack close behind you if you are doing well. In order for the power plays to be relevant, you can’t be in the lead. You need to be behind them to use them. This is analogous to the “blue shell” in Mario Kart that would streak from wherever you were all the way up to the front of the pack and tumble the first-place kart. You simply can’t use the blue shell if you are in first place. In fact, the game won’t give you one if you are in first.

In Split/Second, the whole point of the game is blowing crap up and screwing with the other drivers. In fact, most of the fun of that is actually seeing it happen. While, in Mario Kart, you can use red or green shells, bananas, and fake blocks to mess with people behind you (and this is a perfectly normal tactic), the result of that isn’t the visually stunning and exciting experience that the power play in Split/Second is. Therefore, the designers of Split/Second had to make sure that you were able to use the power plays and see them in action.

The difference between these two approaches is subtle. The rubber-banding in Mario Kart makes sure that, if you are in first, you can’t make a mistake without having people pass you. In Split/Second the entire point of the rubber-banding is to make sure you aren’t in first—at least not for very long.

You have to wonder how this mechanism would translate over to a shooter game, however. If rubber-banding in a shooter is meant to make sure you are challenged to a good fight… but one in which you eventually triumph, a change to one where the AI is supposed to ensure that you don’t triumph would be a bit awkward. In fact, that would negate the game’s purpose of having you see the rest of the content down the road (so to speak). AI in shooters is really cut from the same template as the rubber-banding in Mario Kart, then: “Do well, and then lose.” It was just interesting to see a different take on this mechanism that generates a different outcome for the perfectly viable reason of making the game better.

Fritz Heckel’s Reactive Teaming

Tuesday, May 25th, 2010

Fritz Heckel, a PhD student in the Games + Learning Group at UNC Charlotte, posted a video (below) on the research he has been doing under the supervision of G. Michael Youngblood. He has been working on using subsumption architectures to create coordination among multiple game agents.

When the video first started, I was a bit confused in that he was simply explaining an FSM. However, when the first character shared a state with the second one, I was a little more interested. Still, this isn’t necessarily the highlight of the video. As more characters were added, they split the goal of looking for a single item amongst themselves by partitioning the search space.

This behavior certainly could be used in games… for example, with guards searching for the player. However, this is simply solved using other architectures. Even something as simple as influence mapping could handle this. In fact, Damián Isla’s occupancy maps could be tweaked accordingly to allow for multiple agents in a very life-like way. I don’t know what Fritz is using under the hood, but I have to wonder if it isn’t more complicated.

Obviously, his searching example was only just a simple one. He wasn’t setting out to design something that allowed people to share a searching goal, per se. He was creating an architecture for cooperation. This, too, has been done in a variety of ways. Notably, Jeff Orkin’s GOAP architecture from F.E.A.R. did a lot of squad coordination that was very robust. Many sports simulations do cooperation — but that tends to be more playbook-driven. Fritz seems to be doing it on the fly without any sort of pre-conceived plan or even pre-known methods by the eventual participants.

In a way, it seems that the goal itself is somewhat viral from one agent to the next. That is, one agent in effect explains what it is that he needs the others to do and then parcels it out accordingly. From a game standpoint, it seems that this is an unnecessary complication. Since most of the game agents would be built on the same codebase, they would already have the knowledge of how to do a task. At this point, it would simply be a matter of having one agent tell the other “I need this done,” so that the appropriate behavior gets switched on. And now we’re back to Orkin’s cooperative GOAP system.

On the whole, a subsumption architecture is an odd choice. Alex Champandard of AIGameDev pointed out via Twitter:

@fwph Who uses subsumption for games these days though? Did anyone use it in the past for that matter?

That’s an interesting point. I have to wonder if, as is the case at times with academic research, it is not a case of picking a tool first and then seeing if you can invent a problem to solve with it. To me, a subsumption architecture seems like it is simply the layered approach of a HFSM married with the modularity of a planner. In fact, there has been a lot of buzz in recent years about hierarchical planning anyway. What are the differences… or the similarities, for that matter?

Regardless, it is an interesting, if short, demo. If this is what he submitted to present at AIIDE this fall, I will be interested in seeing more of it.

This is why we improve AI…

Sunday, February 22nd, 2009

Happened to see a link to this blog post: That’d be nice; Better A.I.

The author is not alone… many players want what he lists here. And they are getting rather vocal about it. I’m just not so sure that the game companies and the publishers “get it” yet.

I took a moment to assure him that we, as AI programmers, hear his plaintive cry. This, my colleagues, is why we formed the AI Game Programmers Guild and why we are holding the AI Summit at GDC.

Careful what you wish for…

Sunday, August 10th, 2008

I happened across this blog review of the game “Days of Ruin” for Nintendo DS the other day. I noticed it because my Google alert for “GAME AI” picked up the following sentence.

The game’s AI is sneaky and -well- intelligent too.

Well, that was certainly a promising hook, so I investigated further. The paragraph continued:

It responds to the type of units you create and your strategy. If you make an army of wartanks, it’ll create an army of anti-tanks to counter your offense. If you make bombers to get rid of his anti-tanks, it’ll create fighters to take down your bombers. If you happen to have an artillery bombardment as a defensive strategy, it will try to keep its ground units away from your range and will cautiously try to take it down when you drop your guard.

My thought was… uh, OK. Hardly sneaky or intelligent. Actually, that type of balancing act is rather simple to accomplish. And then I read the next paragraph…

While the game is fun, it also comes with a bit of frustration. In addition to a really smart AI, your computer opponent also has an unfair advantage of extra units, money, and bigger guns. Making the game, in some chapters, nigh unbeatable until you find the perfect strategy. I swear I was stuck in 2, probably 3, chapters for weeks. And without an option to change the difficulty of the game, I almost gave up on it after a lot of my strategies have failed.

This is the price you pay for that simplicity. While it is easy to do the reaction-based AI that was alluded to at first, it is a little more important to be able to tune it so that it isn’t an overwhelming challenge all the time.
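That reaction-based balancing really is simple to sketch; the counter table below uses hypothetical unit names (not taken from Days of Ruin), and the whole behavior is one frequency count plus one lookup. Difficulty tuning then becomes a matter of how often, or how imperfectly, the AI consults the table:

```python
from collections import Counter

# Hypothetical counter table: each unit type -> the unit that beats it.
COUNTERS = {"war_tank": "anti_tank", "anti_tank": "bomber", "bomber": "fighter"}

def choose_production(observed_enemy_units):
    """Build the counter to whatever the enemy fields the most of."""
    if not observed_enemy_units:
        return "war_tank"  # default opener before any scouting
    favorite = Counter(observed_enemy_units).most_common(1)[0][0]
    return COUNTERS.get(favorite, "war_tank")

# The enemy is mostly tanks, so start producing anti-tank units.
print(choose_production(["war_tank", "war_tank", "bomber"]))
```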

AI researchers think Rascals can pass Turing test

Sunday, March 16th, 2008

According to this article at EETimes.com, there is an AI research group at Rensselaer Polytechnic Institute that believes that they are in the process of creating an AI entity that will finally be able to pass the legendary Turing Test. Their target date is this fall. They need to use the world’s fastest supercomputer, IBM’s Blue Gene in order to get the real-time results necessary, however. Interestingly, they are partnering with a multimedia group that is designing a holodeck… yes, as in Star Trek.

“We are building a knowledge base that corresponds to all of the relevant background for our synthetic character–where he went to school, what his family is like, and so on,” said Selmer Bringsjord, head of Rensselaer’s Cognitive Science Department and leader of the research project.

In order to come up with the complete personality and history, they are taking a novel approach. One of Bringsjord’s graduate students is providing his life as the model. They are in the process of putting all that data into their knowledge base. Facts, figures, family trivia and even personal beliefs from the student are what is going to make up the synthetic character.

“This synthetic person based on our mathematical theory will carry on a conversation about himself, including his own mental states and the mental states of others,” said Bringsjord.

However, before you game AI programmers get all excited about this as some sort of potential middleware product…

“Our artificial intelligence algorithm is now making this possible, but we need a supercomputer to get real-time performance.”

It looks like they are doing more than just facts and figures on the project, however. They are going to great lengths to add psychology and even a form of empathy (my word, not theirs).

The key to the realism of RPI’s synthetic characters, according to Bringsjord, is that RPI is modeling the mental states of others–in particular, one’s beliefs about others’ mental states. “Our synthetic characters have correlates of the mental states experienced by all humans,” said Bringsjord. “That’s how we plan to pass this limited version of the Turing test.”

The difference between this and standard, observable facts is “second-order beliefs”. In order to handle those, you have to be able to get outside of your own collection of perceptions, memories, and beliefs and into the mind of others. In a demo that they put together on Second Life (which I will not bother embedding here since it is unexplained and boring), they show that they have been working on 1st order and 2nd order beliefs.

An example: if something changes after a person leaves the room, you observe the change but they don’t. You must know that the absent person has no knowledge of that change even though you do; therefore, their belief is that the world is actually unchanged. You have to be able to look at the world through their eyes… not just in the present tense, but by replaying the recent history and knowing that they would have no knowledge of the change that occurred.
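That room-leaving scenario is essentially the classic Sally-Anne false-belief test, and the bookkeeping for a first-order version can be sketched directly. The burden the article describes is doing this for everything in the environment, for every agent, continuously; the mechanism itself is small:

```python
class BeliefWorld:
    """Track the true state plus each agent's possibly stale copy of it;
    a change only updates the beliefs of agents who witnessed it."""
    def __init__(self, state):
        self.truth = dict(state)
        self.beliefs = {}

    def add_agent(self, name):
        self.beliefs[name] = dict(self.truth)  # starts fully informed

    def change(self, key, value, witnesses):
        self.truth[key] = value
        for agent in witnesses:
            self.beliefs[agent][key] = value

    def believes(self, agent, key):
        return self.beliefs[agent][key]

world = BeliefWorld({"ball": "in the box"})
world.add_agent("sally")
world.add_agent("anne")
# Sally leaves the room; only Anne sees the ball get moved.
world.change("ball", "in the basket", witnesses=["anne"])
print(world.believes("anne", "ball"))   # in the basket
print(world.believes("sally", "ball"))  # in the box (her belief is stale)
```

A second-order belief (what Anne thinks Sally believes) would nest the same structure one level deeper, which is where the memory and compute costs start to explode.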

Hell… as Soren Johnson pointed out in his GDC lecture, we can’t even afford to do individual “fog of war” for 8 or 10 computer-controlled enemies. At least not on a typical PC. Imagine trying to keep all that activity straight in a running buffer of some sort… for everything in the environment. *sigh*

I keep having to go back to what Ray Kurzweil was predicting at his GDC keynote… that there is still an exponential growth in capability happening in technology. Given his figures, putting this sort of depth in a computer game will definitely happen in my lifetime – and perhaps in my career. Now that will be scary.

NFL Tour AI comes up a yard short?

Saturday, February 9th, 2008

I haven’t gone out of my way to look for reviews on NFL Tour. This one, however, popped the alarm on my Google Alerts.

Gaming Today Impressions Of NFL Tour (360)

Here’s the quote that got me:

Unfortunately, the gameplay is just way too simple. The game is so simple to play that a monkey could easily score a touchdown on the lacking AI defense of the computer. If you want to win games in NFL Tour, just snap the ball and pass to anyone. Don’t worry… unless you are terrible, there is no such thing as an incompletion. I put in about 6 hours of gameplay and my QB accuracy is 100%.

I guess that really shows that it takes a team like the Madden/EA crowd 10… 20… 30 years of working on a franchise before they can get AI to where it needs to be (almost?). Sports AI will always be a challenge… and this title seems determined to prove it.

For more scathing comments on NFL Tour, hit these…

I don’t do game reviews… but I think the consensus speaks loads.

David Braben on dynamic stories in games

Thursday, January 10th, 2008

Gamasutra recently posted an interview with David Braben (notably a co-writer of “Elite” from the 80s as well as other games). In it, he discusses his upcoming game “The Outsider” where they are working on expanding the concept of dynamic story generation beyond the “branching storyline” feel of many of today’s games.

A selection from page 1 of 4 of the interview:

Most of [the companies that have moved gameplay forward] are quite subtle. We’ve certainly seen things like Oblivion where you’ve got all the side quests that make the world feel a lot better.

The Darkness touched on that a little bit as well, and quite a few games have elements of what you might call ‘side gameplay’ that help feed into the richness, but they don’t fundamentally alter the story: games like Deus Ex where you had branching story, and there was some slight branching in games like Indigo Prophecy. So, I think all of those things are positive, but a lot of them felt, to me, like they hadn’t done the trick.

The problem is, I felt they didn’t quite deliver on their promise. Their promise is not actually the fact that you can play it through and have a different story, because that sounds fundamentally irrelevant — you play a game through and think, “So what, I could have done things slightly differently”. That’s not the point. I find that once you try playing games in a slightly contrary way, you end up finding a lot of blind alleys, things that you just can’t do, which I think is tragic. If you offer that promise, you’ve got to deliver on it.

So it’s not so much the fact of the story being able to go lots of different ways. It’s the fact that you can try a lot of different things and you’ll find a way through. It may not be what you anticipated, but there is a way through. I think it’s that sort of thing — being able to experiment with the world in a fun way.

I would agree that there are a lot of things that could be done to move away from the linearity of gameplay in games. Certain titles that offer sidequests give the appearance of this as Braben mentions.

I played through most of Neverwinter Nights 1 and 2 – and did my utmost to complete all the side-quests. But I was well aware of the fact that a designer had dropped these quests into the game all over. They were smaller carrots than the main theme of the game, but they were dangling veggies nonetheless. I was also well aware of when I had completed all of them and had to get back about the business of the main plotline. Sure, I could wander about the cities and wilderness aimlessly like I was out for a stroll, but no one would have anything to do with me unless there was a quest attached to them. So what was the point? At that stage, I was merely procrastinating with what I was “supposed to be doing” as conceived and presented by the design team.

I think that about the closest we have to open game play these days is RTS and TBS games. Civ 4 is my latest obsession… er, research project. All it does is give me the rules of the world and a variety of potential end goals (note: not just ONE end goal). After that, it turns me loose to do whatever I want. There is no string I have to follow through the maze.

The Sims, Sim City, and other “god games” are similar. “Here’s your sandbox – go make something.” But how does this get mapped successfully over to the RPG – or even FPS – genre? Heck, it took years for the MMO world to get over the chorus of “but what am I supposed to do? What’s the story?” The meek answer from the industry was “uh… make your own story…?”

Part of the process will have to be making gamers comfortable with the concept. There are many people who want to be told exactly what to do next. They don’t want to think – they want to act. Until that mentality is softened up a bit, any game that lacks that linear component runs the risk of being critically panned by the media and gamers alike.

It looks like Braben addresses this somewhat in “The Outsider”.

The actual problem is, when you start making a story very flexible, you’re putting your hand in a mincing machine from a design point of view.

But also, you have to cater for a lot of different types of play style. There are still the sort of people who want a brain-off experience, and I think that’s a good thing — I don’t think that’s a criticism. You don’t want to have to think, “Oh, what am I supposed to do now,” because that’s the flipside of this, the unspoken problem.

[Objectives] should still be really obvious, but there’s something nice about when you go through doing what you’re told, and you think, “Wait a second, this isn’t quite right!” And it’s that same element with Outsider where you’ve got corruption, that it’s really quite interesting. Now, you can play through the [straightforward] route, and you end up with quite an interesting ending, but you can also break off at any second, and start questioning why things are happening the way they’re happening.

So really I like where he says he’s going with the game. It will be interesting to see how the implementation plays out (so to speak).

Soccer game with adaptive AI?

Friday, November 30th, 2007

According to reviews of “Pro Evolution Soccer 2008” from Konami, there is now an adaptive AI component. This system, called “TeamVision”, apparently learns the way you play. If you always use the same tactics, it will start taking advantage of them – both on offense and defense. Every review site I have looked at has a similar blurb to that effect, but I have yet to see any technical description of how they are accomplishing it. More research is necessary!

If you have run across information on this game and specifically the “TeamVision” adaptive AI system from Konami, please let me know so I can follow up.

Content ©2002-2010 by Intrinsic Algorithm L.L.C.
