I’ve written about rubber-banding before over on Post-Play’em, where I talked about my observations of how it is used in Mario Kart: Double Dash. Rubber-banding is hardly new. It is often a subtle mechanism designed to keep games interesting and competitive. It is especially prevalent in racing games.
For those who aren’t familiar, the concept is simple. If you are doing well, the competition starts doing well. If you are sucking badly, so do they. That way, you always have a race on your hands, regardless of whether you’re in first, the middle of the pack, or the back.
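To make the idea concrete, here is a toy sketch of a rubber-band controller (my own illustration, not any shipping game’s code): the AI racer’s speed is scaled by its signed gap to the player, with the adjustment clamped so it never becomes absurd.

```python
def rubber_band_speed(base_speed, gap_to_player, strength=0.05, cap=0.3):
    """Scale an AI racer's top speed by its signed gap to the player.

    gap_to_player > 0 means the AI is behind the player (gets a boost);
    gap_to_player < 0 means the AI is ahead (gets slowed down).
    The adjustment is clamped so it never exceeds +/- cap (30% here).
    """
    adjustment = max(-cap, min(cap, strength * gap_to_player))
    return base_speed * (1.0 + adjustment)

# An AI ten units behind gets the full clamped boost...
print(rubber_band_speed(100.0, 10.0))   # 130.0 (clamped at +30%)
# ...while one slightly ahead of the player is reined in.
print(rubber_band_speed(100.0, -2.0))   # 90.0
```

The `cap` is what separates subtle rubber-banding from the egregious teleporting kind: without it, a racer far enough behind would simply warp up the standings.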
Sometimes it can be more apparent than others. If competitors are teleporting to keep up, that’s a bit egregious. If you come to a dead stop in last place and so do the other racers, that’s way too obvious. The interesting thing is that sometimes it can be about more than fairness and running a good race. Sometimes it is used because it is inherently tied to a design mechanic other than racing.
I saw this review on Thunderbolt of the new game Split/Second, where it explains this phenomenon. The game is, on the surface, a racing game. However, in a mechanism borrowed from the aforementioned Mario Kart, the gameplay heavily revolves around “power plays”. These involve triggering things like exploding cars alongside the road, crumbling buildings, and helicopters dispensing explosives. You can even trigger massive changes for your foes, like changing the route entirely. Needless to say, that has the effect of annoying the piss outta your opponents. The problem comes when those opponents are not human ones, but AI.
As the Thunderbolt reviewer puts it,
Split/Second isn’t too difficult until some of the latter stages of the career, but unfair AI is a common problem throughout. It’s testament to the game’s focus on power plays that this unfair AI often occurs, since being in the lead isn’t a particularly fun experience when you can’t trigger the game’s main selling point. As a result, you’ll often find the following pack extremely close behind, often catching up six second gaps within two. Even when you know your car is much faster and you’re driving the race of your life, the AI finds a way to pass you with relative ease, performing impossibly good drifts and respawning from wrecks in the blink of an eye. Dropping from first place to fifth is such a common occurrence it would actually be quite comical if it weren’t for the frustration involved. That’s not to say Split/Second is a hard game – it’s usually pretty easy to wreck opponents with a decent power play, and you’ll normally be given ample opportunities to pass them – but the rubber band AI does cause some unwieldy races where the AI will pull ahead rather than keeping at a more realistic, surmountable distance.
As you can see above (emphasis mine), the rubber-banding is about more than keeping the pack close behind you if you are doing well. In order for the power plays to be relevant, you can’t be in the lead. You need to be behind your opponents to use them. This is analogous to the “blue shell” in Mario Kart that would streak from wherever you were all the way up to the front of the pack and tumble the first-place kart. You simply can’t use the blue shell if you are in first place. In fact, the game won’t give you one if you are in first.
In Split/Second, the whole point of the game is blowing crap up and screwing with the other drivers. In fact, most of the fun of that is actually seeing it happen. While, in Mario Kart, you can use red or green shells, bananas, and fake blocks to mess with people behind you (and this is a perfectly normal tactic), the result of that isn’t the visually stunning and exciting experience that the power play in Split/Second is. Therefore, the designers of Split/Second had to make sure that you were able to use the power plays and see them in action.
The difference between these two approaches is subtle. The rubber-banding in Mario Kart makes sure that, if you are in first, you can’t make a mistake without having people pass you. In Split/Second, the entire point of the rubber-banding is to make sure you aren’t in first, at least not for very long.
You have to wonder how this mechanism would translate over to a shooter game, however. If rubber-banding in a shooter is meant to challenge you to a good fight, but one in which you eventually triumph, a change to one where the AI is supposed to ensure that you don’t triumph would be a bit awkward. In fact, that would negate the game’s purpose of having you see the rest of the content down the road (so to speak). AI in shooters really is cut from the same template as the rubber-banding in Mario Kart, then: “Do well, and then lose.” It was just interesting to see a different take on this mechanism that generates a different outcome for the perfectly viable reason of making the game better.
Wow. A dude by the name of Shamus Young wrote a great column on the AI of Starcraft on his blog, Twenty-Sided. In fact, it would be worthy of a Post-Play’em entry, and I almost toyed with writing this post over there. Anyway, he wrote a script that allows 7 AI players to battle it out, with the 8th slot left for the player, who controls no units but can observe the entire map. Often, he would let the game run overnight and see what had transpired come morning. His observations were rather interesting and served to do two things:
Point out the various strengths and weaknesses of the different races
Point out that the AI is largely a rule-based engine with not a lot of forethought, planning or strategic processing… and definitely no learning capabilities.
While, as always, I encourage you to read his article at his blog, I do want to break down a couple of his observations here.
At first I just set the difficulty to “normal”, but I found that the computer players were far too likely to consume all the resources on the map, go broke, and then just sit there. I’d start a game before going to bed, and when I came back in the morning I’d find the battle was down to three sides who couldn’t make any fighting units. I changed the difficulty to “Insane”, which auto-cheats by giving itself 2,000 minerals and gas anytime it goes broke, meaning the thing is always rolling in resources.
It is my opinion that this is a major failing of the game. He points out that on the “Insane” difficulty level, the AI simply gives itself more resources if it runs out. Because of that, there is no “end game” scenario that the computer has to plan for. There is no timer ticking down for it. However, there certainly is for the player. You need to make your move before your resources go dry. Meanwhile, the computer is still playing “as if” the resources are important. Shamus goes on…
I dislike this auto-cheat for a number of reasons, mostly because it negates a lot of the strategy in the game. None of the players can go broke, but the AI still plays as though resources were important. The only true way to knock a player out of the game is to annihilate their core base with all the critical buildings in it. Expansion bases are (mostly) worthless in a game like this. Yet the computer still builds and defends expansions (because that’s what it’s programmed to do) and still wastes time attacking enemy expansion bases. This introduces a bit of luck into the game: Who wins depends a lot on positioning. The AI tends to attack the nearest base, not the most important one. Sometimes the nearest base is the core base. Sometimes the nearest is an expansion which is pointlessly destroyed and rebuilt over and over again.
Now, that may be by design. I don’t know. From a mentality standpoint, the idea could be not to secure an expansion in order to secure the resources for yourself but rather to keep them from the player(s). That certainly makes sense. I doubt, however, that was the case. It is likely that the 2000 rule was simply a cheat to make things miserable for the player. *shrug* It’s an RTS staple, I suppose.
Moving on, the next section reeks of a scripted build order.
Without the unpredictable actions of a human involved in the match, the AI fights like clockwork. Games have a perceptible rhythm to them. They all build their first couple of buildings within a few seconds of each other. Even these variations are probably the result of minor changes in the layout of each base. If the bases were somehow shaped the same, the players would probably all build in perfect unison.
They build an initial attack force. On insane difficulty this attack force is huge – probably sixteen or so zealots or the given race equivalent. On normal difficulty the force is smaller, but the behavior is the same. They all leave the base at about the same time, and (as far as I can tell) attack a foe totally at random.
That suggests there may be some sort of rule-based system in place. Age of Empires/Kings did this exact same thing. All the AIs would seem like they were working in sync, but that was because they all had the same timer settings (i.e. “If Timer = 1500, launch attack”).
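A rule of that sort is trivial to express. Here is a minimal sketch of a timer-triggered rule engine in that spirit; the class and rule names are my own invention, not any actual scripting syntax from these games:

```python
# Minimal sketch of a timer-triggered rule engine, in the spirit of
# "If Timer = 1500, launch attack". Every AI sharing the same rules
# and the same clock will act in perfect unison.
class RuleEngine:
    def __init__(self):
        self.rules = []   # (trigger_time, action_name) pairs
        self.fired = []   # actions already launched, in order

    def add_rule(self, trigger_time, action_name):
        self.rules.append((trigger_time, action_name))

    def tick(self, game_time):
        for trigger_time, action_name in self.rules:
            if game_time == trigger_time and action_name not in self.fired:
                self.fired.append(action_name)   # "launch" the action once

engine = RuleEngine()
engine.add_rule(1500, "launch_attack")
for t in range(0, 2001, 100):   # the shared game clock ticks along...
    engine.tick(t)
print(engine.fired)             # ['launch_attack']
```

Give seven AIs the same rule table and the same timer, and the eerie synchronized build orders Shamus observed fall right out of it.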
For the most part, I can understand doing it this way. Short of putting some sort of GOAP system in place to sequence individual actions into a cohesive plan, the only thing left would be a random build order… which would start to look silly. One thing that he didn’t mention here is whether the build order was always the same, especially in the middle phases. It would be nice to see that the computer doesn’t play an individual race the same way every time. That would be something that, right out of the gate, would start to mix things up a bit.
The interesting bit is “attack a foe at random”. I would say that is, again, just a simplicity cheat. It would be a little mathematically intensive to process it further. However, there may be something not accounted for here. Since all the AIs are in sync at that point of the game, if there IS a mathematical scoring system, they would all come out the same. Therefore, in case of ties, taking “the top of the list” may tend to look random. The obvious fix that was missing, though, is a proximity factor. This helps in two ways:
It’s quicker and more manageable to wage war with a foe that is nearby
It’s easier to recall forces into a defensive perimeter when they aren’t across the map
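The proximity factor is easy to bolt onto a target-scoring function. This is a toy sketch of the fix I’m describing (base names, positions, and the single-weight scoring are all made up for illustration):

```python
import math

def pick_target(my_base, enemy_bases, proximity_weight=1.0):
    """Score enemy bases so that nearer ones are preferred.

    enemy_bases: dict of name -> (x, y) position. With every AI's scores
    tied, "take the top of the list" merely looks random; a distance term
    breaks the tie in favor of the closest foe.
    """
    def score(pos):
        dist = math.dist(my_base, pos)     # straight-line distance
        return -proximity_weight * dist    # higher score = better target

    return max(enemy_bases, key=lambda name: score(enemy_bases[name]))

bases = {"far_foe": (80, 60), "near_foe": (3, 4), "mid_foe": (30, 40)}
print(pick_target((0, 0), bases))   # near_foe
```

In a real scorer this distance term would be one weight among several (enemy strength, threat, remaining buildings), but even alone it fixes both problems above: shorter marches to war and shorter recalls home.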
Shamus touches on this:
Nobody has any defenses at this point, so when an enemy comes knocking, their survival is a matter of luck: The defender has an attack force elsewhere on the map. Is that force still intact, and can it be recalled before the place is destroyed? Are there enough of them left to save the place?
Another artifact of this “shoot anything that moves” strategy is that the AI is truly engaging all 6 of the opponents at all times rather than focusing on one or two of the most threatening (by strength or proximity).
A player can also do really well if they are attacked by two people at exactly the same time – the attackers end up wiping each other out and leaving the buildings alone.
As he notes, that turns messy and wasteful when attacking an enemy. Sure, when defending your own areas, you will want to repel anyone that comes near. However, if you are across the map attacking a base, don’t waste time with the other forces that happen to be in the area unless they attack you as well. Really, this could be handled rather well with an influence map spreading out from your buildings to determine whether you are in a “defensive” zone or not. If you are, attack anyone that comes in. If not, stick to the plan.
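A sketch of what I mean by that influence map (the grid, strength, and decay values here are invented for illustration, and a real implementation would precompute the whole grid rather than query tiles one at a time):

```python
# Toy influence map: each friendly building radiates influence that
# decays with distance; a tile is "defensive" if the summed influence
# exceeds a threshold. Inside the zone, engage anything; outside it,
# stick to the plan.
def influence_at(tile, buildings, strength=10.0, decay=2.0):
    total = 0.0
    for bx, by in buildings:
        dist = abs(tile[0] - bx) + abs(tile[1] - by)   # Manhattan distance
        total += max(0.0, strength - decay * dist)
    return total

def in_defensive_zone(tile, buildings, threshold=4.0):
    return influence_at(tile, buildings) >= threshold

buildings = [(2, 2), (3, 2)]
print(in_defensive_zone((2, 3), buildings))   # True: right next to base
print(in_defensive_zone((9, 9), buildings))   # False: across the map
```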
The AI seems to wait until this initial attack force is nearly all dead before entering the next phase: It builds defenses and another attack force. Again, this force is sent out. Once gone, the AI begins trying to build its first expansion base.
The rest of the game is a series of escalating attack waves. As they add more buildings onto their main base they work their way up to air units,
More evidence of a strictly scripted/rule-based system. There are obvious triggers here. If this is truly the case, this method could fall on its face in a number of ways.
For example, if the AI’s attack force survives a battle with just enough units to NOT trigger the next build, but the force it defeated has moved on to its next build (because its unit count dropped below the threshold), the AI actually starts to fall behind. An exploit would be to NOT kill the AI’s units off but just run them around the map for a while. You can build at will, but the AI is still waiting for its trigger to happen.
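To illustrate the failure mode, here is a conjectural sketch of how such a phase trigger might be wired (the threshold and force sizes are guesses on my part, not Blizzard’s actual values):

```python
# Conjectural sketch of a "next phase when the attack force is nearly
# dead" trigger, and the exploit it invites: keep a handful of the AI's
# units alive without killing them, and the AI never advances.
class ScriptedAI:
    PHASE_TRIGGER = 3   # advance when surviving attackers drop below this

    def __init__(self, attack_force=16):
        self.attack_force = attack_force
        self.phase = 1

    def update(self):
        if self.attack_force < self.PHASE_TRIGGER:
            self.phase += 1            # build defenses, next wave, etc.
            self.attack_force = 16     # muster a fresh attack force

ai = ScriptedAI()
ai.attack_force = 4    # the player whittles the force down to 4...
ai.update()
print(ai.phase)        # 1 -- the AI is stalled, waiting for its trigger
ai.attack_force = 2    # ...only below the threshold does it move on
ai.update()
print(ai.phase)        # 2
```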
One interesting point that he made was regarding the difference in how the AI uses Protoss Templars vs. a human compared to vs. another AI. Check this out…
When I fight the AI Protoss, it uses the Templar Psi Storm with murderous efficiency. I’ll have a tight group of units moving into its base when a Templar will appear in juuuust the right spot, drop a Psi Storm on my guys, and dart away before I can punish him for it. With a couple of Templar available he’ll play hit-and-run with me all day, and do tremendous, infuriating damage to my forces.
Against other AI, the Templar are bumbling comic goofs. They will drop Psi Storm on single enemy units and hit a bunch of their own guys in the process. They will blunder through fortified territory attempting to reach a unit deep inside, and get cut down before they even get close.
I’ve come to suspect that the AI cheats a bit and detects clusters of units which have been grouped by hotkey by human players. This is very naughty if it’s true. What’s worse is that peeking at how my hotkeys are set up seems to be central to its decision making. Deprived of that bit of cheating info, the Templar is helplessly stupid. Boo. (emphasis mine)
Now THAT is interesting. Rather than writing up some sort of grouping algorithm or processing a very dynamic influence map, use something that the human player is likely going to use anyway. If he doesn’t use hotkeys on his groups, he’s a n00b anyway and could stand the help of an ineffectual AI. It would be interesting to try playing without hotkeying your forces and see if the AI Protoss can use those Templars effectively or not.
His complaints about how poor the AI is at playing Terrans are well-founded:
I’m not sure why the thing is so bad at utilizing Terrans. Aside from the issues I mention above, it just seems less aggressive overall. It also has a penchant for building base defenses (bunkers, towers) in places where a base should go, effectively rendering a viable expansion useless. It will attempt to launch nukes without bothering to cloak the Ghost first. It will risk the painfully expensive Science Vessel in order to irradiate something of very low strategic value. It makes small numbers of all units instead of focusing on a few and using them well.
All of the above could be solved with some mathematical analysis of the units, the situations, the terrain, etc. It’s a shame that wasn’t done.
Now… keep in mind, this game was made 10 years ago. RTS AI was stagnant for the most part during that period. Age of Empires and Age of Kings had horribly inconsistent AI as well. It is my (admittedly imperfect) opinion that RTS AI didn’t turn the corner until Empire Earth (Stainless Steel Studios) in 2001.
Anyway, Shamus’ project is fantastic and his analysis was very pleasant to read. I may have to find my Starcraft CD so I can download the zip file. I would be very interested to watch through these games and see what else I could see.
One of the more intriguing lectures of the 2008 GDC was given by Soren Johnson (MobyGames info), formerly of Firaxis and now with Maxis on the “Spore” team. He was talking about how Civ 4 fits in the spectrum of game AI between two extremes: “good AI” and “fun AI”. Here are some selections from my notes on the lecture. (Forgive the seeming lack of lucidity – I was typing like a madman!)
Also, this is a direct link to Soren’s Slides (.zip) on his blog – which is where the images in this post came from (click to enlarge).
“Good” AI (Play to win)
Beat the player at their own game
Essentially a human substitute

“Fun” AI (Play to lose)
Algorithms are the content
Focus on the player’s experience
For example, Aggro in an MMO is fun AI.
With tanks, healers, and DPS (damage per second), there is a formula for handling it… a trivial AI problem for the players to “solve”. Aggro determines who the AI attacks: let the enemy attack the tank, heal the tank, DPS the mob… Everyone knows how it works… very predictable. It has almost become commoditized with aggro tools. Blizzard isn’t trying to be clever – they like that it is simple.
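The whole mechanic boils down to a threat table. A minimal sketch (the multipliers and numbers are invented; real MMOs tune threat per ability, not per damage point):

```python
# Minimal sketch of MMO-style aggro: the mob attacks whoever tops its
# threat table, and tank abilities generate extra threat per point of
# damage. Multipliers here are invented for illustration.
class Mob:
    def __init__(self):
        self.threat = {}

    def add_threat(self, player, amount, multiplier=1.0):
        self.threat[player] = self.threat.get(player, 0.0) + amount * multiplier

    def current_target(self):
        return max(self.threat, key=self.threat.get) if self.threat else None

mob = Mob()
mob.add_threat("tank", 50, multiplier=3.0)   # taunts/stances boost threat
mob.add_threat("dps", 120)                   # big damage, normal threat
mob.add_threat("healer", 30)                 # healing generates some threat
print(mob.current_target())   # tank (150 threat vs 120 vs 30)
```

Because the rule is this transparent, players can solve it exactly as the lecture describes, and that predictability is the point.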
The question with AI design is, where are you trying to fit? Across the spectrum.
Chess is “good”
Starcraft is more towards “good” (no real diplomacy – the assumption is that they want to kill you)
Civ IV splits the gap. Deep diplomacy but very symmetrical game design.
Heroes of Might and Magic is more towards “fun”. Not as focused on excellence.
Desktop Tower Defense is pure “fun” AI.
Rule sets? The good side tends towards fixed rule sets (e.g. Chess). The fun side tends towards evolving rule sets.
What are the best environments? Good AI tends towards multi-player; fun AI tends towards single-player.
Tactics available to the AI? Good AI will do everything available. Fun AI will do limited tactics.
Measuring performance? Good AI has objective measurements. Fun is subjective – e.g. difficulty over performance.
Turing test? Good AI passes. Fun AI… the question is irrelevant.
The question is: “Play to win or play to lose?”
With Civ IV, the AI does have limited options. There are a lot of options that they do not put on the table for the AI, especially with diplomacy. E.g. fighting a war… as a player, you can ask them for stuff if you promise to quit the war, then attack them all over again. The AI doesn’t do that.
Both fixed and evolving design
Fails the Turing test, but it isn’t irrelevant.
Every player is different… some want things like challenge, sandbox, narrative.
For narrative, you want to aim for personality, for the AIs to maintain memory about you. It’s OK for them to fall for traps. They built that into the leaders in Civ 4.
With regard to the challenge, you “want player to win or at least understand WHY they lost.”
Need for difficulty levels:
Lets sandbox players off easy
Gives challenge players a goal
Increases available tactics
Where does cheating fit? Completely good AI does not cheat. For completely fun AI it’s n/a – there is no concept of cheating (e.g. Desktop Tower Defense). In the middle… yes?
The Noble level in Civ 4 is the “even level” with regard to production modifiers, etc.
But Noble has other cheats… e.g.
Animal/Barbarian combat bonuses
No Unit support
Better Unit upgrades
No War Weariness
The AI needs more help in these areas.
For example, the AI does not leave cities empty like a human would… so unit support costs add up. A human army and an AI army will never be the same size, because the AI has to keep units in its cities… therefore, cut the support costs for the AI, since it needs to field a larger army.
Cheats should NOT be linear… certainly you want to help more or less as you progress through the difficulty levels.
Cheats should never feel unfair! Examples from past Civs that players hated…
Civ 1 & 2: free wonders; gang up on the human (in Civ 1: if year > 1900 and the human is in the lead, declare war on the human)
Civ 3 & 4: human-blind diplomacy (never checks “is human?”); information cheats (they DO have info cheats – most of them come down to limited dev resources… e.g. fog of war is very expensive)
Information cheats can really backfire on you. E.g. the Amphibious Assault Judo exploit using empty port cities in Civ 3 (solved by determining a random time for updating the assault target and ignoring temporary data such as nearby units).
Cheating is relative: the Tech Trading Problem…
AI must trade techs
AI must trade fairly
Human can sell techs cheaply
Only two of these can be true… So… should the AI sell techs cheaply? Solution? Can the AI pursue altruism?
When the AIs were trading often, it made for very even technology levels between all players rather than some groups ahead and others lagging. Everyone had everything.
Solution in Civ IV
AI can undersell by 33%, but…
Tries to make up the difference in gold
Only trades on random turn intervals
Uses the same “Refuses to Trade With” logic as with the human
Arbitrary rules, e.g. “I will never trade Iron Working with you.”
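The 33% undersell with a gold make-up is concrete enough to sketch. The figures come from the talk, but the function shape is my own reconstruction, not Firaxis code (the random-turn gating and the “refuses to trade” check would sit outside this valuation step):

```python
def ai_trade_offer(tech_value):
    """Return (asking_price, gold_topup) for a tech worth tech_value gold.

    Sketch of the Civ IV-style compromise: the AI may undersell a tech
    by up to 33%, but tries to make up the difference in gold, so the
    total value changing hands still matches the tech's worth.
    """
    asking_price = tech_value * 67 // 100      # undersell by up to 33%
    gold_topup = tech_value - asking_price     # make up the rest in gold
    return asking_price, gold_topup

print(ai_trade_offer(300))   # (201, 99)
```

Note that `asking_price + gold_topup` always equals the tech’s full value, which is exactly what keeps the “cheap” trade from being an information-free giveaway.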
What is the point of cheating? Are we trying to…
Write the “best” AI?
Beat the human?
Designing for the AI? Can the AI handle the options in the gameplay? On the other hand, make sure you’re not designing just for the AI; there should be a legitimate reason for each design decision (e.g. closed borders, enforceable peace treaties).
Traditional testing fails
Automated testing helps greatly
Need hard-core fans to analyze
1.5-year closed beta, peaked at 100 users, bi-weekly patches
They used soft-coded AI:
No AI scripts
No enums (no “Temple”)
Less brittle code
Less predictable (which is not always a good thing)
Probabilistic Reasoning – Weights to factors, values to situation
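“Weights to factors, values to the situation” is worth unpacking with a sketch. The factor names, weights, and options below are invented for illustration; the technique is just a weighted sum over situational values:

```python
# Minimal sketch of weights-times-factors reasoning: each option is
# scored by a weighted sum of situational factor values in [0, 1], and
# the AI picks the highest-scoring option. Names and numbers invented.
WEIGHTS = {"military_threat": 0.5, "economy": 0.3, "tech_lead": 0.2}

def score_option(situation):
    """situation: dict mapping factor name -> value in [0, 1]."""
    return sum(WEIGHTS[f] * situation.get(f, 0.0) for f in WEIGHTS)

build_army   = {"military_threat": 0.9, "economy": 0.2, "tech_lead": 0.1}
build_market = {"military_threat": 0.1, "economy": 0.9, "tech_lead": 0.3}

options = {"build_army": build_army, "build_market": build_market}
best = max(options, key=lambda name: score_option(options[name]))
print(best)   # build_army (0.53 vs 0.38)
```

The appeal for a data-driven design like Civ IV’s is that the weights live in data, not code, so tuning (or modding) the AI means editing numbers rather than rewriting scripts.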
Data-driven Mods – the AI was stand-alone, compiled into a DLL. CvGameCoreDLL.dll was 100% independent of the engine.
Soren Johnson, AI guru behind Civ 3 and Civ 4, posted an interesting tidbit on his “Designer Notes” blog entitled “A Farewell to Civ”. He mentioned that since he is no longer with Firaxis (having moved over to the Spore team), his GDC lecture next week will really be his last hurrah in speaking about his work with the Civ series.
One thing’s for sure, I will be avoiding writing about Civ 4 in Post-Play’em until after I attend this lecture. One quote caught my eye:
Essentially, I will be talking about the difference between thinking of the AI as the player’s opponent and thinking of it as simply an extension of the core game design (what one might call the difference between “good” AI and “fun” AI). There will also be a long section on AI cheating – the bane of my existence for many years – concerning which type of cheats are acceptable to players and which type are not, using Civ as an extensive case study. Further, I hope to prove that, for Civ at least, there is no such thing as – and never could be – a “fair” difficulty level where the AI is playing the same game as the human. Your mileage, of course, might vary.
I don’t want to make the mistake of assuming something he did was cheating, or something I thought was cheating was actually brilliant AI work on his part!
Cheating or not, he has done some exceptional work in making Civ 3 and Civ 4 an absolute delight to play… even as an AI guy. I’m looking forward to meeting him.
Look for updates from GDC both here and in the IA News blog.
The author makes some great points. One is about the wall that developers seem to run up against: we can make for some great “live” behavior that looks new and fresh – until we run out of assets (via space on the CD or dollars in the budget) and we start to repeat those fresh behaviors. At that point, the facade is exposed and things start to get stale. It goes to show that, at least with today’s technological limitations, the Turing Test will always fail as long as there is no time limit on the test. Until we can break away from entirely designer-constructed content, our AIs will eventually expose themselves.
The second point is something that is actually born of the past, when we couldn’t stuff a lot of fresh content into our agents – we faked it. He points to a technique that has been used over and over: let the player’s mind be the best brush for coloring in the AI. There is a lot of power in that. However, one caveat is that we can’t necessarily tell what the player is going to be thinking. Sometimes this is good and sometimes it can make for disappointment. Still, it fleshes out the perception of our AI without a lot of effort.
The solution seems to be a constant balancing act between two extremes. What really needs to be modeled in great detail and what can I fake entirely? Aahh, such is the quandary.
Anyway, I enjoyed the reading and personally plan to keep checking on neuRAI. Good work so far!
I was browsing around on various AI sites, and I came upon a link to a presentation titled “The Illusion of Intelligence: The Integration of AI and Level Design in Halo” that was given at the Game Developers Conference in 2002 by Chris Butcher and Jaime Griesemer of Bungie. That was the year of my first GDC, but I can’t remember if I made it to this session or not. I tried to hit every AI session there was, but sometimes there were conflicts. I probably still have my 2002 GDC stuff around here with the session list and map – I bet I checked it off. Enough musing about my conference history, however.
The presentation is interesting in that it provides a peek into some of the design mindsets of the developers. They fully admit that AI cheating and “faking it” is a viable methodology – and that the player will usually buy it because they want to.
Like it or hate it, Halo was and is a prominent fixture in the game world – and this presentation gives a great peek inside for AI developers and players alike. Since it was a GDC lecture, there isn’t any serious code to wade through. Because of that, it’s something that can be digested by a wide audience with only the occasional stumble over esoteric industry terminology. It’s a good read.