10 days in and I still have not played Skyrim. I’ve been too busy. However, that doesn’t stop me from seeing what other people have pointed out. Given that it is a PC-playable game, there is no shortage of YouTube videos showing many of the plusses and minuses of it. (There seem to be plenty of both.) If I took the time to analyze each one that I saw, I would never get anything done and would have even less time to play it myself. That said, some things are too good to pass up.
This video came to my attention via someone on Twitter and I thought it was worth a mention.
This is something that is so easily fixed that it is spectacular that this even occurs.
Obviously, our poor Lydia is having a difficult time with this gate trap. The problem is, she really shouldn’t. While we can understand Lydia getting whacked the first time (after all, that’s what traps are all about, right?) why is it that she persists in trying to go through the same area? This is something that is so easily fixed — even with likely existing tech and logic — that it is spectacular that this even occurs.
The short version of why this is happening can likely be summed up as follows:
The pathfinding engine in Skyrim is a waypoint graph rather than a navmesh. The edge that she is trying to follow runs right through the middle of that gate and, therefore, right over the trap trigger.
Even when knocked off the graph, her top movement priority is to get back on the path at the nearest node. This is why she moves to the center of the hall instead of just moving along the left (her left, our right) wall towards the player.
She has no recollection of getting hit by the gate. Therefore, nothing about her processing is different in each iteration.
Even if she recalled that the gate is the problem and was able to understand that the trigger stone was the issue, on a waypoint graph she has no way to really steer around the stone anyway.
When she is stuck behind the gate against the wall, she has no realization that she is stuck… therefore, she keeps “running”.
As you can tell, this could be remedied fairly simply. First, for the pathfinding issue, a navmesh would be very helpful here. (For an excellent treatment on waypoint graphs vs. navmeshes, see Paul Tozour’s post on Game/AI, Fixing Pathfinding Once and for All.) That way, the stone could be created as a separate mesh polygon and, upon discovery, marked as something to avoid.
Of course, the above is premised on the idea that the stone can be “discovered” in the first place. Certainly, Lydia managed to “discover” the stone when she got whacked the first time. What she failed to do was make a mental note (in an e– sort of way) of its existence. It is at this point that the AI started to look stupid. Again, not really all that hard to handle. In fact, by simply doing what I suggested above (marking up the navmesh as being unusable), this becomes implied by her subsequent behavior of avoiding the stone. No animations, voice barks, etc. needed. She just doesn’t do the same thing twice.
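To make the fix concrete, here is a minimal sketch of the idea (the class names, graph layout, and costs are all mine for illustration — this is not Skyrim’s actual engine): the trap’s trigger stone lives on its own navmesh polygon, and the first whack marks that polygon as blocked so the pathfinder routes around it from then on.

```python
# Hypothetical sketch: an NPC damaged by a trap marks the offending navmesh
# polygon as blocked; subsequent pathfinding simply never uses it again.
import heapq

class NavMesh:
    def __init__(self, neighbors):
        # neighbors: {poly_id: [(adjacent_poly_id, traversal_cost), ...]}
        self.neighbors = neighbors
        self.blocked = set()

    def mark_blocked(self, poly_id):
        """Called when an NPC 'discovers' a trap on this polygon."""
        self.blocked.add(poly_id)

    def shortest_path(self, start, goal):
        """Plain Dijkstra over polygons; blocked polygons are skipped."""
        frontier = [(0, start, [start])]
        visited = set()
        while frontier:
            cost, poly, path = heapq.heappop(frontier)
            if poly == goal:
                return path
            if poly in visited:
                continue
            visited.add(poly)
            for nxt, step in self.neighbors.get(poly, []):
                if nxt not in self.blocked and nxt not in visited:
                    heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
        return None  # no route exists that avoids blocked polygons

# Toy hallway: A - B - C straight through, with side polygon D bypassing
# B (the trigger stone's polygon).
mesh = NavMesh({
    "A": [("B", 1), ("D", 2)],
    "B": [("C", 1)],
    "D": [("C", 2)],
})
print(mesh.shortest_path("A", "C"))   # ['A', 'B', 'C'] - straight over the stone
mesh.mark_blocked("B")                # Lydia gets whacked once...
print(mesh.shortest_path("A", "C"))   # ['A', 'D', 'C'] - ...and avoids it after
```

No animation or dialogue is needed; the “memory” of the trap is just one entry in a set, and the avoidance behavior falls out of the search for free.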
The being stuck behind the gate thing is actually a separate problem entirely and I won’t bother to address the details here. Suffice to say, however, that it is partially similar in that it is based on the notion that NPCs rarely have a sense of “futility”.
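That said, a bare-bones “futility” check is cheap to sketch. This is my own illustration of the notion, not how Bethesda structures anything: if an NPC has been trying to move but its position has barely changed over a window of frames, give up on the current plan and replan instead of running in place forever.

```python
# Minimal stuck/futility detector: compare the current position against the
# position from N frames ago; if the NPC has covered almost no ground while
# trying to move, flag it as stuck.
from collections import deque
import math

class StuckDetector:
    def __init__(self, window=30, min_progress=1.0):
        self.positions = deque(maxlen=window)  # last N frames of (x, y)
        self.min_progress = min_progress       # distance we expect to cover

    def update(self, x, y):
        """Record this frame's position; return True if the NPC looks stuck."""
        self.positions.append((x, y))
        if len(self.positions) < self.positions.maxlen:
            return False  # not enough history yet
        ox, oy = self.positions[0]
        return math.hypot(x - ox, y - oy) < self.min_progress

detector = StuckDetector(window=5, min_progress=0.5)
# Running against a wall: the position jitters but goes nowhere.
for _ in range(4):
    detector.update(10.0, 3.0)        # still filling the window
print(detector.update(10.1, 3.0))     # True - time to replan, not keep "running"
```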
Anyway, I just thought that this was worthy of note specifically because the solution is so easy to implement. It makes me wonder why major studios can advance so much in some aspects of their AI and yet have such glaring holes in other areas. I suppose that’s why we now have the AI Game Programmers Guild and the GDC AI Summit. We are trying to share enough information between us that we are lifting the floor of game AI as a whole.
I admit that, despite it being 11/11/11, I haven’t played Skyrim. I don’t even know if I will be able to get to it for a few weeks. However, that doesn’t stop the barrage of information from people playing it. I am trying to avoid most of the breathy reports from my friends and colleagues around the internet. However, this one kept popping up on my Twitter feed so I figured I would take a look.
The title of this YouTube video is “How to steal in Skyrim.” When I clicked on it, I really didn’t know what to expect. I figured it was going to either be a boring instructional video or a blooper reel. I suppose it goes in both categories, for whatever that’s worth. However, it is an instructional video for game AI developers and designers alike.
What you are seeing is an attempt by Bethesda to mitigate a problem that has plagued RPGs since their inception — that of rampant stealing from houses and shops. Usually, one can do this right in front of people and no one seems to care. One poor solution was to mark objects as being people’s possessions and alert that person when they are stolen. However, that pretty much eliminates the notion that you could steal something when that person is not around… kind of a “page 1” lesson in the Book of Stealing, really.
Every person in the game is completely cool with me just placing large objects over their head?
What Bethesda obviously did here is perform a line-of-sight check to the player. If the player grabs something that isn’t normal, react to it. In the case of the lady at the table, she simply queries the player about what he is doing. In the case of the shopkeeper, he reacts badly when the player takes something of his. All of this is well and good. However, when the line of sight is blocked (in this case by putting something over their heads), they can no longer see the player taking something and, therefore, don’t react.
But what about the reaction that should take place when you put a bucket over the person’s head? Are you telling me that every person in the game is completely cool with me just placing large objects over their head? It certainly looks that way!
The lesson here is that we either can’t think of every possible action the player could perform in the game or we simply do not have the resources to deal with it — for example, by having the player protest and remove the ill-designed helmet.
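For illustration, here is roughly the witness logic the video implies, plus the cheap patch. Every class, function, and NPC name here is hypothetical — a sketch of the idea, not Bethesda’s code: treat “something was just placed on my head” as its own perceivable event instead of relying solely on the line-of-sight theft check.

```python
# Sketch: the naive witness only reacts to theft when an unobstructed ray
# reaches the player, so a bucket over the head silences it. The fix is to
# also react to the occluder being placed in the first place.

def raycast_clear(witness, player, occluders):
    """Stand-in for an engine raycast: blocked if any occluder covers the witness."""
    return not any(o.covers(witness) for o in occluders)

class Bucket:
    def __init__(self, wearer):
        self.wearer = wearer
    def covers(self, npc):
        return npc is self.wearer

class NPC:
    def __init__(self, name):
        self.name = name
        self.barks = []

    def on_item_placed_on_head(self, item):
        # The missing reaction: protest instead of silently accepting headgear.
        self.barks.append(f"{self.name}: 'Hey! Get this {type(item).__name__} off me!'")

    def on_theft(self, player, occluders):
        # The reaction the game does have: fires only with clear line of sight.
        if raycast_clear(self, player, occluders):
            self.barks.append(f"{self.name}: 'Thief! Guards!'")

shopkeeper = NPC("Belethor")
bucket = Bucket(wearer=shopkeeper)
shopkeeper.on_item_placed_on_head(bucket)   # reacts to the bucket itself
shopkeeper.on_theft("player", [bucket])     # LOS blocked: no theft bark
print(shopkeeper.barks)
```

The point is that the theft check itself is fine; it is the unmodeled “object on my head” event that makes the whole scene collapse.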
“But why would the player want to do that?”
In the past (and I mean >10 years ago), when the player’s interaction with the world was simpler, many of the faux pas would have stemmed from the former reason. We just didn’t bother to think about the possible actions. The pervasive mentality was simply, “but why would the player want to do that?” Of course, players did do things like that — but given the limited worlds that we existed in, the ramifications weren’t that huge. We were far enough away from the proverbial “uncanny valley” that we simply accepted that the simulation didn’t model that sort of thing and we moved on.
Adding one mechanic to the game could have hundreds of different possible applications.
More recently, as games have allowed the player even more interaction with the world, there is a necessary exponential explosion of possible results for those actions. That is, simply adding one mechanic to the game could have hundreds of different possible applications. When you figure that game mechanics can be combined so as to intersect in the world, the potential space of resulting interactions is mind-numbingly complex. The problem then becomes, how do I account for all of this as a game developer?
One game that began simulating this stuff on an almost too deep level was Dwarf Fortress. I admit going through a DF kick last spring and it almost killed me. (It was like experimenting with powerful drugs, I suppose.) Pretty much everything in that world interacts with everything else in a realistic way. The rulebase for those interactions is spectacular. However, pretty much the only way they can pull it off is because their world is represented iconically rather than in the modern, 3D, photo-realistic way. For DF, creating a new visual game asset is as simple as scouring the text character library for something they haven’t used yet and making a note of the ASCII code. In Skyrim (and all modern games of its ilk), the process of creating an asset and all its associated animations is slightly more involved. Or so the rumor would have it.
Given the example in the video above, DF (or any other text-based game) could simply respond, “the lady removes the bucket and yells obscenities at you.” Problem solved. In Skyrim, they would specifically have to animate removing things from their head and hope their IK model can handle grasping the bucket no matter where the physics engine has placed it.
What occurred in the video isn’t necessarily a failing of AI.
So there’s the problem. What occurred in the video isn’t necessarily a failing of AI. We AI programmers could rather simply model something like, “you’re messing with [my body] and I don’t like it.” It just wouldn’t do us a lot of good if we can’t model it in an appropriate way in-game.
This bottleneck can apply to a lot of things. I could represent complex emotional states on finely graduated continua, but until the animation of facial expressions and body language can be modeled quickly and to a finer degree of realism, it doesn’t do anyone any good. No one will ever see that the character is 0.27 annoyed, 0.12 fearful, and 0.63 excited.
In the meantime, rest assured that the hive mind of the gaming public will think of all sorts of ways to screw with our game. Sure, they will find stuff that we haven’t thought of. It’s far more likely, however, that we did think of it and we were simply unable to deal with it given the technology and budget constraints.
And to the gaming public who thinks that this is actually a sign of bad AI programming? Here… I’ve got a bucket for you. It looks like this:
Just saw another observation in a game review that really isn’t all that specific to the one game they were reviewing. In this case, it was a CNet review of the game Alpha Protocol, by Obsidian Entertainment. A few of the observations about the AI were corroborated by other reviews elsewhere, but this one is the most detailed. (Most of the meat about the AI is on page 2 of the review.) The author doesn’t dance around the issue, either:
The AI is pretty dreadful. Security agents and mercenaries run about the levels in haphazard ways, may start climbing ladders as you fill them with lead, will kneel on top of exploding barrels, or might stare directly at you but fail to react unless you take a shot or give them a good punch. There’s a weird sense of randomness to your enemies’ behavior that diminishes the impact firefights may have had.
This is a common series of complaints about modern game AI. 10-20 years ago, this is mostly what we expected. Actually, to correct myself, what we usually got was enemies that turned to face us and then moved toward us in a line until we mowed them down. In the days of Doom and Quake, that was OK. Today? Not so much. Running around randomly provides a sense of activity and motion but immediately begins to trigger our sense of wrongness about the situation. This is especially noticeable as the dichotomy between excellent graphics and poor AI spreads. In any enemy that we can remotely anthropomorphize, this effect is even worse because we have an image in our head of what a human-like character should be doing in any given situation.
Where I would like to diverge from the author of the review is in the word “random”. We need to be careful with that word. Both in my book, Behavioral Mathematics for Game AI, and in a number of the conference lectures I have given on the subject, I speak about the benefits of using randomness to provide variation in behavior selection. Games of all types are beginning to use these techniques as well.
For example, Richard Evans (Maxis) has spoken numerous times on the decision model for the Sims 3. The Sims go about a process of rating all the available behaviors and scoring them as to how well they match what the Sim wants or needs at the time. Then, they select from the top choices using what we call a “weighted random”. All of the actions are “in play” proportionate to their score, but the better the selection, the more likely it will be chosen. Is there randomness there? Yes, to provide variation. Does it look random? Not really. The reason is because each of the potential selections is actually fairly reasonable at the time — the result of the scoring system. To us as observers, we don’t view this as “random” — just “interesting”.
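A weighted random of this kind is only a few lines. The sketch below is my generic version of the idea, not Maxis’s actual code, and the actions and scores are invented: score each candidate behavior, then sample in proportion to score so the best options dominate without being the only ones ever seen.

```python
# Weighted-random behavior selection: sample an action with probability
# proportional to its utility score.
import random

def weighted_choice(scored_actions, rng=random):
    """scored_actions: list of (action, score >= 0). Higher score = more likely."""
    total = sum(score for _, score in scored_actions)
    roll = rng.uniform(0, total)
    for action, score in scored_actions:
        roll -= score
        if roll <= 0:
            return action
    return scored_actions[-1][0]  # guard against float round-off

# Hypothetical Sim needs, already scored by some utility function.
actions = [("eat", 0.7), ("sleep", 0.2), ("watch TV", 0.1)]
random.seed(42)
picks = [weighted_choice(actions) for _ in range(1000)]
print({a: picks.count(a) for a, _ in actions})  # roughly 700 / 200 / 100
```

Because every candidate was reasonable before the dice were rolled, the variation reads as “interesting” rather than “random” — which is exactly the distinction at issue here.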
On the other hand, it seems that the behavior selection in Alpha Protocol looks random to the observer because either:
The behavior selection is, indeed, random, or
The behavior scoring algorithm is so poor that it doesn’t properly give the advantage to reasonable-looking actions.
Oftentimes, behaviors will be chosen without consulting the current world state.
Either way, something is amiss. There is a third option here as well. Oftentimes, behaviors will be chosen without consulting the current world state. The “climbing ladders while you’re filling them with lead” bit might be an example of that, but the observation where the enemy is seemingly unaware of you is a much better example. The bottom line is that the hardest part of designing AI is providing the adequate knowledge representation of the world state so that we can reason on it and make contextually appropriate decisions.
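That third option is easy to illustrate. In this hypothetical sketch (the behaviors, preconditions, and utility numbers are all invented for the example), each behavior is gated on a check of the current world state, so “climb the ladder” can never be selected while the agent is taking fire.

```python
# World-state-aware behavior selection: a behavior is only eligible if its
# precondition holds against a shared blackboard of world state.

def climb_ladder_ok(state):
    return state["near_ladder"] and not state["taking_fire"]

def take_cover_ok(state):
    return state["taking_fire"]

BEHAVIORS = [
    # (name, precondition, base utility) - all illustrative numbers
    ("climb_ladder", climb_ladder_ok, 0.4),
    ("take_cover",   take_cover_ok,   0.9),
    ("patrol",       lambda s: True,  0.1),
]

def select_behavior(state):
    valid = [(name, util) for name, pre, util in BEHAVIORS if pre(state)]
    return max(valid, key=lambda t: t[1])[0]  # highest-utility valid behavior

state = {"near_ladder": True, "taking_fire": True}
print(select_behavior(state))   # 'take_cover' - never the ladder under fire
state["taking_fire"] = False
print(select_behavior(state))   # 'climb_ladder'
```

The hard part, as noted above, is not this selection loop; it is keeping the `state` blackboard populated with accurate, timely knowledge of the world.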
From a gameplay standpoint, the behaviors above can lead to some serious disappointment. To go back to the article:
Yet Alpha Protocol is no more a proper stealth game than it is a shooter. As with the shooting, the inconsistent AI provides a major hindrance… [snip]… Sneaking up on an enemy and taking him down with a minimum of fuss is mildly rewarding, as it tends to be in most games. But the actions you take leading up to that point involve activating certain skills and scurrying around in your silly crouched position–not outsmarting sharp AI or using the environment in clever ways.
With a generally random-acting AI, we aren’t outwitting anything.
We have had enough stealth games under our belt as an industry that we have primed the consumer with expectations of what to expect. Games ranging from Thief to Splinter Cell have shown us that the best part of stealth games is not just surprising an enemy… but outwitting him when he is actively trying to prevent you from surprising him. With a generally random-acting AI, we aren’t outwitting anything. The “surprise” aspect of it comes from merely staying out of his view. For all intents and purposes, that gameplay mechanic goes back to arcade shooters from the early 80’s. Haven’t we grown out of that yet?
Unfortunately, in a throw-back to my column earlier this week, the review goes on to compliment some of the other production values (though not nearly in such glowing terms as the review of Lost Planet 2):
Alpha Protocol is not ugly, however; it’s just behind the times and artistically uninspired. Nevertheless, the safe houses Mike operates from between missions have some nice views, and some of the outdoor missions throw in some welcome flashes of color. Similarly, the sound design gets the job done, though without much style. The voice acting is at least solid, and the generic action-movie soundtrack ramps up at the right moments but otherwise stays out of the way.
Congratulations, I suppose. The problem is, our consumer public wants more than pretty pictures and nice sound. They are getting used to all of that and are now complaining about things being dumb. Again, this goes well beyond Alpha Protocol. This review can be copy-and-pasted to other games in as much of a templated fashion as trashy romance novels. There are way too many games that are missing the boat on this. Granted, as I mentioned in the Lost Planet 2 column, good AI developers are increasingly hard to find. That may be the case with Obsidian. I don’t know. (What I do know is that it’s time for me to give a ring to their HR guy.)
This is something that I have been seeing for a long time now. I’m sure we all have… and for good reason: A lot of times it is true. Despite what I said in my 7 minutes at the AI Devs Rant at the 2010 GDC about how reviewers like to bitch about bad AI, unfortunately too often it is justified. The juxtaposition of an otherwise good game with poorly executed AI development is a bit more tragic, however. That doesn’t point to a case of a smaller budget game. It’s an example of a well-funded project with either a priority or a talent problem.
The storyline is thinly tied together and barely cohesive.
OK… admittedly he gripes about the storyline. A lot of times that’s simply because you are doing a sequel of an established IP. Moving on… (emphasis mine):
The graphics are beautiful, and the environments are varied. Lush jungle or swamp areas appear suddenly in the midst of the glacial ice fields, with waterfalls and towering trees, and then there’s parched desertscapes or weather-battered coastal regions.
Not only is the landscape new and varied, but the Akrid, the natural inhabitants of the planet E.D.N. III, are back with new shapes, new designs and a whole new set of (usually grumpy) attitudes. The scale of the largest of the creatures, the “Cat-G type,” is impressive to say the least. Like, God of War III-scale impressive.
The voice acting, for the most part, is well-done and up to the task. The music itself, however, is excellent. Swelling orchestral pieces accentuate the action sequences, and give the game an epic feel that would have been missing if they had used some “generic rock-style track #2” soundtrack. Well done, Capcom.
Ok… so we have this love fest on the environment, the character modeling, the voice acting, and the soundtrack (he even mentions later that he would love to have a CD of the soundtrack). So what can possibly be wrong? Here’s a montage (again, emphasis mine):
The game is designed from the ground up to take advantage of four-player cooperative play. And heaven help you if you don’t have friends to play the game with. As 1UP.com states, “Brain-dead, unhelpful, and unresponsive, the computer-controlled team members are a liability rather than a resource.”
You truly, truly need a human companion or three to completely appreciate what Lost Planet 2 has to offer. For example, during one big boss battle, there are four separate tasks that need to be completed simultaneously. With four humans working together, this bit of teamwork wouldn’t be too difficult. Unfortunately, if you’re playing solo with AI teammates, you’re pretty much left to a snarled tangle of frustration and trial-and-error.
The level design is adequate, but I think that too much emphasis was put on the multiplayer portion, and not enough consideration for the solo player who will be reduced to using the criminally stupid AI companions.
Damn… so we have a Left 4 Dead-style game that is based around the idea that you have to cooperate with your teammates in order to not only survive but to actually complete mandatory parts of the campaign… and yet they don’t provide you with the companions that can do so.
Early on we were all letting our enemies die quickly because we lacked the capability to make them smarter.
In the era of single-player, shooting gallery-style games, having sub-par AI wasn’t too bad. After all, our fallback mantra was “the enemy won’t live long enough to show off his AI anyway.” I knew that was just a bad crutch when we were all saying it. The truth is, early on we were all letting our enemies die quickly because we lacked the capability to make them smarter! We were actually relieved that our characters were dying quickly. That managed to fit well with our other AI mantra: “Don’t let the AI do anything stupid.” Unfortunately, the chance of the AI doing something stupid rose exponentially with the amount of time that it was visible (and alive).
We are spending all this money on graphics, animation, voice actors, musicians… and leaving our AI to fester like an open sore.
Now that we are expecting AI teammates or squadmates or companions to come along for the ride, we have a much harder challenge. (Back in 2008, I wrote about this in my (at the time) regular column on AIGameDev in an article titled “The Art of AI Sidekicks: Making Sure Robin Doesn’t Suck”.) The problem is, we are spending all this money on graphics, animation, voice actors, musicians… and leaving our AI to fester like an open sore. Certainly, it takes more money and time to develop really good AI than it does to do a soundtrack (I can speak to this, by the way… I was a professional musician a long time ago and am perfectly comfortable with anything from writing, arranging, and recording multi-track electronic grooves to penning entire sweeping orchestral scores. But that was in a previous life.), but it seems like a little effort might be called for. After all, the necessity of multi-player was built into the game design from the start… the necessity of a lush soundtrack was not.
To the defense of game companies, however, I’m very aware that good AI people are exceedingly and increasingly hard to find. The focus of the industry has changed in the past few years so that companies are trying to do better. However, that often means a lot more AI-dedicated manpower than they have. With many companies trying to find AI people all over the place, demand has really outstripped supply. Some companies have had ads up for AI programmers for 6 to 9 months! They just aren’t out there.
So it isn’t always that the company doesn’t care or won’t spend the money on it. It’s often just the fact that AI is a very difficult problem that calls for a very deep skill set. Unfortunately, most of the game programs that exist really don’t even address game AI beyond “this is a state machine”. Academic AI programs are good for “real world AI” but don’t apply to the challenges that the game industry needs. Unfortunately, many academic AI institutions and their students don’t know this until they are rebuffed for suggesting very academia-steeped techniques that will fall flat in practice. (And no, a neural network would not have saved the AI in Lost Planet 2.)
So… in the mean time, here’s the suggestion: If your people don’t have the chops to make the required minimum AI, don’t design a game mechanic that needs that AI.
I was looking at a review of the new football game, Backbreaker, by the blogger Pastapadre, and I found an interesting combination of observations. First, for those that don’t know, Backbreaker is a football game developed by NaturalMotion. They are known first and foremost for their Euphoria physics engine, which creates contextually realistic human body motion. Seeing as one of the biggest complaints about sports games (and football ones in particular) is that the human body physics begins to look canned and repetitive, you would think NaturalMotion had a bit of a head start in that area. The problem is, that isn’t all people gripe about with football games.
While I commend NaturalMotion for attempting to move things forward in this area, there are plenty of things that need to be addressed, if not solved, if the genre is to advance further. Physics isn’t necessarily on the top of the list. But hey, that’s what they do.
…coordinating 11 people being interfered with by 11 other people is a tall order.
This is particularly close to my heart because I’m an AI designer and a huge football fan. I am especially fond of football because of the deep intricacy of the team-based strategy that has to happen on every play. Of course, this is exactly the issue that is the hardest to address from an AI standpoint. Sports games (again, football ones in particular) are ridiculously hard to craft good AI for. For an industry that struggles to put together FPS squad tactics for 2 or 3 people, coordinating 11 people supposedly working together while being interfered with by 11 other people who are also working together is a tall order. The Madden franchise has been doing a passable job of this for some time. Sure, there are golden paths that bubble to the surface all the time, but those seem to be fewer and farther between.
Anyway, in this review, the author points out some interesting frustrations. He addresses it briefly in the first paragraph but I believe it summarizes things well (emphasis mine):
Reaction has been mixed with most gamers enjoying the Euphoria physics, polarization on the single camera angle, and the troubling CPU AI leading to the most concern.
(Brief aside: Who uses “CPU AI”? Not only is that redundant, it says the same thing twice.)
I will skip over his impressions of how Euphoria works. If you want to know all that happy-happy stuff, you can watch a Euphoria sales reel. I will address the AI-specific stuff. He goes on to comment about some of the specifics of how the AI falls flat on its face (emphasis mine).
The offensive output by the CPU has been pitiful. I’ve yet to give up more than a couple first downs on a single drive and still haven’t been scored on. The biggest reason is that the CPU turns the ball over a lot. In four minutes of gameplay it’s been close to an average of three picks thrown by the CPU. In the final demo video I posted I had three picks in three drives off the best offensive team in the game. That was with me being out of the play in all instances and the CPU just making bad passes.
This is summed up by the clincher which is my point here:
No matter how great the physics are I would not be able to play a football game if the CPU throws 10+ picks each time out.
No matter how great the physics are I would not be able to play a football game…
This certainly seems to be an example of tunnel-vision on a pet feature while ignoring (or being incapable of addressing) the rest of the game that people actually want to experience. Is this a Euphoria sales demo or a football game? This is something that is more prevalent in the game industry than we care to admit. It isn’t just Euphoria (or even physics) as the bad guy either. Swap in “game design”, “story”, “cool weapons”, “sexy chick outfits”, “huge environments” or whatever. AI is often the expression of your world. If your AI is broken, it severs the emotional connection to the game.
The CPU goes with a jump pass way too often, whether it be springing forward or backwards, many times resulting in an interception. These aren’t instances where jumping to make a pass even would make some sense as the CPU would have been better off with their feet set.
Again, this apparently is broken decision logic. For those that don’t know football, in the pros a “jump pass” is a rare event only used in certain situations. Commentators will typically hammer on a QB for not throwing with his feet set. In fact, theoretically, you could do a football game without even including an animation for a jump pass and no one would really notice all that much. Therefore, for the author to notice that this is happening too often is rather telling.
The CPU defensive back AI has been terrible in instances where they aren’t running in stride. When they continue to run in stride they seem to play the ball pretty well. If they stop (like on a comeback route or a pass lobbed up for grabs) they’ll start to go the wrong way, make a terrible attempt at the ball, or just stand there. Several times I’ve completed passes with multiple defenders in the area who played the ball horribly wrong. They’ve just stood there and watched the ball go over their heads or watched the receiver make an easy catch.
Again, I’m guessing this is laziness on the part of the development team, not knowing about football, or an inability to solve the problem. I hope it is the last of those. The second is not acceptable if you are actually making a football game. The first… well…
A few more. Apparently it is not just the player AI that is troubling:
Penalties have been really iffy to say the least. I’ve seen roughing the passer called in multiple situations when it shouldn’t have been. I have seen a pass interference [im]properly called in two instances, called once when there was clearly no interference, and in several other situations seen receivers taken completely out before the ball arrives and no penalty called. There also seems to be an issue with roughing the kicker (primarily on punts) where your CPU controlled players commit the penalty way too often and out of the user’s control. I haven’t seen this one much but it has been widely reported.
So this has to do with the logic for detecting penalty situations. These should be, in effect, simple rule-based systems. For example,
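Here is what a couple of such rules might look like as a sketch. Every field name and threshold below is my own invention, not anything from Backbreaker: each rule inspects the play record after the whistle and either returns a flag or stays silent.

```python
# Rule-based penalty detection: each rule is a pure function over the play
# record; the officiating pass just collects whatever flags fire.

def roughing_the_passer(play):
    # Hit on the QB clearly after the ball is gone (threshold is illustrative).
    if play["qb_hit_time"] is not None and \
       play["qb_hit_time"] - play["release_time"] > 1.0:
        return "roughing the passer"

def pass_interference(play):
    # Contact on the receiver before a catchable ball arrives.
    if play["contact_time"] is not None and \
       play["contact_time"] < play["ball_arrival_time"] and play["catchable"]:
        return "pass interference"

RULES = [roughing_the_passer, pass_interference]

def flags_for(play):
    return [flag for rule in RULES if (flag := rule(play)) is not None]

play = {
    "release_time": 2.0, "qb_hit_time": 3.5,   # hit 1.5s after the release
    "contact_time": None, "ball_arrival_time": 4.0, "catchable": True,
}
print(flags_for(play))  # ['roughing the passer']
```

The real work is recording accurate event times for each play; once that data exists, the rules themselves are nearly static lookups. If you are botching something this deterministic, the contextual stuff has little hope.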
If you are botching up your static rule-based systems, then doing the contextual player-reaction AI is going to be a bitch.
Naturally, bad AI tends to lead to exploits:
Exploits have already been found with QB sneaks and the blocking of punts and field goals. These things could really damage the online play experience. The QB sneak problem, combined with the ability to no-huddle because of the lack of fatigue and not having to worry about injuries, could ruin online play. If blocked punts and kicks are prevalent online everyone will end up going for it on 4th downs.
If there is an obvious dominant strategy, you have now taken Sid Meier’s “interesting choices” and condensed them down into “choose this to win”.
This is the natural result… and is always a game-killer. If there is an obvious dominant strategy, you have now taken Sid Meier’s “interesting choices” and condensed them down into “choose this to win”. Many games with bad AI could still thrive in the online world. However, in a game where you only control 9% of your team and are entirely dependent on the other 91% for success, you can still do all the right things and still get rolled. That is not fun.
My point with all of this really has very little to do with the game itself and really less to do with the Euphoria engine. In fact, a quick browse through YouTube shows that there are some people who think the AI is just fine (although watching the videos and descriptions shows that people don’t really know what AI is or what good AI might look like). That being said, your mileage may vary. My point was that of the juxtaposition of the two points the author was making:Â you need more than pretty physics to make a compelling game.
This is really only a modified version of the graphics vs. AI debates.
This is really only a modified version of the graphics vs. AI debates. Originally, studios made pretty games with bad AI (and even bad physics). Now we seem to have moved on to making better physics… and with products like Euphoria, even better physics that take the load off of AI programmers trying to figure out what human reactions should be. None of that solves stupid AI play, though. And until we do that, we are going to be seeing otherwise decent games get shelved.
In the first chapter of my book, “Behavioral Mathematics for Game AI,” I actually use Poker as a sort of “jumping off point” for a discussion on why decision-making in AI is an important part of the game design. I compare it to games such as Tic-Tac-Toe where the only decision is “if you want to not lose, move here,” Rock-Paper-Scissors where (for most people) the decision is essentially random, and Blackjack where your opponent (the dealer) has specific rules that he has to play by (i.e., hit below 17, stand on 17 or above).
Poker, on the other hand (<- that's a joke), is interesting because your intelligent decisions have to incorporate the simple fact that the other player(s) is making intelligent decisions as well. It's not enough to look at your own hand and your own odds and make a decision from that. You have to take into account what the other player is doing and attempt to ferret out what he might be thinking. Conversely, your opponent must have the same perceptions of you and what it is you might be thinking and, therefore, holding in your hand.
Thankfully, there is no “perfect solution” that a Poker player can follow due to the imperfect information. However, in other games, there are “best options” that can be selected. If our agents always select the best options, not only do we run the risk of making them too difficult (“Boom! Head shot!”) but they also all tend to look exactly the same. After all, put 100 people in the exact same complex situation and there will be plenty of different reactions. In fact, very few of them will choose what might be “the best” option. (I cover this extensively in Chapter 6 of my book, “Rational vs. Irrational Behavior.”)
Our computerized AI algorithms, however, excel greatly at determining “the best” something, whether it be an angle for a snooker shot (the author’s other example), a head shot, or the “shortest path” as determined by A*. After all, that is A*’s selling point… “it is guaranteed to return the shortest path if one exists.” Congratulations… you’re inhumanly perfect. Therefore, as the author points out in his article, generating intelligent-looking mistakes is a necessary challenge. Thankfully, in later chapters of “Behavioral Mathematics…” I propose a number of solutions to this problem that can be easily implemented.
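One simple way to generate those intelligent-looking mistakes is to compute the machine-perfect answer first and then perturb it by a skill-scaled error. This particular sketch is mine — a generic illustration in the spirit of the discussion, not a recipe lifted from the book:

```python
# Skill-scaled aiming error: a perfect solver picks the ideal angle, then a
# Gaussian perturbation whose spread shrinks with skill makes it human.
import random

def aim_with_error(perfect_angle_deg, skill, rng=random):
    """skill in [0, 1]: 1.0 = machine-perfect, 0.0 = very sloppy.
    Returns the angle the AI actually shoots at."""
    max_stddev_deg = 5.0                      # worst-case spread (tunable)
    stddev = (1.0 - skill) * max_stddev_deg
    return perfect_angle_deg + rng.gauss(0.0, stddev)

random.seed(7)
perfect = 33.0  # the solver's exact snooker-shot angle, say
for skill in (1.0, 0.8, 0.3):
    print(skill, round(aim_with_error(perfect, skill), 2))
# skill 1.0 always returns exactly 33.0; lower skill drifts further from it
```

Because the error is centered on the correct answer, the agent still looks like it knows what it is trying to do; it just doesn’t execute perfectly, which is exactly how a human miss reads.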
Anyway, I find it an interesting quandary that we have to approach behavior from both sides. That is, how do we make our AI more intelligent… and, how do we make our AI less accurate? Kind of an odd position to be in, isn’t it?
OK… this is one of the indications that our sliver of the industry needs to step it up a bit. Here’s an article on GamesRadar that is chock-full of YouTube videos showing AI doing stupid things. Portrayed are some recurrent suspects such as Crysis, Assassin’s Creed, and Oblivion. And these aren’t rarities. Just search YouTube for variations on “stupid AI” and you can keep yourself amused and horrified for hours.
The problem is, as AI programmers, we can generally instantly say “Oh, they should have simply done this…” And yet, as a whole we continue to ship product with pathetic exploits or autonomous behaviors such as the ones depicted here. Why is that? Sure, the combinatorial explosion of situational possibilities rivals the big bang, making it difficult to even account for everything, much less “solve” everything. And while those situations approach infinity, our funding and ship dates are usually far more restrictive.
Or is it something more? Who knows? All I know is that I don’t want to be on the receiving end of the derision of the gaming community… and in the age of instant mass media, it’s pretty simple to become famous.
Just in case anyone has any problems, I have routed the RSS feed for this blog through Feedburner. I think I did all I was supposed to in order to get it all to work correctly… and even forward correctly for those of you that have subscribed. If you are visiting because your feed was giving an error, however, please use the links on the top right to update the new feed address.
The reason I wanted to do this was to keep track of the subscriber numbers… that counter on the Feedburner chicklet should be jumping up in the next few days as everyone checks in!