Post-Play'em - Observations on Game AI


Posts Tagged ‘vision’

Skyrim: To Headlook or not to Headlook?

Friday, December 30th, 2011

While playing my initial few hours of Skyrim this week, I’ve noticed an interesting disparity in the way headlook is used by characters. While I enjoy headlook in games for the most part, there are times when it can either be overused (“why is everyone in town staring at me all the time?”) or used in such a way that it looks unnatural to the character. Here’s a video with a few examples. Watch it first because the commentary addresses the scenes within.

 

Look at Me When I’m Petting You!

First, the dog in Riverwood doesn’t seem to care that I’m there. With nothing else going on in the middle of the night, you would think that he would at least be interested in me. Instead, he just sits there staring at nothing. While that is perfectly acceptable for a dog to do (ours do it rather often), the problem arose when I actually “talked” to the dog. He barked in response — but wasn’t looking at me at all when he did so. He looked around randomly — but not towards me.

If you are going to make it so that the dog responds to being addressed, at least make him stand up and face you or turn his head to see you (if possible). You do this with enough other agents in the world that it shouldn’t have been hard to include a line or two of code to do it with the dog. (I have yet to test this with other animals in the world.)

Look at Me When You’re Talking To Me!

When I entered some tavern or other in Whiterun, I was immediately spoken to by the owner of the place. She was standing behind the counter. However, I noticed that she was not looking at me — just staring straight ahead. Ironically, she only looked at me when the server lady spoke to me (and mentioned her name). Then, she wouldn’t stop looking at me at all. One thing I forgot to test was whether it was proximity based. That is, if I went farther across the room, would she have stopped looking at me? All in all, with that sort of incessant tracking, I feel like I’m being targeted by an anti-aircraft gun. At least with the bar there, she had a reason to continue facing that one direction.

Move More than Your Head

Now to this clown Heimskr in Whiterun. As seems to be his job, he runs his mouth out in the town square. When I walked up to him, he started tracking me. (Grateful to have an audience, I suppose.) While it looked OK at first, at one point he raised his hands up… right between my face and his. He was still trying to look at me, but his hands were in the way. As I moved back and forth (after stopping to grab some lavender), he kept his body facing forwards and his hands raised — but his head kept tracking me.

It looked horribly unnatural to have his body pinned to one orientation — especially in a gesture that is meant to be for the audience’s benefit — and yet still move his head so much. Unlike the bartender above, he has nothing holding him to that orientation. I would like to see some pivot to the body as he addresses me that way.

Face Me Once in a While

This one is a little rough. While I commend games for not having the agent always face directly at you, the key word there is “always”. Sure, orient your body differently as you move around! However, you aren’t being clever (or realistic) if you position your body so that it is facing away for some reason, point your head toward the player, and then leave it that way! In the video you will see 3 clips of people who happened to not be facing me directly when I started speaking to them. They didn’t orient towards me at all when they started speaking to me and didn’t fidget at all during the conversation. The result looked rather uncomfortable.

Not shown in the 3rd clip is that the speaker did reorient himself after we were jostled by someone walking rudely between us (more on that subject later). I thought that was interesting in that it triggered some sort of alignment code in the agent. Therefore, it seems that some such code exists, but just wasn’t being used.

How to Fix it

This really isn’t that difficult to fix. Not only would it look less uncomfortable, but it can be leveraged to make for far more engaging characters. All that needs to be done is the following:

  1. If the player moves outside of a particular front-facing zone, pivot the body as well as the head. You don’t need to pivot to face him completely, only enough to get the head into a more natural range. For example, if the zone is 60° on either side of center and the player moves out of it, pivot so that the player is now 30° off-center instead. Try it on yourself and see how natural that feels. (There is a rough sketch of this after the list.)
  2. As you are talking to the player, reorient yourself once in a while through various shuffling animations. Again, make it so that your body is always somewhere between +/- 30° off of the centerline to the player.
  3. Don’t always stare at the player. People don’t do this in conversation. Randomize headlook more. Look around occasionally — even while you are talking but then glance back at the player directly. The points where you glance back at the player can actually be marked up with the facial animation so that you glance at the same time as important points you are trying to make. Otherwise, look around either randomly (and not all in the horizontal plane, either… up and down are OK, too!) or at other points of interest in the room. For example, people walking by, the performer in the tavern, windows and doors in the room, etc.
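
To make points 1 and 2 concrete, here is a minimal sketch of the body-pivot rule, assuming a simple yaw-driven agent. The names, angles, and structure are my own illustration; this is not anything Skyrim actually does under the hood.

```cpp
#include <cmath>

struct Agent {
    float bodyYaw;   // world-space facing of the body, in degrees
};

// Signed angle from the agent's body facing to the player, wrapped to [-180, 180).
float OffsetToPlayer(const Agent& agent, float worldYawToPlayer) {
    return std::fmod(worldYawToPlayer - agent.bodyYaw + 540.0f, 360.0f) - 180.0f;
}

// Item 1: if the player drifts outside the comfortable head-look cone (+/- 60 degrees),
// pivot the body just far enough that the head only needs to turn about 30 degrees.
void UpdateBodyPivot(Agent& agent, float worldYawToPlayer) {
    const float kHeadLookLimit = 60.0f;  // beyond this, head-only tracking looks strained
    const float kRestAngle     = 30.0f;  // where the player should end up relative to the body

    float offset = OffsetToPlayer(agent, worldYawToPlayer);
    if (std::fabs(offset) > kHeadLookLimit) {
        // Rotate the body toward the player, leaving them kRestAngle off-center
        // so the pose still reads as natural rather than anti-aircraft tracking.
        float pivot = offset - std::copysign(kRestAngle, offset);
        agent.bodyYaw += pivot;
    }
}
```

In a real implementation the final `agent.bodyYaw += pivot` would drive a turn or shuffle animation rather than snapping the orientation, which is exactly the occasional reorientation called for in point 2.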

Anyway, these are just minor things that have jumped out at me in my initial few hours with the game. While they certainly aren’t bugs, headlook issues like these break immersion for many people when they aren’t handled naturally.

There are far more annoying things I have seen that will be addressed later.

Splinter Cell: Conviction — Last Known Position

Thursday, July 21st, 2011

I’ve been playing quite a bit more of Splinter Cell: Conviction. Thankfully, I’ve been finding the game a lot easier than I did early on. Part of it has to do with getting used to the flow of things in a more stealth-based game.

I had noted in my “First Look” post that the enemies do a decent job focusing on the “last known position” mechanic that the game is proud of. To explain, when you are sighted and then go back into cover, the game draws a black and white line art model of your position when you were last seen. This is meant to be a marker of where the enemy thinks you are. The encouraged play is to separate yourself from that point so that you can either hide elsewhere or flank and surprise them. This is a helpful reminder and is only made possible by the fact that the AI keeps a mental model of your position rather than using the old-fashioned “omniscient” method that games used to use.

Again, they do a good job of searching for you—whether at your actual location or your believed one. They will use cover, approach cautiously, and when they discover you aren’t really there, start to search in what looks like a meaningful way. This is where the problem occurs, however.


In some instances, when they find that you aren’t where they thought you were, there could be many directions or places to which you could have escaped. At other times, however, there are very few options—or even only one option—that you could have selected. However, the enemies will begin to search the entire area even when they should know that you are not in those places.

One example I saw that was glaring was when I had engaged enemies from a door to their room. I never actually entered the room but rather was firing from the doorway. Eventually, outgunned as I was, it was in my best interests to retreat into the room that I had come from. As I moved away, the line art silhouette marked me as still being crouched next to the door. I watched as they came up to the door to attack where they thought I was. Upon discovering that I wasn’t there, however, they exclaimed something or other about losing me… and began searching their own room! They had been watching the doorway the whole time and should have known that I had never crossed to their side of it. The logical conclusion should have been that I was on my side of the door. The result was that I was able to slowly re-assault their position because they were no longer wary of me in the direction of my approach. Good for my survival; bad for my suspension of disbelief.


The solution to this problem is to leverage “occupancy maps”. The concept has been around for a while but very much came to the game dev forefront when Damián Isla gave an excellent presentation on it at the GDC AI Summit in 2009. The idea is based on the concept of influence maps where the grid space values represent the probability that a target is there. Obviously, if you can see a location, then the target is not there. You then simply spread the probabilities over the areas that he could have gone. By selecting the highest grid value at any one time, you are, by definition, going to the most likely place that the target could be. The result is a spectacular mimicry of how people (and animals for that matter) search for something that disappeared from sight.
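
For the curious, here is a minimal sketch of the occupancy-map update, assuming a simple 2D grid and caller-supplied visibility and passability tests. The names and plumbing are mine, for illustration only; they are not Isla's code and certainly not anything from Conviction.

```cpp
#include <utility>
#include <vector>

struct OccupancyMap {
    int width, height;
    std::vector<float> p;   // probability that the target occupies each cell

    OccupancyMap(int w, int h) : width(w), height(h), p(w * h, 0.0f) {}
    float& At(int x, int y) { return p[y * width + x]; }
};

// One search tick. 'visible(x, y)' is true if the searcher can currently see the cell;
// 'passable(x, y)' is true if the target could move through it.
template <typename VisibleFn, typename PassableFn>
void UpdateOccupancy(OccupancyMap& map, VisibleFn visible, PassableFn passable) {
    // 1. Negative information: any cell I can see that the target is not in gets zeroed.
    for (int y = 0; y < map.height; ++y)
        for (int x = 0; x < map.width; ++x)
            if (visible(x, y)) map.At(x, y) = 0.0f;

    // 2. Diffuse the remaining probability into neighboring passable cells,
    //    modelling "he could have moved since I last saw him."
    std::vector<float> next(map.p.size(), 0.0f);
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    for (int y = 0; y < map.height; ++y)
        for (int x = 0; x < map.width; ++x) {
            float share = map.At(x, y) / 5.0f;   // keep one share, offer four to neighbors
            next[y * map.width + x] += share;
            for (int i = 0; i < 4; ++i) {
                int nx = x + dx[i], ny = y + dy[i];
                if (nx >= 0 && nx < map.width && ny >= 0 && ny < map.height && passable(nx, ny))
                    next[ny * map.width + nx] += share;
                else
                    next[y * map.width + x] += share;   // blocked: that share stays put
            }
        }

    // 3. Renormalize so the probabilities sum to 1 again after visible cells were cleared.
    float total = 0.0f;
    for (float v : next) total += v;
    if (total > 0.0f)
        for (float& v : next) v /= total;
    map.p = std::move(next);
}
```

The searching agent then simply walks toward the highest-valued cell. Because the cells it can see keep getting cleared and the remaining probability keeps flowing into unobserved areas, the resulting behavior reads as a methodical sweep of exactly the places the target could actually have reached, which in my doorway example above would be the room on my side of the door.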

This method is slightly more memory- and processor-intensive than simply having a shared (x,y,z) location for the last known position of the player. I honestly don’t know if they had resources to burn in the game. However, I have to wonder if they even thought of trying the approach. I wish they had, though. An otherwise well-done stealth-based game would have been that much better for their efforts.

Call of Duty 2: "I saw that grenade coming!"

Friday, November 30th, 2007

Another quick observation from my perusal of “Call of Duty 2” for Xbox 360. I’ve noticed rather often that, when I toss a grenade, the intended “recipients” seem to know that it is coming as soon as it leaves my hand. They will yell some German variant of “Holy crap, there is a grenade! Run, dude!” This is all well and good if they are seeing the grenade coming and would like to amscray prior to its arrival. However, the quirky behavior comes in when I am doing something like bouncing it off a wall around a corner. I will hear them shout prior to the offending pineapple even making it to the corner that I am trying to circumnavigate. In other words, they either saw the grenade through the wall, or there is some sort of mirror-like sheen on the wall that allows them to see it coming.

The programmers seem to be using a messaging architecture wherein an event triggers a reaction in all agents within range. For example, if the grenade were to explode in front of them, the explosion itself would send a message out to nearby agents saying “if in range, die violently”. Messaging architectures are opposed to polling architectures where the agent is constantly looking through his environment to see if there is something to react to. Using the same example, the agent would constantly have to scan the nearby world to see if a grenade has exploded. Since grenade explosions are relatively rare, this would be a lot of wasted CPU. As you can see, messaging architectures are more efficient for event based situations.
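
As a rough illustration of the messaging side (with entirely hypothetical names; this is not Call of Duty’s actual architecture), the explosion pushes a single message to the agents within range instead of every agent polling the world every frame:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

float Dist(const Vec3& a, const Vec3& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

struct Agent {
    Vec3 position;
    void OnGrenadeExplosion(const Vec3& at, float damageRadius) {
        if (Dist(position, at) < damageRadius) {
            // react appropriately: take damage, get knocked down, "die violently", etc.
        }
    }
};

// Event side: called exactly once, when the explosion actually happens.
// No agent ever has to scan the world asking "did a grenade go off near me?"
void BroadcastExplosion(std::vector<Agent>& agents, const Vec3& at,
                        float messageRadius, float damageRadius) {
    for (Agent& a : agents)
        if (Dist(a.position, at) < messageRadius)
            a.OnGrenadeExplosion(at, damageRadius);
}
```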

In this case, the landing area of the grenade is roughly calculated. If the projected landing spot is near an agent, they react appropriately. They yell out warnings and move away if necessary.

However, the programmers would have faced a quandary as to when the grenade itself should send a message. If you wait until it lands or bounces off of something, it would discount the very possible scenario that they could have seen it flying through the air in the first place. If it is triggered when it is thrown, it creates the issue I observed above – that they know it is there despite the fact that they should not be able to see it at all.

One solution (albeit not a very efficient one) would be line of sight checks along the path of the grenade. Once a grenade launch has happened – and a message dispatched to the agent – the agent would now have to do a line of sight (LOS) check. If the grenade is visible, all is well – react appropriately. If it is not visible, the agent would have to change techniques to a polling architecture specifically doing periodic LOS checks at the grenade. When you consider that there may be 10 or 20 agents (friend and foe) in the area, multiple grenades in the area, and each agent-grenade LOS check would need to be done many times per second, the computational overhead adds up very quickly.

Another potential workaround would be to create a plane that is the intersection point of the path of the grenade and the visible area of the agent. At that point, the “see grenade” event can be triggered once the grenade passes that point. That seems to be not much of an improvement since there would be as many planes as there are agents and the grenade would now have to poll the planes many times per second. The overhead would be similar to the first example.

There is another way of handling that, however. Once the plane is generated, the distance from the source to the intersection of the plane can be calculated. Since the velocity of the grenade is near constant, the time delay until the projectile reaches the plane can be established. The message can be sent with a delay of x number of frames (or any other way that the game loop timer is built) so that the message is delivered and activated at the point that the grenade would have come into sight. No polling is necessary at this point. Just like before, a single event message is sent. The only additional overhead would be creating the planes representing the lines of sight for the agents in the area. In fact, if the LOS check is successful at the beginning of the throw, the plane creation is not even necessary. If an agent is going to yell to his squadmates about the throw, you don’t need to do LOS checks for them at all – they already know. It would take some testing to find out how much overhead this eats up, but the result would be that you could do stealth/surprise grenades in a much more realistic fashion rather than the current “I saw it coming around the corner” effect.
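
Here is a sketch of that delayed-delivery idea, assuming a simple time-ordered message queue. The scheduler and names are mine for illustration; the game surely has its own equivalent.

```cpp
#include <functional>
#include <queue>
#include <utility>
#include <vector>

struct ScheduledMessage {
    float deliverAt;                  // game time, in seconds
    std::function<void()> deliver;
    bool operator>(const ScheduledMessage& other) const { return deliverAt > other.deliverAt; }
};

class MessageQueue {
    std::priority_queue<ScheduledMessage, std::vector<ScheduledMessage>,
                        std::greater<ScheduledMessage>> queue_;
public:
    void Post(float now, float delay, std::function<void()> fn) {
        queue_.push({now + delay, std::move(fn)});
    }
    void Update(float now) {          // called once per frame from the game loop
        while (!queue_.empty() && queue_.top().deliverAt <= now) {
            queue_.top().deliver();
            queue_.pop();
        }
    }
};

// At throw time: if the agent already sees the thrower, deliver immediately; otherwise
// delay by the time the grenade needs to travel to that agent's visibility plane.
void NotifyAgentOfThrow(MessageQueue& mq, float now, bool agentSeesThrower,
                        float distanceToVisibilityPlane, float grenadeSpeed,
                        std::function<void()> onSeeGrenade) {
    float delay = agentSeesThrower ? 0.0f : distanceToVisibilityPlane / grenadeSpeed;
    mq.Post(now, delay, std::move(onSeeGrenade));
}
```

The per-throw cost is one distance calculation and one queue insertion per agent, rather than per-frame line-of-sight polling for the life of the grenade.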

(More observations of “Call of Duty 2” for Xbox 360)

Call of Duty 2: Omniscience and Invulnerability

Wednesday, November 28th, 2007

I was playing Call of Duty 2 on Xbox 360 last night. I’ve already gone through the full campaign on the normal difficulty and am now about halfway through “Veteran” (“…you will not survive”).

Enemy Invulnerability:
“I can’t be bothered right now.”

I was on the level where your convoy through the town in Tunisia gets ambushed. I had to start it over and over because I was getting pelted by the guys on top of the walls. I finally got down where I should hide and who I should tag first. One thing kept bugging me. There was a string of guys appearing on one wall in order. Most of them were gunners shooting at me – but there was one that had a Panzerschreck. I would try to shoot him over and over and he wouldn’t die until he had fired at one of the tanks/trucks in my convoy and blown it up. In the meantime, another one would pop up and take me down. I realized that it was pointless to try to kill this dude at all. He wasn’t going to kill me and that tank/truck was going to blow up anyway. It was part of the level.

What annoys me, however, is that it took me a number of times before I realized that, until that guy got his shot off, he was invulnerable. As soon as he was done with his job (that I was trying to prevent) he was now able to be shot. I wasted valuable time and got very frustrated by the fact that the level designers had decided that this was such a required series of actions on the level that they would break the rules.

I have found other instances where I have tried to peg some dude that was threatening me and was dead in my sights only to find that he had some sort of mission that couldn’t be stopped. That really exposes the scripting in the game. I understand why the scripting is there – and, in general, it is very well done in the game. I very much love some of the actions that happen despite the fact that I can tell you the exact line I crossed in order to trigger the action – but usually they are too fast for me to react to or something that is meant to be just watched anyway. Don’t let me point at a guy and unload an entire clip into his spleen while he doesn’t seem to care that I am there.

To me, it seems like this is a case of the “anti-sandbox” concept. Sure, a somewhat linear game like CoD isn’t meant to be a sandbox – and there are certain things that have to happen to advance the plot. That’s fine, but I always feel cheated when I can’t change the series of events even if I do the right thing to disrupt it.

Enemy Omniscience:
“Am I wearing an orange hunting vest?”

The other thing that I noticed the other day is that, while I can sneak up on some people from the side once in a while – which is very satisfying – there are other times when someone will spin 45 degrees to point right at me despite the fact that I am peeking between some crates or barrels or sandbags. It is almost as if I just barely moved into a place where the ray-trace succeeded and told the enemy that I was now visible. This is fine if I walk around the corner or pop up behind something, but it seems odd when only 2% of my body is visible. The result is that it seems like the AI is omniscient (i.e. cheating) as to my whereabouts. In some respects it creates excitement, in others frustration. I know that they aren’t trying to create Thief or Splinter Cell. Stealth isn’t the focus of CoD. Still, there are times when hiding is a requirement of the game. Don’t cheat me out of those brief moments of respite.

A possible solution to this is to measure how much of me is visible and then combine that with a coefficient based on how far “off center” I am from their current direction of vision. Perhaps add another factor based on movement. I know that is a bit expensive to calculate for each AI (since their fields of view would all be different).
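
Something along these lines is what I have in mind; the names, weights, and thresholds are all made up for illustration:

```cpp
#include <algorithm>

struct DetectionInputs {
    float fractionVisible;   // 0..1, how much of the player's body is exposed
    float angleOffCenter;    // degrees between the AI's gaze and the player
    float playerSpeed;       // world units per second; movement draws the eye
};

// Returns a "noticeability" score to compare against a detection threshold.
float DetectionScore(const DetectionInputs& in, float fovHalfAngle) {
    // Peripheral falloff: full weight dead ahead, fading to zero at the edge of view.
    float angleWeight = std::max(0.0f, 1.0f - in.angleOffCenter / fovHalfAngle);
    // Movement bonus, capped so a sprinting player is at most twice as noticeable.
    float moveWeight = 1.0f + std::min(in.playerSpeed / 5.0f, 1.0f);
    return in.fractionVisible * angleWeight * moveWeight;
}

// Example: 2% visible, 40 degrees off-center with a 60 degree half-FOV, standing still:
//   0.02 * (1 - 40/60) * 1.0 = ~0.007, well under a threshold like 0.15, so not spotted.
```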

Another potential solution is to cast multiple rays to different parts of my body – perhaps shoulders, head and a couple of lower torso spots. If more than ONE is visible, then I can be seen. Not knowing exactly what mechanism they are using, it’s difficult to know how to improve upon it.
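
And a sketch of the multi-ray version, assuming a generic line-of-sight query supplied by the engine; it also naturally produces the visibility fraction used in the score above:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// 'losClear(from, to)' is whatever raycast the engine provides; it should return
// true if nothing solid blocks the segment between the two points.
template <typename LosFn>
bool CanSeePlayer(const Vec3& eye, const std::vector<Vec3>& bodyPoints,
                  LosFn losClear, float* fractionVisible = nullptr) {
    int hits = 0;
    for (const Vec3& p : bodyPoints)   // head, shoulders, hips, etc.
        if (losClear(eye, p))
            ++hits;
    if (fractionVisible)
        *fractionVisible = bodyPoints.empty() ? 0.0f
                         : static_cast<float>(hits) / bodyPoints.size();
    return hits > 1;   // a single grazing ray isn't enough to "spot" the player
}
```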

This is even more alarming when it is obvious that there is a bias towards firing at ME. I may have 10 squad members all hidden behind objects and taking pot shots at the enemy – or even running around in the open, but damn it if the AI doesn’t want to fire at me instead. It doesn’t happen all the time, but it is obviously biased more towards me than my squad-mates. This is an obvious design decision to make it more exciting. I understand that. However, when combined with the omniscience above, it’s kinda creepy.

[note: I've realized that waiting to write one complete writeup on a game is sometimes prohibitive - so I will write as I think of things... and tag them by game so you can find all relevant stuff]
