IA on AI

Art vs. AI: Why Artists Have It Easier

The topic of rendering vs. AI came up in a discussion in the AI Game Programmers Guild. A point made in that discussion was that rendering programmers have it easier: they know what it is they are trying to accomplish, and they have a pretty good handle on the inner workings of what goes on inside the computer to produce it. We, as AI programmers, do not have that luxury, since we don’t really have a guide in that respect.

This reminded me of how I had addressed it in the opening chapter of my book. Rather than bludgeon the AIGPG list with this, I figured I would paste it here in the open for everyone to see. What follows is an excerpt from Chapter 1 of Behavioral Mathematics for Game AI.

Going Beyond Looks

For purposes of full disclosure, I have to admit that I have very little talent when it comes to art. I understand some of the concepts – I can draw perspective of blocky objects using a vanishing point, for example. I can even copy an existing drawing to some extent – to this day, I have a fantastic painting of a pig that I made in 7th grade that was copied from a picture in a magazine. However, that about exhausts my talent for the medium.

Looking like a Pig

Despite my dearth of ability in that particular discipline, I would still feel secure in claiming that artists in the game industry have life a bit easier than do AI programmers. After all, they can see what it is that they are supposed to be accomplishing. Before they begin to draw a pig, they can look at a pig first. They can make changes to parts of their pig that are less than accurate – in effect, fine-tuning their pig-replication abilities. They can show their picture of a pig to anyone who wanders by and ask, “What does this look like?” Unless the artist subscribes to a more Picasso-esque approach, the reaction should be “Hey! It’s a pig!” (Unfortunately, my art teacher didn’t buy my claim of being a disciple of Picasso.) People know what a pig looks like. A pig can be rigidly defined as a collection of expected parts. For example, everyone knows that a pig has four feet. If your pig has five feet, one of which is located in front of an inexplicably present dorsal fin, viewers will be mildly disturbed.

Artists are also comfortable with recreating the environment. A light on one side of an object makes a corresponding shadow on the other. We’ve all seen it; we all know that it would be incorrect otherwise. The ability to perform observation and criticism doesn’t simply lead to the realization that an error needs to be addressed – it often leads to the solution itself: “Make the floor darker on the side of the object opposite the light.” Even as a non-artist who lacks the ability to fix it properly myself, I can often suggest the solution.

From a technical standpoint, the solutions are often fairly intuitive as well. For example, to make the color of the blue floor darker in the shadows, use a darker color of blue. To make the buildings in the distance look smaller, draw them smaller. Truly, the models of how to accomplish many of the core features of art have been somewhat solved for hundreds of years.

Acting like a Pig

In contrast, game AI provides challenges in a number of respects. First, we can’t just show our AI-enabled pig to someone and ask, “Does this act like a pig?” The answer can’t be rattled off as easily as one regarding the look of a pig. Certainly, there are some obvious failure scenarios, such as said pig flying about the room in proverbially unlikely fashion. That should tip some of us off that something is amiss. However, it is far more difficult to state for certain, while watching Mr. Swine’s behavior unfold on-screen, that it is, indeed, acting the way a proper pig should. There is a layer of abstraction involved that does not translate easily.

With the artwork, we see real life and then we see the representation of it. There is an implicit equality there. If what we see in real life doesn’t match what we see in the representation, we can determine that it is wrong. Even if equality is not reached, we are cognizant of a point of diminishing returns. We are accepting of a representation that is “pretty darn close” to what we see in reality.

When watching behavior, however, we have to pass our understanding of that behavior through a filter of judgment. “Is that the correct thing for the pig to do?” In order to answer this question of equality, we would have to have an established belief of what the real-life pig would have been doing in the first place. While we can give generalized answers to that question, none of us can state for certain that every pig will act in a certain way every time that a situation occurs.

Moving beyond pigs to behaviorally more complicated life forms (such as humans – although there may be some exceptions), the solution set gets significantly larger. As that happens, our confidence in what we believe the entity in question “should be” doing slips dramatically. While we may be more comfortable in thinking of possibilities of human behavior than those of a pig (seeing that we are, indeed, human), the very fact that there are so many subtle shades of those behaviors makes it statistically less likely that any one of them is the “right thing” to be doing.

Just as our ability to define what they should be doing wanes, we are ever more hard-pressed to judge an artificial representation of an entity’s behavior. In Tic-Tac-Toe, it was obvious when the opponent was playing right or wrong – the ramifications were immediately apparent. In Poker, even looking over a player’s shoulder at his cards, it is often difficult to judge what his behavior “should be.” The combination of the possibility space of the game with the range of thought processes of different players makes for a staggering array of choices. The best we can come up with is “that may be a decent choice, but this is what I would do if I were him.” And that statement itself needs to be taken with a grain of salt, since we may not be taking the correct – or more correct – approach ourselves.

Making Pigs Act like Pigs

What this means is that AI programmers have it tough. Unlike the artist who can see his subject and gauge the relative accuracy of his work against it, AI programmers don’t necessarily know where they are headed. Certainly, we can have ideas and wishes and goals – especially in the short run. (“I want my pig to eat at this trough.”) We are also well aware that those can tend to backfire on us. (“Why is my stupid pig eating at that trough when it is on fire?”) However, as the complexity of our world grows, we have to realize that there may not be a goal of perfection such as the goal of photo-realism in art. Behavior is too vague and ephemeral to explain fully… and therefore impossible to mimic with complete accuracy. Often the best we can do is to embrace methods that give us a good shot at coming close to something that looks reasonable.
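The trough example above can be sketched in a few lines of code. Everything here – the `Trough` class, the field names, the action strings – is hypothetical, invented purely for illustration; it is not code from the book:

```python
# A minimal sketch of the trough example. The naive version pursues
# the short-run goal alone; the wiser version checks context first.

class Trough:
    def __init__(self, has_food=True, on_fire=False):
        self.has_food = has_food
        self.on_fire = on_fire

def naive_pig_action(trough):
    # The short-run goal: "I want my pig to eat at this trough."
    if trough.has_food:
        return "eat"
    return "wander"

def wiser_pig_action(trough):
    # The backfire case: the goal alone ignores context, such as the
    # trough being on fire, so that check has to come first.
    if trough.on_fire:
        return "flee"
    if trough.has_food:
        return "eat"
    return "wander"
```

The naive version happily returns `"eat"` even when the trough is burning – exactly the stupid pig above. Of course, every new context check added by hand is another rule to maintain, which is part of why rigid rules don’t scale.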

But how do we do that without going the route of the complete randomness of the Rock-Paper-Scissors player, the monotonous predictability of the Tic-Tac-Toe opponent, or the rigid mindlessness of the rule-bound Blackjack dealer? Somehow we have to be able to create the mind of the Poker player. We have to approach the game from inside that Poker player’s psyche.

We have to endow that soul with the ability to perceive the world in terms of relevant, not relevant, interesting, dangerous. We have to give him a way to conceptualize more than just “right or wrong” – rather, shades of differentiation. Better. Worse. Not quite as bad as. We have to create for him a translation mechanism to our own thought processes. And it is this last part that is the most difficult… in order to do that, we have to do so in a language that both of us can understand – yet one that is robust enough to convey all that we perceive and ponder. And that language is thankfully the one that computers know best – in fact, the only one they know – that of mathematics.
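Those shades of differentiation can be sketched numerically. One minimal way to do it – with the scoring formula, the weight on danger, and the option names all invented for illustration, not taken from the book – is to score each option on a continuous scale instead of labeling it right or wrong:

```python
# "Shades of differentiation": score options continuously so they can
# be better, worse, or "not quite as bad as" one another, rather than
# simply right or wrong. The 2.0 weight on danger is arbitrary.

def score_option(danger, reward):
    # Reward counts for an option; danger counts against it, twice over.
    return reward - 2.0 * danger

options = {
    "eat at the near trough":    score_option(danger=0.0, reward=1.0),
    "eat at the burning trough": score_option(danger=1.0, reward=1.0),
    "wander":                    score_option(danger=0.0, reward=0.1),
}

best = max(options, key=options.get)
```

Note that the burning trough isn’t flagged as forbidden by any rule; its score simply sinks below even aimless wandering, which is the whole point of trading “right or wrong” for “better or worse.”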

The trick is, how do we convert behavior into mathematics? It isn’t quite as simple as the WYSIWYG model of the artists. (“Make the blue floor in the shadow darker by making the blue darker.”) There is no simple translation from behaviors into numbers and formulas. (For what it’s worth, I already checked Altavista’s™ translation tool, Babel Fish. No help there!) Certainly, researchers have been taking notes, conducting surveys, and accumulating data for ages. But that doesn’t help us to model some of the behaviors that we as game developers find necessary – behaviors so second-nature to us that no researcher has bothered to measure them.

So we are on our own… left to our own devices, as it were. The best we can do is to observe the world around us, make our own notes and conduct our own surveys. Then, using tools to make expression, calculation and selection simpler, we can attempt to create our own interface into our hat-wearing Poker player so that he can proceed to make “interesting choices” as our proxy.
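One such selection tool can be sketched as a weighted-random pick over scored choices, so our proxy is neither the monotonously predictable Tic-Tac-Toe opponent (always the top score) nor the purely random Rock-Paper-Scissors player. The choice names and scores below are invented; this is one plausible approach, not the book’s prescribed method:

```python
import random

def weighted_choice(scored_choices, rng=random):
    # Pick a choice with probability proportional to its score,
    # ignoring choices whose score is zero or negative.
    viable = {c: s for c, s in scored_choices.items() if s > 0}
    roll = rng.random() * sum(viable.values())
    for choice, score in viable.items():
        roll -= score
        if roll <= 0:
            return choice
    return choice  # guard against floating-point rounding

# Hypothetical Poker-flavored choices and scores.
choices = {"bet": 5.0, "call": 3.0, "fold": 2.0, "go on tilt": -1.0}
picked = weighted_choice(choices)
```

Run it repeatedly and the player bets about half the time, calls less often, and folds occasionally – varied enough to be interesting, yet still weighted toward what the scores say is reasonable.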

It is my hope that, through this book, I will be able to suggest to you how to observe the world and construct those mathematical models and interfaces for decision making. Will your AI pigs become model swine in all areas of their lives? I don’t know. That’s not entirely up to me. However, that result is hopefully going to be far better than if I were to attempt to paint one for you.



Content 2002-2010 by Intrinsic Algorithm L.L.C.