IA on AI

Posts Tagged ‘Turing’

AI researchers think Rascals can pass Turing test

Sunday, March 16th, 2008

According to this article at EETimes.com, an AI research group at Rensselaer Polytechnic Institute believes it is in the process of creating an AI entity that will finally be able to pass the legendary Turing Test. Their target date is this fall. However, they need the world’s fastest supercomputer, IBM’s Blue Gene, to get the necessary real-time performance. Interestingly, they are partnering with a multimedia group that is designing a holodeck… yes, as in Star Trek.

“We are building a knowledge base that corresponds to all of the relevant background for our synthetic character–where he went to school, what his family is like, and so on,” said Selmer Bringsjord, head of Rensselaer’s Cognitive Science Department and leader of the research project.

In order to come up with the complete personality and history, they are taking a novel approach: one of Bringsjord’s graduate students is providing his own life as the model, and they are in the process of putting all that data into their knowledge base. Facts, figures, family trivia and even personal beliefs from the student will make up the synthetic character.

“This synthetic person based on our mathematical theory will carry on a conversation about himself, including his own mental states and the mental states of others,” said Bringsjord.

However, before you game AI programmers get all excited about this as some sort of potential middleware product…

“Our artificial intelligence algorithm is now making this possible, but we need a supercomputer to get real-time performance.”

It looks like they are doing more than just facts and figures on the project, however. They are going to great lengths to add psychology and even a form of empathy (my word, not theirs).

The key to the realism of RPI’s synthetic characters, according to Bringsjord, is that RPI is modeling the mental states of others–in particular, one’s beliefs about others’ mental states. “Our synthetic characters have correlates of the mental states experienced by all humans,” said Bringsjord. “That’s how we plan to pass this limited version of the Turing test.”

What separates this from standard, observable facts is “second-order beliefs”. To model those, you have to be able to get outside of your own collection of perceptions, memories and beliefs and into the minds of others. In a demo they put together in Second Life (which I will not bother embedding here since it is unexplained and boring), they show that they have been working on first-order and second-order beliefs.

An example: if something changes after a person leaves the room, you observe the change but they don’t. You must know that the absent person has no knowledge of that change even though you do; therefore, their belief is that nothing has changed. You have to be able to look at the world through their eyes… not just in the present tense, but by replaying the recent history and knowing that they would have no knowledge of the change that occurred.
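The room example above can be sketched in a few lines of code. This is just a toy illustration of first- and second-order belief tracking under my own assumptions — the class names and the dictionary-of-dictionaries representation are invented here, and this is in no way RPI's actual system:

```python
class Agent:
    def __init__(self, name):
        self.name = name
        self.present = True
        self.beliefs = {}         # first-order: what I believe about the world
        self.beliefs_about = {}   # second-order: what I believe *they* believe

    def observe(self, fact, value, others):
        self.beliefs[fact] = value
        for other in others:
            if other.present:
                # they were in the room, so I know they saw it too
                self.beliefs_about.setdefault(other.name, {})[fact] = value
            # if they were absent, my model of their belief stays stale

def change_world(world, fact, value, agents):
    """Apply a change to the world; only present agents observe it."""
    world[fact] = value
    for agent in agents:
        if agent.present:
            agent.observe(fact, value, [a for a in agents if a is not agent])

world = {}
alice, bob = Agent("Alice"), Agent("Bob")
change_world(world, "key_location", "drawer", [alice, bob])  # both see it
bob.present = False                                          # Bob leaves
change_world(world, "key_location", "shelf", [alice, bob])   # only Alice sees

print(alice.beliefs["key_location"])               # shelf
print(alice.beliefs_about["Bob"]["key_location"])  # drawer (second-order)
```

The interesting part is the last line: Alice knows where the key really is, but she also correctly models that Bob still believes the stale fact — exactly the "look at the world through their eyes" trick described above.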

Hell… as Soren Johnson pointed out in his GDC lecture, we can’t even afford to do individual “fog of war” for 8 or 10 computer-controlled enemies. At least not on a typical PC. Imagine trying to keep all that activity straight in a running buffer of some sort… for everything in the environment. *sigh*
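To put the cost Johnson is talking about in concrete terms, here is a minimal sketch of per-agent fog of war — each AI keeps its own "last seen" snapshot of the map instead of cheating by reading the true world state. The grid representation and names are my own invention for illustration:

```python
# True world state: a tiny 3x3 map with an enemy "E" at the center.
TRUE_MAP = [
    [".", ".", "."],
    [".", "E", "."],
    [".", ".", "."],
]

class FogAgent:
    def __init__(self, rows, cols):
        # None = never seen; otherwise the tile as it looked when last seen
        self.memory = [[None] * cols for _ in range(rows)]

    def update(self, true_map, visible):
        """Copy only the currently visible tiles into this agent's memory."""
        for (r, c) in visible:
            self.memory[r][c] = true_map[r][c]

agent = FogAgent(3, 3)
agent.update(TRUE_MAP, {(0, 0), (1, 1)})  # agent can see two tiles
TRUE_MAP[1][1] = "."                      # enemy moves away, unobserved
print(agent.memory[1][1])                 # E  (the agent's belief is stale)
```

Even this toy version makes the expense obvious: the memory cost is O(agents × tiles), and a real game would also need timestamps or a history buffer per agent — which is exactly why 8 or 10 individual fogs of war blow the budget on a typical PC.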

I keep having to go back to what Ray Kurzweil was predicting at his GDC keynote… that there is still exponential growth in technological capability. Given his figures, putting this sort of depth in a computer game will definitely happen in my lifetime – and perhaps in my career. Now that will be scary.
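The back-of-envelope math on that prediction is simple enough to write down. Assuming (and these numbers are my assumptions, not Kurzweil's exact figures) a roughly 1000x gap between a Blue Gene-class machine and a gaming PC, and a Moore's-law-style doubling every 18 months:

```python
import math

gap = 1000.0           # supercomputer vs. desktop performance, hypothetical
doubling_years = 1.5   # assumed doubling period for commodity hardware

# Number of doublings needed to close the gap, times years per doubling.
years = math.log2(gap) * doubling_years
print(round(years, 1))  # 14.9
```

So even a three-orders-of-magnitude gap closes in about fifteen years of exponential growth — which is why "in my lifetime, and perhaps in my career" is not a crazy claim.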

3 very different AIs in new products

Sunday, December 9th, 2007

On the blog neuRAI, a relatively new blog similar to this one and my sister blog Post-Play’em, there is an article discussing the differences in the AIs of the games Portal, Assassin’s Creed and BioShock.

The author makes some great points. One is about the wall that developers seem to run up against: we can make for some great “live” behavior that looks new and fresh – until we run out of assets (via space on the CD or dollars in the budget) and we start to repeat those fresh behaviors. At that point, the facade is exposed and things start to get stale. It goes to show that, at least with today’s technological limitations, an AI will always fail the Turing Test as long as there is no time limit on the test. Until we can break away from entirely designer-constructed content, our AIs will eventually expose themselves.

The second point is something actually born of the past, when we couldn’t stuff a lot of fresh content into our agents – we faked it. He points to a technique that has been used over and over: let the player’s mind be the best brush for coloring in the AI. There is a lot of power in that. However, one caveat is that we can’t necessarily tell what the player is going to be thinking. Sometimes this is good and sometimes it can make for disappointment. Still, it fleshes out the perception of our AI without a lot of effort.

The solution seems to be a constant balancing act between two extremes: what really needs to be modeled in great detail, and what can we fake entirely? Aahh, such is the quandary.

Anyway, I enjoyed the reading and personally plan to keep checking on neuRAI. Good work so far!

Has Captcha been broken?

Friday, November 30th, 2007

I noticed this post at the site Coding Horror. It’s mostly a site for programmers griping about each other, but this one may be of general interest as well as something for the AI community.

If you have been on the web at all, you have probably had to type in some variant of a Captcha security code developed by the whizzes at Carnegie Mellon. (Make a comment on this post if you want to see one in action.) The whole point of it is to defeat programs that read a security code out of the actual HTML and enter it automatically. They are also designed to defeat programs that would take an image of the page (or the Captcha sequence itself) and use OCR (optical character recognition) techniques to detect what characters are being used.
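The server-side half of that trick is worth spelling out: the challenge text never appears in the HTML at all — the page only gets a distorted image of it, and the server keeps just enough to check the answer. A minimal sketch of that flow, with entirely hypothetical function names (real Captcha services obviously do far more, including the image distortion itself):

```python
import hashlib
import secrets
import string

def new_challenge():
    """Generate a challenge code; only a rendered image of it goes to the page."""
    code = "".join(secrets.choice(string.ascii_uppercase) for _ in range(6))
    # The server stores only a hash, so the answer is never in the HTML.
    token = hashlib.sha256(code.encode()).hexdigest()
    return code, token

def verify(user_input, token):
    """Check the user's typed answer against the stored hash."""
    return hashlib.sha256(user_input.upper().encode()).hexdigest() == token

code, token = new_challenge()
print(verify(code, token))      # True
print(verify("WRONG1", token))  # False (well, almost certainly)
```

A scraping bot that only reads the page source sees the token, not the code — which is why the attack has to fall back on OCR against the image, exactly the arms race described above.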

The post at Coding Horror refers to an article from the Wall Street Journal about how Ticketmaster is having problems with bots and scalpers. The problem seems to be that the variant of Captcha that Ticketmaster is using is just not good enough. The solutions for theirs, and many others’, are being sold by companies. However, Captcha algorithms such as those used by Google, Hotmail and Yahoo are “unbreakable”. (Notice that no financial sites are listed?!?)

Anyway, this is an interesting read on how humans can still defeat computers at some tasks. I suppose Alan Turing would be smirking at the attempts.