Virtual Senses


Cognisant
7th-May-2014, 04:54 PM
Our visual array gives us a tremendous amount of data, and our brains have to spend a great amount of processing power interpreting it, because without interpretation this data is meaningless: no single photosensitive cell can tell directly what it's looking at, and only by collating the input from an array of such cells can a meaningful image be formed.

Artificial intelligences don't have nearly enough processing power to deal with visual input from the real world in a useful manner; to us they're either hopelessly blind and dumb, or too complex and expensive to do what a human could do better and cheaper anyway. In games this problem is solved by giving the AI virtual senses, which is why they can either see through foliage like it isn't there or can't see you if you're crouched in a shadow only a meter or so away from them. Instead of seeing your character, the game AI sees a point in 3D space, and only if certain prerequisite conditions are met: an unbroken line of sight, you're not crouching in a shadow, you're not using active camouflage, etc.
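
A minimal sketch of what such a check might look like in Python (every name here, bot.view_distance, world.raycast_clear and so on, is invented for illustration, not taken from any real engine):

    import math

    # Hypothetical bot "vision": no pixels are ever processed, just a few
    # cheap prerequisite checks against the player's known position.
    def bot_can_see(bot, player, world):
        # Out of range entirely (view_distance is an invented tuning knob).
        if math.dist(bot.pos, player.pos) > bot.view_distance:
            return False
        # Crouched in a dark spot: invisible regardless of geometry.
        if player.is_crouching and world.light_level_at(player.pos) < 0.2:
            return False
        # Active camouflage defeats the check outright.
        if player.has_active_camo:
            return False
        # Finally, an unbroken line of sight: a single ray cast, with
        # foliage typically left out of the blocking geometry.
        return world.raycast_clear(bot.pos, player.pos)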

The point is the game AI doesn't see anything, it just knows where you are; there's no sensory interpretation, it gets given the knowledge directly. This is why, in a virtual world, FPS bots can run around and act almost as intelligently as human players even though they possess only a minute fraction of a human's processing power. Now this same principle could be used to teach a learning AI abstract concepts without it requiring anywhere near as much processing power as the typical cognitive development research project does.

I'm still figuring out specifics...

Cognisant
7th-May-2014, 05:02 PM
Basically the idea is to reduce input to binary states. Imagine someone in a room with a bunch of LEDs on the wall; they know that if a certain one lights up they'll get a reward, because that's what happened every time it lit up in the past. So what they're trying to figure out is how to make that reward light activate by responding to input (the other lights) with output (a control panel in front of them).
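
As a sketch, the whole setup reduces to a loop over bit-vectors; the agent/environment interface below is invented purely for illustration:

    # The person in the room, abstractly: light bits in, button bits out,
    # with one distinguished bit playing the role of the reward light.
    def run_trial(agent, environment, steps):
        lights = environment.reset()          # tuple of 0/1 light states
        for _ in range(steps):
            buttons = agent.act(lights)       # tuple of 0/1 button presses
            lights, reward = environment.step(buttons)
            agent.observe(lights, reward)     # the reward is just another bit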

Let's say this person is in a maze with four directions they can go, like one of those old text-based games: North, South, East, and West. Now we can give each direction a light, and whether that light is on or off tells the person whether there is a wall preventing their progress in that direction. Of course the person doesn't immediately know this, but by experimentation they can deduce it; probably the first thing they'll realize is that if a certain light is lit up, pressing a certain button won't change the lights in any way. This is a simple association made by direct observation.
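
Here's a toy version of that maze, assuming a grid of strings with '#' for walls and a solid border so indexing never runs off the edge (all invented to match the description above):

    # Observation = four bits, one per direction, lit when a wall blocks it.
    MOVES = {"N": (0, -1), "S": (0, 1), "E": (1, 0), "W": (-1, 0)}
    ORDER = ("N", "S", "E", "W")

    class LightMaze:
        def __init__(self, grid, start, goal):
            self.grid, self.pos, self.goal = grid, start, goal

        def lights(self):
            x, y = self.pos
            bits = []
            for d in ORDER:
                dx, dy = MOVES[d]
                bits.append(int(self.grid[y + dy][x + dx] == "#"))
            return tuple(bits)

        def press(self, direction):           # returns (lights, reward_bit)
            dx, dy = MOVES[direction]
            x, y = self.pos[0] + dx, self.pos[1] + dy
            if self.grid[y][x] != "#":        # a blocked press changes nothing
                self.pos = (x, y)
            return self.lights(), int(self.pos == self.goal)

    # A tiny example layout: start at (1, 1), reward at (3, 2).
    maze = LightMaze(["#####",
                      "#   #",
                      "### #",
                      "#####"], start=(1, 1), goal=(3, 2))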

Cognisant
7th-May-2014, 05:10 PM
Eventually this person will figure out all four lights and stop running into walls. Indeed, with practice he may begin recognizing landmarks in the maze, such as a hallway that goes on for three button presses; of course he isn't aware of it as a hallway, because for all he knows the maze is 2D, but all the same he won't keep going down the same paths if he knows they don't lead to his prize.
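
A sketch of the learner side of that (the three-strikes threshold and the interface are invented; observe here takes the before/after light states rather than the reward bit from the earlier loop): tally which (lights, button) pairs never change anything, and stop pressing them.

    import random
    from collections import defaultdict

    class AssociationAgent:
        def __init__(self, buttons):
            self.buttons = buttons
            self.futile = defaultdict(int)    # (lights, button) -> no-effect count

        def act(self, lights):
            live = [b for b in self.buttons
                    if self.futile[(lights, b)] < 3]   # give up after 3 tries
            return random.choice(live or self.buttons)

        def observe(self, lights, button, new_lights):
            if new_lights == lights:          # nothing happened: a wall, probably
                self.futile[(lights, button)] += 1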

Cognisant
7th-May-2014, 05:16 PM
This principle of simple association can be built upon to make more complex associations. The lights could flash, and the rate at which they flash could tell him, if there's a hallway in front of him, how far it goes. In this way he could navigate wide open spaces without getting lost, because he'd be able to "see" his environment insofar as it's within his direct line of sight.
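
Reusing the LightMaze sketch from earlier, the flash rate can be approximated by just handing over an integer per direction: how many open cells lie before the nearest wall (again, purely illustrative):

    # A "flash rate" per direction, approximated as a ray-cast distance.
    def distances(maze):
        readings = {}
        for d, (dx, dy) in MOVES.items():
            x, y = maze.pos
            steps = 0
            while maze.grid[y + dy][x + dx] != "#":
                x, y = x + dx, y + dy
                steps += 1
            readings[d] = steps       # 0 means the wall light would be lit
        return readings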

We could also introduce more lights, such as ones that let him see diagonally; indeed, we could even give him a full top-down view of the entire maze. But if we gave him all these lights at once, or worse still all of them at the very beginning, it would be a lot harder for him to deduce what they mean; too much too fast might even leave him totally baffled.

Cognisant
7th-May-2014, 05:27 PM
But if we start with a few and slowly work our way up in complexity, we could teach him by simple association to read a vast number of lights. In the programming world this is called bootstrapping (not entirely sure why), but we have all experienced it ourselves as education; the fact is you're only able to interpret these letters as words and meaningful sentences right now because somebody taught you, bit by bit.
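
That suggests a curriculum loop, sketched below; run_episode (assumed to return 1 on success, 0 otherwise) and the thresholds are invented:

    # Bootstrapping as a curriculum: master a few lights before adding more.
    def curriculum(agent, stages, mastery=0.9, trials=200):
        for environment in stages:     # e.g. walls only, then walls + distances
            while True:
                wins = sum(run_episode(agent, environment) for _ in range(trials))
                if wins / trials >= mastery:
                    break              # reliably finding the reward; next stage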

Just as we were taught how to read words, our AI guy can be taught to interpret several lights at once. Perhaps we could teach him mathematics expressed in binary (it would not be easy), or to see a virtual 3D wall expressed as its dimensions and orientation relative to him. This is still a lot less data than we would have to interpret, and whereas our vision is a bit hit and miss (we wouldn't know the exact dimensions of the wall until we measured it), he would have perfect knowledge of it. In that way this kind of vision without interpretation is actually far superior to our own.

At least in a virtual world where absolute forms actually exist :rolleyes:
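
As a sketch, such a wall percept might just be a handful of exact numbers (the field names here are made up):

    from dataclasses import dataclass

    # "Vision without interpretation": the wall arrives as exact values,
    # with none of the measurement error a camera-based percept would have.
    @dataclass
    class WallPercept:
        width: float      # exact dimensions, no estimation needed
        height: float
        distance: float   # meters from the agent
        bearing: float    # radians, relative to the agent's facing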

Edit: I find the idea of an NPC of incredible grace and perfect vision dealing with a bumbling human avatar an appealing juxtaposition; it would be like a god coming to visit you as one of the Three Stooges.

Cognisant
7th-May-2014, 05:47 PM
The number of associations that need to be made before something is understood is what I call an intuitive leap: the more associations, the further the leap. An AI discovers which associations are valid, out of a potentially astronomical number of invalid ones, by trial and error (the core mechanism of deduction is guessing; someone who is good at deduction makes educated guesses), so the more processing power an AI has, the faster it should be able to learn and the larger the intuitive leaps it'll be able to make.
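
A rough way to see why leap size scales with processing power: a leap linking k associations drawn from, say, 100 candidate signals leaves on the order of C(100, k) hypotheses to test by trial and error, and that count explodes as k grows.

    from math import comb

    # Hypothesis count for a leap of k associations among 100 signals.
    for k in (1, 2, 3, 5):
        print(k, comb(100, k))    # 100, 4950, 161700, 75287520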

By running a simulation (in this case the simulation of intelligence) slower than real time, the amount of processing power you can give to it increases, assuming you have enough RAM to keep track of everything (and RAM is cheap). So it seems to me that the first super-intelligent AIs will be used to make large intuitive leaps, to see patterns where we do not, although I guess statistical analysis software already does that pretty well anyway.
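
The trade-off is straightforwardly arithmetical; a sketch:

    # Running the simulated mind at 1/slowdown of real time multiplies the
    # compute available per simulated second by the slowdown factor.
    def compute_per_sim_second(flops, slowdown):
        return flops * slowdown   # e.g. 1e12 FLOPS at 100x slowdown behaves
                                  # like 1e14 FLOPS of "real-time" thought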

Cognisant
7th-May-2014, 05:57 PM
Then again, statistical analysis software is only good if you already know what you're looking for. A meteorological supercomputer can tell you what conditions are likely to cause a storm, but it doesn't have the faintest clue why; a general AI, on the other hand, could deduce that, insofar as it has enough data and processing power to work with, or at the very least it could come up with some likely theories (but you'd need to teach it to communicate them).

This is especially interesting regarding molecular biology. Atoms are perfect forms, so it seems entirely possible that an AI could "watch" and so think about molecular interactions in a dimensionless way that is totally alien to us, making intuitive leaps that we're simply incapable of.

Molecular technology, how exciting! :D
The distinction between biology and machinery will entirely cease to exist as the sophistication of our technology rivals and eventually exceeds that of biology.