
Artificial intelligence sharing thread and recent developments

sushi · Active Member · #1
I am currently doing some research on AI, and I don't see many applications of AI technology despite the hype.

The current applications I see most are medical care and finance, which require a lot of computing power.

The other applications are personal assistants like Jarvis in Iron Man, customized advertising, the gaming industry, and military technology and space exploration, where human beings can't reach.

Robots will probably take over a lot of jobs and manufacturing, but the technology still doesn't live up to the hype that some sources like to portray.

If AI is separated from humans, then it will just be a more advanced form of computing and networking. If it is a machine-human merger, then perhaps there will be much more potential.
 
Animekitty · #2
Nvidia made a 2-petaflop computer the size of half a refrigerator. I saw the keynote on YouTube yesterday. That's 15 zeros' worth of computation.

I think that any A.I. development that takes place will use huge data sets and search for the best configurations. Even A.I. that solves problems independently will benefit from practicing problem-solving millions of times within weeks. A problem generator could be built to train it.
 

sushi · Active Member · #3
My perspective is that AI is just a more advanced form of computing, and how to integrate it into the economy and new business models is very much uncertain. It is hard to know what AI can specialize in and do.

If it is not used for human augmentation, then there is not much application for it. But I am not yet well versed enough in the subject to make such a definite conclusion.

I think that any A.I. development that takes place will use huge data sets and search for the best configurations. Even A.I. that solves problems independently will benefit from practicing problem-solving millions of times within weeks. A problem generator could be built to train it.
Maybe curing human problems? But that itself is already a very broad spectrum; the most narrowed-down basics are still in medical care and health. Furthermore, there are so many useless data sets on the internet.
 
Serac · #4
One interesting application is poker AIs; see e.g. here: http://www.businessinsider.com/libratus-poker-bot-winning-at-texas-holdem-2017-1

I've spent the last year or so writing something similar, using the same techniques as the researchers. It's pretty much done now, and it seems to work well. I have not been able to beat it myself.

It's mostly based on reinforcement learning, but not in the classical sense where a program just searches the space of possibilities and picks the best ones. In this scenario you have an opponent who can exploit your plays, so instead of maximizing the reward you have to play close to a so-called Nash equilibrium. This is essentially solved by having two (or more) of these algorithms trying to maximize reward against each other. The final result is then typically close to a Nash equilibrium.

In theory, though, one wouldn't need any learning algorithms for this problem at all. With infinite computational power and storage space, you could solve it using plain linear optimization. That remains infeasible, however, and probably always will.
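The two-algorithms-against-each-other idea can be sketched in a few lines. Below is a hedged toy (not the poster's actual bot): two regret-matching learners play rock-paper-scissors in self-play, and their *average* strategies drift toward the Nash equilibrium of (1/3, 1/3, 1/3). All names and constants are invented for illustration.

```python
import random

# Toy self-play with regret matching on rock-paper-scissors.
# PAYOFF[a][b] is the reward for playing action a against action b.
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]
ACTIONS = 3

def strategy(regrets):
    # play in proportion to positive regret; uniform if there is none
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def self_play(iterations=20000, seed=0):
    rng = random.Random(seed)
    regrets = [[0.0] * ACTIONS for _ in range(2)]
    sums = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strats = [strategy(r) for r in regrets]
        moves = [rng.choices(range(ACTIONS), weights=s)[0] for s in strats]
        for p in range(2):
            opp = moves[1 - p]
            got = PAYOFF[moves[p]][opp]
            for a in range(ACTIONS):
                regrets[p][a] += PAYOFF[a][opp] - got  # regret for not playing a
                sums[p][a] += strats[p][a]
    # the time-averaged strategies approximate the equilibrium
    return [[s / iterations for s in player] for player in sums]

avg = self_play()
```

The current strategies themselves keep cycling; it is the running average that settles near uniform, which is why the "final result is close to a Nash equilibrium" only in that averaged sense.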
 
#5
I am currently doing some research on AI, and I don't see many applications of AI technology despite the hype.

The current applications I see most are medical care and finance, which require a lot of computing power.

The other applications are personal assistants like Jarvis in Iron Man, customized advertising, the gaming industry, and military technology and space exploration, where human beings can't reach.

Robots will probably take over a lot of jobs and manufacturing, but the technology still doesn't live up to the hype that some sources like to portray.

If AI is separated from humans, then it will just be a more advanced form of computing and networking. If it is a machine-human merger, then perhaps there will be much more potential.
In all honesty mate, so much of this is just elaborate curve fitting. It isn't easy to find things that actually try to go beyond that. Most of the standard algorithms just take a hypersurface and make small (possibly large or even erratic) changes to the coefficients. There are nonlinear classifiers, but they're still just curve fitting. Even if it's gnarly curve fitting, which it certainly is, it really isn't more profound than that. There are some very well known attempts at general AI, or at least something more sophisticated than what you'll find in standard ML sources, but they are usually very closed-source for obvious reasons... not to mention you probably won't have a machine powerful enough to run it, but you could possibly rent some time on a mainframe or, if you're an academic, just use a terminal on campus.
 

Ex-User (8886) · Well-Known Member · #6
I am amazed that you can't find any... I have a lot of my own concepts, but why should I share if you don't?

To be honest, AI will replace humans at everything, given some time. Merging human and computer with an interface? Not worth it; computers will be better, and it would be a waste of their potential. Such an interface can be good only for communicating results to humans. In the book Blindsight, there was a human with half a brain and half a computer, whose job was to tell people what the machines discovered in an understandable way.
 

Cognisant · Condescending Bastard · #7
[image: the 20Q handheld toy]
For those who don't know, it was a short-lived fad: a little device that would play the game "Twenty Questions" with you, and as long as you answered "yes" or "no" truthfully it could usually guess the thing you were thinking of, as long as it wasn't too specific. For example it could guess "toothpaste" but not "mint flavored toothpaste", which is still pretty impressive for a palm-sized ball that only needed twenty yes/no answers to practically read your mind.
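The arithmetic behind that trick: each truthful yes/no answer can halve the candidate pool, so twenty questions can in principle separate 2**20 (about a million) things. A toy bisection over a sorted index makes this concrete; it is only a stand-in for the real device, not how 20Q actually worked internally.

```python
# Binary search over a sorted index as a stand-in for twenty questions:
# each yes/no answer halves the remaining candidates.
def twenty_questions(secret, n_items=2 ** 20):
    lo, hi = 0, n_items - 1
    asked = 0
    while lo < hi:
        mid = (lo + hi) // 2
        asked += 1
        if secret <= mid:  # truthful "yes"
            hi = mid
        else:              # truthful "no"
            lo = mid + 1
    return lo, asked

found, asked = twenty_questions(123456)
```

Starting from 2**20 candidates, the loop always pins down the answer in exactly 20 questions.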
[image: gaming decision flowchart]
These work great when you've got an "expert" to create your system for you, but that's not really intelligent by itself; it's just a static system that returns a response based upon the information you give it. But there are two things worth noting about expert systems: they're incredibly fast (because what they do is incredibly simple), and expert systems recognize things by breaking them down into their constituent properties.

It's possible to create an emergent expert system if you take data and subject it to statistical analysis; this is essentially what artificial neural nets (ANNs) performing pattern recognition and image analysis (essentially the same thing) are doing. If you show an ANN a series of images, some containing a red wall and some that don't, it's possible to train the ANN to correctly identify images containing a red wall, even if it's a new image the ANN has never seen before.

This is done by obtaining properties and sorting them by relevance based upon some inherent bias. For example, when the ANN views an image there may be a script that scans the image for the color red and returns a Boolean state to the ANN: yes the image contains red, or no it doesn't. Obviously images containing a red wall are always going to have the "yes" state, which makes this state highly relevant to the goal of identifying images containing a red wall. But the ANN doesn't know this yet; it has to pick properties at random and test them against its bias, this testing being a sort of Darwinian process. There's a limited number of slots for properties (think musical chairs), and these properties compete for these slots by how often they coincide with a positive or negative result.

Think of it like trying to determine the stickiness of shit by throwing it at a wall: if I have some alpaca shit and it sticks more often than not, then we know alpaca shit is sticky, unlike bull shit, which falls off more often than not. That "more often than not" bit is very important. The shit is the property and its stickiness is the bias: if alpaca shit always sticks to red walls (assume I'm color blind) then I can use alpaca shit to identify red walls, and if bull shit always falls off red walls then I can use bull shit to identify what isn't a red wall. Dog shit is unreliable; sometimes it sticks and sometimes it doesn't, which makes it useless for identifying what is or isn't a red wall, so I'm going to discard it and try monkey shit instead.

My first attempt at explaining this didn't involve shit, it was really boring.
If you're wondering why the color changes what shit will stick to a wall, that's irrelevant just accept it as truth.

The point is that the stickiness or non-stickiness of the shit determines the relevance of that property when trying to recognize whatever the bias is trying to get us to recognize, in this case a red wall. The more relevant a property is, the more "fit" it is by the Darwinian metaphor, whereas the less "fit" properties are discarded in favor of trying new ones. If all the properties are highly relevant the ANN might not discard any; if none of them are particularly relevant the ANN might discard most of them as it searches for other properties that are more bias-relevant.
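The musical-chairs competition can be sketched numerically. This is a hedged illustration with invented property names ("strong", "weak", "noise"): each boolean property is scored by how far its agreement with the label deviates from chance, and only the top scorers keep their slots.

```python
import random

# Score a boolean property by how far its agreement with the labels
# deviates from 50/50 chance. 0.0 means "unreliable dog shit".
def relevance(prop_values, labels):
    agree = sum(p == l for p, l in zip(prop_values, labels))
    return abs(agree / len(labels) - 0.5)

rng = random.Random(1)
labels = [rng.random() < 0.5 for _ in range(500)]

# one property strongly tied to the label, one weakly tied, one pure chance
strong = [l if rng.random() < 0.95 else not l for l in labels]
weak = [l if rng.random() < 0.6 else not l for l in labels]
noise = [rng.random() < 0.5 for _ in range(500)]

scores = {name: relevance(vals, labels)
          for name, vals in [("strong", strong), ("weak", weak), ("noise", noise)]}
kept = sorted(scores, key=scores.get, reverse=True)[:2]  # limited slots
```

The "strong" property wins a slot because it coincides with the label far more often than chance; the coin-flip property scores near zero and gets discarded.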

In fact what I've just described isn't a whole neural net, more like one neuron and its synapses. The synapses are the properties, and whether or not the combined "weight" of these properties exceeds the activation threshold of the neuron is then a property for the next level up: the synapses of other neurons that connect to the first neuron's axon.
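That neuron-with-synapses picture reduces to a few lines; the weights and threshold below are arbitrary illustrative numbers, not from any real network.

```python
# One neuron: boolean properties arrive on weighted synapses and the
# neuron "fires" when the combined weight crosses its activation threshold.
def neuron(properties, weights, threshold):
    total = sum(w for active, w in zip(properties, weights) if active)
    return total >= threshold

# two active properties (0.6 + 0.5 = 1.1) cross the 1.0 threshold
fires = neuron([True, False, True], [0.6, 0.9, 0.5], threshold=1.0)
```

The boolean output then feeds the next layer up as one of its input properties.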

This emergent expert recognition system isn't just applicable to recognizing things; it can also be used to recognize what to do, or what not to do, based upon desired outcomes (see the murderhobo chart above), those outcomes being what you are inherently biased to seek or avoid.
 

Cognisant · Condescending Bastard · #8
This "throwing shit at a wall and seeing what sticks" principle also applies to behavior, in the sense that when we don't know what to do we try things at random (actually I think crying is the default behavior for the combination of duress and not knowing what to do, but it takes a lot to make someone break down and cry), and when we find something that works we remember to do that the next time we're in the same or a similar situation.

In this way we develop a massive system of procedures, a system we spend our entire lives developing and optimizing. This isn't conscious thought, but it is the ~90% of stuff you can do without thinking about it.
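The try-at-random-and-remember loop is essentially a bandit learner. A hedged sketch with made-up payoffs: the agent explores occasionally, keeps a running average of what each action earned, and comes to repeat the one that worked.

```python
import random

# Epsilon-greedy "try things at random, remember what worked".
# payoffs are invented: action 1 pays best (0.8) but the agent
# doesn't know that; outcomes are observed with noise.
def learn(payoffs, trials=2000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(payoffs)
    values = [0.0] * len(payoffs)   # remembered average reward per action
    for _ in range(trials):
        if rng.random() < epsilon:
            a = rng.randrange(len(payoffs))          # try something at random
        else:
            a = max(range(len(payoffs)), key=values.__getitem__)  # repeat what worked
        reward = payoffs[a] + rng.gauss(0, 0.1)      # noisy outcome
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]  # running average
    return values

vals = learn([0.2, 0.8, 0.5])
```

After a couple of thousand trials the remembered values sit near the true payoffs, and the greedy choice has become a fixed procedure rather than a deliberation.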
 
Animekitty · #9
If I recall correctly, the 13th layer of simulated neurons is where cats can be distinguished from dogs. These layered networks are now superior to humans at detecting the breed of a cat or dog. All this was done under supervised learning: all the answers were known, and the network adjusted itself to give the correct answers. Unsupervised learning is the aim now. This requires working memory and intermediate and intrinsic motivation. It requires frontal-lobe mechanisms.

Currently I am reading a book called Thinking in Pictures: My Life with Autism by Temple Grandin. She says she can invent machines in her head the way you would use CAD, but more interactively. She has a large connection from her frontal lobes to her occipital lobes. The brain is different from artificial layered networks because it reflects back on itself. Imagination is a back-and-forth process, from the back of the brain to the front of the brain. Working memory is back-and-forth between two or more areas in the front brain (plus the visual areas sometimes).

From everything the brain has experienced, it can use that experience to reflect on itself: creating new ideas, predicting, imagining, hypothesizing, and being creative. The brain talks to itself and self-monitors. A.I. could do the same, but it needs real experience of a world. It needs to act in the world and see the consequences of its actions. That is how the reflecting becomes complex enough to reach the human level of common sense.
 

Cognisant · Condescending Bastard · #10
Unsupervised is internally supervised: we find sugar and salt pleasurable because we have an internal bias to enjoy those sensations.

But yeah, memory, that's a problem. It's all well and good to adjust behaviour based on immediate feedback, but how do we associate what we did yesterday with the outcome it achieved today?

Obviously there's some sort of memory prioritization going on but I don't know how it works or even how we recall memories.
 

Pizzabeak · Prolific Member · #11
We already have those. The real appeal of A.I. was what Virtual Reality was to the 90's. What it is, is an idea. And ideas are appealing, make you look smart, and are all around useful tools and aid. It should be exponential. Cyborg enhancements then putting consciousness in the superquantum computer are separate things. And this could be a simulation right now. So the elite A.I. overlords are already here and we are in the matrix, possibly within another matrix. They engineered it all to build themselves, which was us doing all the hard work (sounds similar to a certain tale detailing the dawn of man himself). It seems like the natural order of things, as far as progression goes.

Surely there are sectors out there working on all the problems - if not, you better get started yourself and take the initiative. People won't stop destroying the rainforests themselves even when it's too late. Then we'll only have virtual worlds and places. Life and the universe is a fractal existence. It's just utterly ridiculous, and damn near a load of nonsense. There's still a chance the A.I. could be and remain friendly, whether we input safeguards or failsafes into it. The thing was, and always had been, they could see us as a threat (see: Terminator, Skynet) to survival and wipe us out.

Even if we merge with them there'd still be some of us in there, which would be emotion or other human things, influencing it either way, so they'd rather kill us all and be rid of it, dealing with us no more. You never know what it would take. Some people just want to watch the world burn, hey, do what you need to do to get where you need to be as long as you aren't hurting others, or be dealt with, perhaps rather swiftly. There's a connection between laughing and crying, I have discovered and reported. It's all too much, to travel across the universe, cosmos, & galaxies and back. What is the measure of a man?
 

sushi · Active Member · #12
I am amazed that you can't find any... I have a lot of my own concepts, but why should I share if you don't?
I already said 90% of what I know so far, but then again, I haven't really researched the topic.


This video once again supports my hypothesis that AI is not really relevant in applications until it merges with humans, or has some sort of brain-computer interface. The most advanced uses I can see so far are in the medical area, and in genetic and human-body analysis.
 
Serac · #13
According to people at a finance conference I went to recently, the potential number of people that will be replaced by machines over the next decades is extremely small compared to how many will use AI as a complementary tool to human intelligence.
 

Cognisant · Condescending Bastard · #14

 

sushi · Active Member · #15
Only if we digitize consciousness, or build a brain-machine interface, does AI become relevant. Otherwise they will just take over our jobs and be like Jarvis, or overthrow us.
 
Serac · #16
Finally, I'm fully vindicated. I've always said that neural nets are just regression; now there is a paper saying exactly that: https://arxiv.org/pdf/1806.06850.pdf

there is a rough correspondence between any fitted NN and a fitted ordinary parametric polynomial regression (PR) model; in essence, NNs are a form of PR. We refer to this loose correspondence here as NNAEPR, Neural Nets Are Essentially Polynomial Models.
[...]
[this] suggests that in many applications, one might simply fit a polynomial model in the first place, bypassing NNs.
There ya go, all you "AI experts" out there. Your dirty secret is being revealed.
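The paper's practical suggestion (fit a polynomial directly instead of training a network) can be sketched from scratch. This is a hedged toy using the normal equations and Gaussian elimination, not the paper's code; on a clean 1-D quadratic it recovers the coefficients exactly.

```python
# Least-squares polynomial regression, pure Python:
# solve (A^T A) c = A^T y for the coefficient vector c.
def fit_poly(xs, ys, degree):
    n = degree + 1
    A = [[x ** j for j in range(n)] for x in xs]
    AtA = [[sum(A[k][i] * A[k][j] for k in range(len(xs))) for j in range(n)]
           for i in range(n)]
    Aty = [sum(A[k][i] * ys[k] for k in range(len(xs))) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(AtA[r][col]))
        AtA[col], AtA[piv] = AtA[piv], AtA[col]
        Aty[col], Aty[piv] = Aty[piv], Aty[col]
        for r in range(col + 1, n):
            f = AtA[r][col] / AtA[col][col]
            for c in range(col, n):
                AtA[r][c] -= f * AtA[col][c]
            Aty[r] -= f * Aty[col]
    # back substitution
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(AtA[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = (Aty[r] - s) / AtA[r][r]
    return coeffs

# recover y = 1 + 2x + 3x^2 from noiseless samples on [-2, 2]
xs = [x / 10 for x in range(-20, 21)]
ys = [1 + 2 * x + 3 * x ** 2 for x in xs]
coeffs = fit_poly(xs, ys, degree=2)
```

For a one-variable toy this is trivial; the paper's point is about when such direct fits can stand in for a trained net on real data, which this sketch does not settle.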
 
Serac · #18
So what, is that the only method by which to attempt to create human-level A.I. (?)
By human-level AI I assume you mean Artificial General Intelligence (aka AGI) at a human level? Obviously it's easy to make superhuman AIs for specific problems, chess engines for example, but that seems to be another story.

So OK, assuming we remove neural nets from the picture, what methods do you know of that can be used to make a human-level AGI?
 
Animekitty · #19
So OK, assuming we remove neural nets from the picture, what methods do you know of that can be used to make a human-level AGI?
I would model the white-matter fiber tracts that allow the front and back of the brain to work together. The tracts would connect in the model as they do in the frontal lobes for working memory, and so forth for the whole brain system, by which cognition is a control system with memory. White matter, in the way it is connected up, is the control mechanism of memory in the brain. It is why we can hold thoughts and manipulate incoming data and internal data. The parietal lobe, for instance, manipulates spatial data, but it can only do so through its white-matter connections in the arrangement they were laid down in with other areas. The brain follows basic control theory, utilizing memory as part of that. It's no different in principle from a drone learning to balance its flying patterns, except the brain is learning to control itself at a much more sophisticated level through how its memory is connected up.
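The "basic control theory" claim, in its simplest form, is a feedback loop: measure the error between where you are and where you want to be, and correct in proportion. A minimal sketch with an arbitrary gain, the same trick a drone autopilot uses to trim toward a setpoint:

```python
# Proportional feedback control: each step measures the error and
# corrects a fraction of it, driving the state toward the setpoint.
def control(state, setpoint, gain=0.5, steps=50):
    for _ in range(steps):
        error = setpoint - state   # feedback: measure the difference
        state += gain * error      # correct proportionally
    return state

final = control(0.0, 10.0)
```

With gain 0.5 the error halves every step, so after 50 steps the state is at the setpoint to within rounding. Whether brains reduce to anything this simple is, of course, exactly what the rest of the thread disputes.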
 
Serac · #20
The brain follows basic control theory, utilizing memory as part of that. It's no different in principle from a drone learning to balance its flying patterns, except the brain is learning to control itself at a much more sophisticated level through how its memory is connected up.
It's been estimated that the computational power needed to simulate a bee brain (which has about 800,000 neurons) is available in any regular consumer-market computer. If brains simply follow "basic control theory", why haven't people been able to simulate a bee brain?
 
Animekitty · #21
It's been estimated that the computational power needed to simulate a bee brain (which has about 800,000 neurons) is available in any regular consumer-market computer. If brains simply follow "basic control theory", why haven't people been able to simulate a bee brain?
Do we have a mapping of the connections of the bee brain to put into a simulation? The mapping of the human brain has been done in detail. And what would we do with a bee brain anyway? It is not designed for cognition the way ours is. You bring up a red herring. The Human Brain Project has simulated a mouse brain in a simulated mouse body that does mouse things, which actually is similar to human cognition (mammalian cognition). They don't give a f*ck about bee-brain red herrings. Bee brains have no use in intelligence research. Control theory, if you don't know, is dependent on feedback. Any system that has any cognition at all is a feedback system. Control theory is simply the design of the feedback system. The way the brain is wired up to use memory allows intelligence to function.
 
Serac · #22
@Animekitty
Yeah, according to your theory, one should map out the brain in question anatomically. My point was that a bee brain should be small enough to map it out, neuron by neuron. Yet, as mentioned, that hasn't helped for the purpose of replicating a bee's cognition. But somehow, in your mind it should be easier to map out 16 billion neurons than 800,000. Not sure how that works.
 
Animekitty · #23
Yeah, according to your theory, one should map out the brain in question anatomically. My point was that a bee brain should be small enough to map it out, neuron by neuron. Yet, as mentioned, that hasn't helped for the purpose of replicating a bee's cognition. But somehow, in your mind it should be easier to map out 16 billion neurons than 800,000. Not sure how that works.
What the fuck do you mean by "in your mind it should be easier to map out 16 billion neurons than 800,000"?

How do you know that is in my mind? Why do you think you know what others' positions are? You do this all the time. You strawman people. You have some kind of problem where you think you know what others are thinking, but you are deluded, because you argue against what is in your own head, what you think the person is saying, and not the actual person. You create a strawman in your head of the other person, not knowing what the other person is actually thinking.

In your mind, @Serac, you think that because no one is working on bee brains, the work on human brains is bullshit. That's why you bring up bee brains.

So what the fuck are you expecting? People with supercomputers are mapping the human brain. Why the fuck are you talking about bee brains? No one cares. No one gives a fuck about replicating the bee brain. I don't fucking know if it is easier to replicate a bee brain or a human brain. All I fucking know is that human brains are being mapped out.

This is the fucking point: human-level Artificial General Intelligence is possible, and I fucking proved it. You have no grounds, @Serac, to say otherwise.
 
Serac · #24
What the fuck do you mean by "in your mind it should be easier to map out 16 billion neurons than 800,000"?

How do you know that is in my mind? Why do you think you know what others' positions are? You do this all the time. You strawman people. You have some kind of problem where you think you know what others are thinking, but you are deluded, because you argue against what is in your own head, what you think the person is saying, and not the actual person. You create a strawman in your head of the other person, not knowing what the other person is actually thinking.
Well, I don't know exactly what is in your mind, but I know what I can infer from what you write here.

In case it's still going over your head, my point was: we can map the neurons of a bee brain (at least it's much easier than mapping a human brain), yet we cannot replicate its cognition. The only comment you had on that was: the human brain needs to be mapped neuron by neuron to replicate human cognition. So this implies that you think replicating human cognition is easier than replicating a bee's cognition, right? Otherwise you would first have to argue why it is impossible to replicate a mere bee's cognition by mapping its neurons.
 
Animekitty · #25
Well, I don't know exactly what is in your mind, but I know what I can infer from what you write here.

In case it's still going over your head, my point was: we can map the neurons of a bee brain (at least it's much easier than mapping a human brain), yet cannot replicate its cognition. The only comment you had to that was: the human brain needs to be mapped neuron by neuron to replicate human cognition. So this implies that you think that replicating human cognition is easier than mapping a bee's cognition, right? Otherwise you would first have to argue why it is impossible to replicate a mere bee's cognition.
Clearly, you infer the wrong things about what people say. You are wrong that we need to map every neuron in the brain. And you never said the bee brain was mapped, or that a simulation of it did not work. You never said a bee simulation existed. You never said the bee simulation failed. You just presented bee-brain mapping as a hypothetical situation, not as a real event. How dishonest of you.

In brain simulations, we need not map every neuron. We need only map the white-matter connections between the 180 regions of the cortex. Then we can have each memory area (all 180) follow the control paths of the white matter in the simulation. The control paths will be the paths modeled from the human brain: simulated white matter, plus the memory system of the 180 grey-matter areas. Then feedback can take place between the memory areas through the control paths (white matter), changing the memory of each of the 180 areas. The simulated brain will be in a simulated human.

The reason this will work is that the bee-brain simulation was designed on a static scan of neurons, whereas the human brain simulation will be dynamic. The memory will have plasticity. In the brain, feedback tells the memory when to change. Learning happens when a back-and-forth exchange happens and neurons grow new connections so as to respond appropriately. Clusters of neurons grow complex structures to respond as needed when receiving a signal. When things are not working out, you need to change by growing new connections or pruning them. This is dynamic: growth and change happen in negative and positive conditions throughout the whole brain. This is simulated in the 180 areas, with the control paths transmitting signals between memory areas. This is not a static bee-brain simulation.
 

sushi

Active Member
Local time
Today, 23:36
Joined
Aug 15, 2013
Messages
363
#26
It seems most applications now revolve around intelligent systems and autonomous workers.
 

Cognisant

Condescending Bastard
Local time
Today, 11:36
Joined
Dec 12, 2009
Messages
7,780
#27
It's been estimated that the computational power needed to simulate a bee brain (which has about 800,000 neurons) is available in any regular consumer-market computer. If brains simply follow "basic control theory", why haven't people been able to simulate a bee brain?
We have drones that can fly and navigate autonomously, and we can add some image-recognition software to find the patterns bees use to find flowers, so making an artificial bee is (at least from a software perspective) relatively straightforward; their behavior isn't really all that complex.

Is that an artificial bee or just clever software? What's the difference?
 
Serac · #28
It's been estimated that the computational power needed to simulate a bee brain (which has about 800,000 neurons) is available in any regular consumer-market computer. If brains simply follow "basic control theory", why haven't people been able to simulate a bee brain?
We have drones that can fly and navigate autonomously and we can add some image recognition software to find the patterns bees use to find flowers so making an artificial bee is (at least from a software perspective) relatively straightforward, their behavior isn't really all that complex.

Is that an artificial bee or just clever software? What's the difference?
Well, I don't think the complexity of a bee's behavior is really comparable to a drone's. That's beside the point though, which was that it's a mistake to think you can go from mapping neurons to getting the desired intelligence.

I actually recently read a similar point in N.N. Taleb's Skin in the Game. He puts it in terms of how it is impossible to go from understanding single units (e.g. neurons) to understanding the system as a whole (e.g. the brain). He put it as follows:
Understanding how the subparts of the brain (say, neurons) work will never allow us to understand how the brain works. A group of neurons or genes, like a group of people, differs from the individual components – because the interactions are not necessarily linear. So far we have no f***ing idea how the brain of the worm C. elegans works, which has around three hundred neurons. C. elegans was the first living unit to have its genes sequenced. Now consider that the human brain has about one hundred billion neurons, and that going from 300 to 301 neurons, because of the curse of dimensionality, may double the complexity.
 