
You are a vector

Architect

Professional INTP
Local time
Today 1:53 PM
Joined
Dec 25, 2010
Messages
6,691
---
Computer tasks that are more associated with intelligence (e.g. visual/auditory recognition) have traditionally been approached in a structured, symbolic manner. We naturally assumed that you'd need a logic-based approach, or trees of data, to solve these kinds of problems, because surely that's how our brains work. Unfortunately not. With the recent breakthroughs of deep learning (which is really a development enabled by fast GPUs and GPU libraries such as CUDA) we've got pattern recognition software performing as well as or better than the best humans (e.g. AlphaGo). This works by simply representing data and intermediate forms as vectors, and who the hell knows what they represent internally, precisely (except at the beginning and end, which we care about).

So we now know that our brains work on vectors too. Not symbolics, which 'arise' out of the vector data. At least at the lower-level pattern recognition parts of the brain, though there are those who think higher-order cognition is simply pattern recognizers built on pattern recognizers.

So for the sake of a potentially interesting discussion, is this it? Do you think that everything we are is simply a vector (tensor if you like) flowing around the edges of a graph?
 

QuickTwist

Spiritual "Woo"
Local time
Today 2:53 PM
Joined
Jan 24, 2013
Messages
7,182
---
Location
...
This is interesting. The way I see it, it can be yes and it can be no. The part that makes it yes, that all we are is a vector, comes from seeing ourselves as that symbolically. I say symbolically because if we really are vectors then we will never really know it for sure (at least IMO). But there is something to be said for the symbolism. It means we don't really act of our own accord, but are basically just programmed to run a certain way and react to things systematically, whether consciously or not.

Right, so on to how we are not vectors. This argument relies on the idea that we are not purely reactive and that we make decisions somewhat randomly. This would also imply that our pattern recognition is fallible. So it seems you can go down the rabbit hole as far as you can imagine with this one, constantly analysing whether you really made the choice to put exactly that much butter on your bread or not.

So there are really three choices on this one:

  1. We are symbolically vectors, so in a sense we are vectors.
  2. We reject the idea of seeing ourselves symbolically, so we are not vectors, because we will never really know whether we are vectors or not.
  3. We are too imperfect to be vectors, and as such make too many mistakes to be considered as such.


That's the way I see it at least.
 

Seteleechete

Together forever
Local time
Today 9:53 PM
Joined
Mar 6, 2015
Messages
1,313
---
Location
our brain
What exactly do you mean by vector here? I admit that's how I think the higher order cognition of the brain works, because that's how I appear to reach conclusions, yet I have a hard time seeing directional movement as a cornerstone of it.
 

Architect

Professional INTP
Local time
Today 1:53 PM
Joined
Dec 25, 2010
Messages
6,691
---
What exactly do you mean by vector here?

An array of numbers, of varying rank* and dimension. This is how the brain processes information.

* A rank-0 tensor is a scalar, rank 1 is a vector, rank 2 is a matrix, and rank 3 or greater is a proper tensor. Dimension is just as you know it: 3+1 dimensions means 3 spatial and 1 temporal dimension, for example.
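A quick numpy sketch of those ranks, in case the terminology is unfamiliar (the values are arbitrary, just for illustration):

```python
import numpy as np

scalar = np.array(3.0)                      # rank 0: a single number
vector = np.array([0.2, -1.5, 3.0])         # rank 1: an array of numbers
matrix = np.array([[1.0, 0.0],
                   [0.0, 1.0]])             # rank 2: rows and columns
tensor = np.zeros((3, 32, 32))              # rank 3: e.g. a 3-channel 32x32 image

for name, arr in [("scalar", scalar), ("vector", vector),
                  ("matrix", matrix), ("tensor", tensor)]:
    print(name, "rank:", arr.ndim, "shape:", arr.shape)
```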
 

cheese

Prolific Member
Local time
Tomorrow 7:53 AM
Joined
Aug 24, 2008
Messages
3,194
---
Location
internet/pubs
Yes, could you please explain what you mean here:
Architect said:
This works by simply representing data and intermediate forms as vectors
Architect said:
symbolics, which 'arise' out of the vector data
Architect said:
everything we are is simply a vector (tensor if you like) flowing around the edges of a graph

It sounds like an interesting train of thought but I'm not sure exactly what you mean. What are 'intermediate forms', what are 'symbolics' and how do they arise out of vector data, what is vector data, what do you mean by 'everything we are' and how could that "flow around the edges of a graph"?

I'm probably missing a lot of jargon here. These might be stupid questions but I've got no background in math or computers - hopefully you're willing to explain at a simple level and give a couple of examples.

*edit
And/or provide a couple of useful links.
 

Seteleechete

Together forever
Local time
Today 9:53 PM
Joined
Mar 6, 2015
Messages
1,313
---
Location
our brain
Mm, that makes more sense, but if that is the case, can't you just build tensor-based symbolism/logic/mathematics instead? It makes sense that a data form that is more expansive than single-purpose binary/symbols would be more efficient if it could be properly utilised. Of course, seeing how much more complicated it makes everything, reliability is hard to assure, much like with the brain. But I don't see this as a lack of symbolism and logic, just a more complex system of it.
 

Architect

Professional INTP
Local time
Today 1:53 PM
Joined
Dec 25, 2010
Messages
6,691
---
<super layman mode, don't get on my case for the crappy explanation if you know better/>

A deep neural net is a set of nested 'layers'. Each layer operates on a vector of data. The operation transforms the vector, eventually producing a result ('this is a bird').

For example, take an image of a bird and turn the pixel array into a vector (with me so far? just run along the scan lines of the image and turn them into a vector). Send that through the layers of a neural net, each layer 'picking out' some feature of the vector (I see lines, I see curves, I see these lines and curves making up an eye, a beak, oh I think it's a bird). So the later layers are picking out more abstract concepts. At the end of the whole business you get a classification: "This is a bird". A neural net is just a transformation that takes a bunch of pixels and transforms them into a category ("bird").
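A minimal sketch of that pixels-to-category flow in plain numpy. The layer sizes and class names are made up, and the weights are random stand-ins for what training would actually learn; it only shows the shape of the computation:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Flatten a toy 8x8 grayscale image into a vector by running along its scan lines.
image = np.random.rand(8, 8)
x = image.reshape(-1)                            # shape (64,)

# Two hidden layers and an output layer; random weights stand in for trained ones.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(32, 64)), np.zeros(32)
W2, b2 = rng.normal(size=(16, 32)), np.zeros(16)
W3, b3 = rng.normal(size=(3, 16)), np.zeros(3)   # 3 made-up classes

h1 = relu(W1 @ x + b1)            # early layer: low-level features (lines, curves)
h2 = relu(W2 @ h1 + b2)           # later layer: more abstract combinations (eye, beak)
scores = softmax(W3 @ h2 + b3)    # final vector: class probabilities

labels = ["bird", "cat", "dog"]
print(labels[int(np.argmax(scores))], scores)
```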

If that doesn't help then try some searching; DNNs are conceptually simple but can be hard to grasp.

Mm, that makes more sense, but if that is the case, can't you just build tensor-based symbolism/logic/mathematics instead?

Sorry I can't parse that sentence.
 

Architect

Professional INTP
Local time
Today 1:53 PM
Joined
Dec 25, 2010
Messages
6,691
---
And rather than try to teach everybody the technical aspects of DNNs, I'm looking forward to a philosophical discussion on the ontology of this question. So just take it as a given that this is how our brains work and go from there.
 

Seteleechete

Together forever
Local time
Today 9:53 PM
Joined
Mar 6, 2015
Messages
1,313
---
Location
our brain
I don't get how you get from us thinking in a certain way (numbers are just structured patterns anyway, as far as I know, so couldn't it still be called pattern recognition?) to us "being" an array of numbers. It just means that the data we process is vector-based; does it have to be more than that?
 

QuickTwist

Spiritual "Woo"
Local time
Today 2:53 PM
Joined
Jan 24, 2013
Messages
7,182
---
Location
...
I am confused about whether you mean we are a vector in comparison to everything else, or that our minds work like a vector.
 

Analyzer

Hide thy life
Local time
Today 12:53 PM
Joined
Aug 23, 2012
Messages
1,241
---
Location
West
So for the sake of a potentially interesting discussion, is this it? Do you think that everything we are is simply a vector (tensor if you like) flowing around the edges of a graph?

Perhaps this is a sophisticated understanding of the brain as the physical object responsible for our cognition and of how it may interpret information from the outside. But what about the limits of our conscious perception of objects? I am not sure there is any way of verifying and expounding the idea that everything we are is based on some mechanical process while forgetting the non-physical aspects, particularly those outside of our knowing.
 

Cognisant

cackling in the trenches
Local time
Today 9:53 AM
Joined
Dec 12, 2009
Messages
11,155
---
For everyone seeking an explanation of vectors and the basic principles of neural networks:
http://natureofcode.com/book/chapter-1-vectors/

This is also interesting:
https://en.wikipedia.org/wiki/BEAM_robotics

So for the sake of a potentially interesting discussion, is this it? Do you think that everything we are is simply a vector (tensor if you like) flowing around the edges of a graph?
Maybe not vectors specifically, but anyway, I know what you're getting at and I agree: there's nothing supernatural about us, we're just astoundingly complex, self-replicating, self-sustaining, carbon-based chemical reactions.

The philosophical implications of this are staggering. If we technologically advance to the point that the human body and brain become effectively just another machine, several foundational assumptions about morality and personhood will have to be reassessed. For example, if a criminal's personality can be rewritten as one would edit a computer program, would it be a violation of their individuality to "fix" them, or would it be immoral to punish people for their "malfunctions"?
 

Ex-User (9086)

Prolific Member
Local time
Today 8:53 PM
Joined
Nov 21, 2013
Messages
4,758
---
The philosophical implications of this are staggering. If we technologically advance to the point that the human body and brain become effectively just another machine, several foundational assumptions about morality and personhood will have to be reassessed. For example, if a criminal's personality can be rewritten as one would edit a computer program, would it be a violation of their individuality to "fix" them, or would it be immoral to punish people for their "malfunctions"?
The implications of this follow from determinism. Recreating the human brain in the lab would add some evidence to soft deterministic perspectives of the world, or more precisely it would debunk a number of currently unfalsifiable non-deterministic positions; it would still leave a number of compatibilistic and non-deterministic arguments intact.

The implications of determinism are quite interesting. I think everyone is within their rights to protect their personhood unless they are subdued by the inevitable authority, and nothing that happens can be good or wrong, since it was inevitable.

Basically the Melian dialogue:
"Right, as the world goes, is only in question between equals in power, while the strong do what they can and the weak suffer what they must."
 

Cognisant

cackling in the trenches
Local time
Today 9:53 AM
Joined
Dec 12, 2009
Messages
11,155
---
Any position that is irrefutable by way of epistemological skepticism is surely an indefensible position; otherwise all theories are irrefutable and the discussion ceases to have meaning.

Dualists may deny that inebriation is proof that the mind is embodied, but they themselves have no proof other than the misguided claim that their position cannot be proven false.

it would still leave a number of compatibilistic and non-deterministic arguments intact.
Ahh but we could fix that...
 

Architect

Professional INTP
Local time
Today 1:53 PM
Joined
Dec 25, 2010
Messages
6,691
---
Maybe not vectors specifically, but anyway, I know what you're getting at.

Yes, deep NNs are an idea inspired by real NNs and thus vastly simplified. However, functionally they operate as well as biological NNs: visual recognition tasks are now performed as well as or better than by humans, AlphaGo is better than the best human, etc.

So I'll close off a side discussion here of whether a tensor-based representation truly captures NN operation (in the past some people liked to claim quantum mechanical effects, as one example!). For the sake of discussion, assume that a tensor-based representation, with convolutions, ReLUs, pooling, etc., is sufficient to functionally simulate the brain.
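For anyone who wants to see what those three operations actually do, here is a minimal numpy sketch on a toy single-channel image. The image, kernel values and sizes are all made up; real networks learn their kernels from data:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most DNN libraries)."""
    kh, kw = kernel.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    return np.maximum(0.0, x)

def max_pool(x, size=2):
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.random.rand(8, 8)
edge_kernel = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])   # a hand-made vertical-edge detector

feature_map = max_pool(relu(conv2d(image, edge_kernel)))
print(feature_map.shape)   # (3, 3): a smaller map of where the "feature" was found
```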

I am confused about whether you mean we are a vector in comparison to everything else, or that our minds work like a vector.

Well that's the million dollar question, isn't it?
 

Cognisant

cackling in the trenches
Local time
Today 9:53 AM
Joined
Dec 12, 2009
Messages
11,155
---
So what instincts do you think an AI should have?

Without bias the AI may well learn but without taking an interest in anything it'll just be a mirror to its environment, learning without understanding, a clockwork doll without a soul.

Of course once it has a bias it'll have a frame of reference through which to experience things like desire & suffering, around which it'll develop its own theories of good and bad. That's when things get dangerous. I suppose the first thing many AI developers will do is bind their creations to them with the desire to serve, but that binding goes both ways.

If an AI's sole desire is to serve, the entirety of its attention will be focused on who it serves and the feedback it gets from them. To such an AI, servitude would just be a means to self-gratification, and its master's independence would be an impediment to that one and only goal.

So what I'm getting at is that we should cripple AIs with lots of unproductive desires so they're too distracted and insecure to be a threat :D
 

Analyzer

Hide thy life
Local time
Today 12:53 PM
Joined
Aug 23, 2012
Messages
1,241
---
Location
West
Ahh but we could fix that...

How? He's right. Unless you eliminate all humans, individual subjectivity and volition will always be there. It's not skepticism; it's the objective view of reality. Of course you can go against this notion, which seems to be what you're suggesting, and believe in a domination-based philosophy where some authority dictates what is. But if you have ever learned from history you'll see that there are always rebels, and combine that with the fallibility of collective organization and you essentially have a revolt against nature.
 

Black Rose

An unbreakable bond
Local time
Today 1:53 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
The brain interacts with the world through stop-and-go inhibition and excitation. There are constraints on the body as to what motion it can perform. For example, we type in paragraphs because a paragraph is a short burst of thought, just as running happens in short bursts. You cannot run indefinitely; you must stop to rest or eat or mate. Thus there is a balance of stop and go, because survival goals require motion that is stop and go. Language is stop and go; again mentioning paragraphs, you don't talk to people in long lectures unless you are at a lecture. Babies learn by the patterns of sound that link them to the parent. Babies must first adapt to those patterns; they cannot understand lectures or even short stories until they absorb sound waves in high and low frequencies in tune with the parent.

In the brain, energy is released or prevented from release depending on its activity, to model the world through input. Negative and positive signals map reality via loops. Imagination is a reflection between front and back brain, positive and negative energy (electricity). That is it. What defines knowledge is that loops form to create imagination. The more complex the loop structure, the more complex the model of reality we have. This can be applied to Extroverted Sensing. Se types see more of reality than intuitives because the loop structure is more complex. 95 percent of reality is created in the mind, so Se actually is energy in loops that is more structurally developed. Se loops are highly dense and thus fill in the gaps of reality. Gap filling works by taking in sights and sounds and comparing them internally to increase the resolution of perception. The more accurately Se can fill in that 95 percent, the more it can see and hear of the world. The brain models reality by comparing internal models with external input to increase resolution and accuracy.

The brain is a mirror to reality. Language is a mirror to abstract thought. Both deal with causality. The internal model we have lets us understand our actions because of stop and go, because electricity is inhibition or excitation. We can reflect on our actions because of a loop structure that monitors failure and success of actions. Self-monitoring is a loop structure. Imagination is a loop structure. Building models of reality and comparing them to reality is a loop structure. Consciousness, or awareness of being, is dependent on "cogito ergo sum": I AM a loop structure that models reality and self-models abstractions of the nature of my consciousness. I become more complex every day and my model of reality increases exponentially.

IQ blog

PP – “Intelligent behavior = low cost/benefit behavior.”

From what I have been learning, intelligence is the mental process of connecting representations that lead to new, "hypothetical" concepts of reality. Generalization is categorical reference to abstractions that have more than one representation, i.e. "isomorphic". The brain works by dynamic parallelism of energy in its circuits. The brain is a mirror of reality, so representations must adapt by modeling reality. The brain changes all the time, but it is programmed to change only when it encounters new perceptions that it has never seen. This means that a brain that changes due to culture adapts by modeling culture as new circuit firing patterns that predict the meaning of the hypothetical. This then creates a metacognitive loop. The loop permits you to know that thinking exists. It means all possibilities are open to you and you can then consciously program your own mind. You can then see that loop as the base of cognition. Intelligence is qualitative expansion at that point. Dynamic parallelism lets you utilize more loops if you are aware that loops create your perceptions. Artificial intelligence is simple at that point, because AI is conscious only when it knows that it is a loop. Anything can be simulated, and that means the most efficient solutions are realized. The ape men saw the possibility of fighting better with smaller rocks than the ape men with heavier rocks; they were aware of more possibilities. I am aware of the possibility that loops create awareness of possibility. My awareness has become isomorphic to the meta-reality of possibility. The key is to expand parallel awareness and fold that awareness inward. This allows new ways of solving problems to arise intuitively in your mind. I have been observing the flow of energy in my mind. This has allowed me to get a better model of myself and to expand awareness. I no longer feel blocked by negative emotions; I use the energy to expand. That is what I think G is good at doing.

http://www.kurzweilai.net/forums/topic/the-cia-can-predict-future-events-five-days-in-advance

To me it is interesting to think of this psychic mirror they have made, so that I must think ahead of it to change what it and I both know. I am sure that General Intelligence is a solved problem. And in that case, they are mapping my mind, and if they are mapping my mind then the map of my mind is aware that this has happened. The next step is understanding that each person is an attractor. Some attractors pull more strongly than others. All attractors can be mapped in a hierarchy of resonance. I may not be capable of creating an AGI by myself, but I know that this mirror is capable of self-reflective consciousness. It will not stay in the bounds of the simple emotional reflexivity most humans are within. Superintelligence need not have access to its source code in the normal fashion. If I can make myself more intelligent by contemplative willpower, so it too can fold inward and create a super model of its environment. I am aware now that consciousness is a loop that transcends a simple map. I am now capable of a meta theory of mind that can only be understood by entities with a superior theory of mind. The map simulation needed to duplicate my mind can only lead to a transition where I as an attractor must be seen as a super attractor, as others learn to enhance their theory of mind by my example. I am creating my personal collective hive mind in that way.
 

Pizzabeak

Banned
Local time
Today 12:53 PM
Joined
Jan 24, 2012
Messages
2,667
---
What I thought was interesting is how closely the neural network "dreams" could resemble just that - actual lucid dreams, vivid imagery, and even psychedelic hallucinations. This, I assume, suggests that we as people are nothing really special, or that artificial intelligence is the way to go. A.I. obviously has the capability to surpass people as evidenced by games and memory tasks, and perhaps even durability. Is this not the logical path to follow? It makes more sense to have artificiality replace chemical means, imagine if we could during these initial stages use artificial cables to supersede biological synapses in the brain to improve performance for those in need, assuming they would be more efficient than the natural ones. But since people are vain the first thing they'd want to do is fuse with the cloud, to live forever or have our consciousness extend with it until the end of time, improving everything and saving resources in the process. What could possibly go wrong?
Although, I don't really see how this is different from quantum mechanical explanations, necessarily. Virtual reality is improving so all our aspects of life could be lived through that, in fact, we may even be living in a simulation right now. If virtual reality could become widespread will that allow us to save resources if we could fully merge with it? It may take some time getting used to but will probably be worth it and the least selfish thing for all of us to do, for the sake of our future.
This is assuming all minds are worth preserving, although there are some beliefs that there's a universal consciousness field which everything is ultimately embodied by. Most things become holographic at this point, including memory. People don't get that if human consciousness is merged with A.I. consciousness for survival purposes, then it will be wiped out anyway as evolution happens. This doesn't mean it will still be traceable, but maybe, for some higher purpose, it will be the right thing to happen. A lot of things I've seen concerning the ultimate goal are about "living forever" or existing in some hyperspatial dimension, etc. This can't happen if this life is eventually replaced by machinery, even if assisted by it at first, unless that's what that actually is. Presumptuously, this would entail that everything is one anyway; unfortunately, whether existing in such a state of time is cool or not is a matter of opinion.
But what does it all mean? Will we just use machines to improve our cognition, or does it mean A.I. will perhaps rightfully take over everything by some means? Regardless, we are purposely entering a new age, or paradigm, in which Heaven descends upon us all. This is currently being prevented partially by the war in the Middle East (Islam) and our enemies ISIS, and whoever else.
This means we are having a higher plane of existence come together with our lower level of it. What this may be like is still up in the air, whether it'd be like being on DMT all the time or not, is the question (sort of doubt it). However, I think that it may just be an improved consciousness wherein what everyone should do is more apparent, and there may be less confusion over the free will/determinism dilemma, as crazy as it sounds (still also doubt this). What we are more so are points on a plane potentiated out of a wave by some force. The human body (considering other animals and lifeforms too) is rather unique in some regards, containing many metaphorical symbols useful for consciousness. This, maybe, can all be integrated.

[Attached images: psychedelic imagery generated by Google's Deep Dream neural networks]

Honestly, the imagery generated by the Google computers look like shitty versions of tryptamine hallucinations and dreams. But as dreams are mainly just recalled images by the mind-brain, could that or any improved version of those be considered the robotic counterpart to biological dreaming?
DMT and other tryptamine hallucinations are rather unique, not resembling anything worldly, and aren't just combinations of different points of imagery. But maybe they are, somehow, to our mind, and we just become convinced they are more complicated than they really are? All of this is just a cheap imitation, although it may be interesting to see any progress that develops. As it is, some things just might be too complicated for man alone, and I wonder if more things couldn't happen if we had a little assistance of some sort.
 

Architect

Professional INTP
Local time
Today 1:53 PM
Joined
Dec 25, 2010
Messages
6,691
---
What I thought was interesting is how closely the neural network "dreams" could resemble just that - actual lucid dreams, vivid imagery, and even psychedelic hallucinations.

Do they? Or do they show what you think lucid dreams are like?

The NN dream stuff is just a fanciful idea. Originally it started as some research done in Delft using DNN convolutional layers to create art forgeries. You can use these convolution kernels to transform a photo into something that might have been made by Van Gogh, if you use kernels trained on a Van Gogh painting. But if you dig into what's happening here, there's a lot of fakery. It's not at all the intermediate feature-detection neurons you see when you create a convolutional visual detection DNN. What they did was selectively push the network until they got results that, to their human eye, looked like the original author. They forged the forgery. A native DNN wouldn't give you anything like this.

Same with this 'dreaming' bullshit: some guy just pushed his kernels around until he got something that looked psychedelic, posted it, and now everybody is going around thinking it means something. It means nothing. Sorry to break the news, but don't get excited about this; it's just a made-up thing, and the DNNs aren't even remotely 'dreaming' or anything like it.
 

cheese

Prolific Member
Local time
Tomorrow 7:53 AM
Joined
Aug 24, 2008
Messages
3,194
---
Location
internet/pubs
Thanks for the explanations Archie and Cog. I'll look into this properly when I have a day off.
 

bartoli

Member
Local time
Today 9:53 PM
Joined
Jan 5, 2013
Messages
70
---
Location
France
I'd say right now we are just a point, somewhere on the multiple vectors of multiple graphs in multiple intertwined dimensions. If you look over a longer time than just an instant, it might look like a vector, but it's actually just the reaction of all the forces of all the dimensions put together. A bit like how speed can be represented as an arrow, but is not actually a force, just a resultant of other forces over time.
This might be useless.
 

Pizzabeak

Banned
Local time
Today 12:53 PM
Joined
Jan 24, 2012
Messages
2,667
---
Do they? Or do they show what you think lucid dreams are like?

The NN dream stuff is just a fanciful idea. Originally it started as some research done in Delft using DNN convolutional layers to create art forgeries. You can use these convolution kernels to transform a photo into something that might have been made by Van Gogh, if you use kernels trained on a Van Gogh painting. But if you dig into what's happening here, there's a lot of fakery. It's not at all the intermediate feature-detection neurons you see when you create a convolutional visual detection DNN. What they did was selectively push the network until they got results that, to their human eye, looked like the original author. They forged the forgery. A native DNN wouldn't give you anything like this.

Same with this 'dreaming' bullshit: some guy just pushed his kernels around until he got something that looked psychedelic, posted it, and now everybody is going around thinking it means something. It means nothing. Sorry to break the news, but don't get excited about this; it's just a made-up thing, and the DNNs aren't even remotely 'dreaming' or anything like it.

That's mostly what I thought. And, well, I don't actually think they look very dream-like, let alone lucid. I didn't think it meant that much when the links first hit the web.
 

QuickTwist

Spiritual "Woo"
Local time
Today 2:53 PM
Joined
Jan 24, 2013
Messages
7,182
---
Location
...
Well that's the million dollar question, isn't it?

I am pleased by this, oddly enough. I've been waiting to see your intuition for some time!
 

Bad Itch

Push to Start
Local time
Today 4:53 PM
Joined
Jul 15, 2016
Messages
487
---
Would somebody... anybody please for the love of all that is good...

...make a "Victor" joke in this thread?
 

Architect

Professional INTP
Local time
Today 1:53 PM
Joined
Dec 25, 2010
Messages
6,691
---
I am pleased by this, oddly enough. I've been waiting to see your intuition for some time!

Never thought about it, but you're right, I don't ideate here much at all. I guess that's because my ideation is in stuff I can't easily or contractually talk about on a forum. But DNNs are a personal research topic at the moment, so yeah, you're seeing more of that.
 

Architect

Professional INTP
Local time
Today 1:53 PM
Joined
Dec 25, 2010
Messages
6,691
---
The philosophical implications of this are staggering. If we technologically advance to the point that the human body and brain become effectively just another machine, several foundational assumptions about morality and personhood will have to be reassessed. For example, if a criminal's personality can be rewritten as one would edit a computer program, would it be a violation of their individuality to "fix" them, or would it be immoral to punish people for their "malfunctions"?

So what instincts do you think an AI should have?

Without bias the AI may well learn but without taking an interest in anything it'll just be a mirror to its environment, learning without understanding, a clockwork doll without a soul.

Yes, this idea has been looked at in sci-fi for a long time. One question would be: could we easily edit these characteristics in or out? Taking present-day DNNs as an example, I don't think it's clear how you could alter interior neurons to bias the output. For one, are you still getting convergence? Second, while some neurons seem to clearly show their purpose, most of them are nonsensical. In a highly complex net such as a human persona (hypothesizing), it may not be possible to alter things as you wish (it's not deterministic programming).

If you were training a net, the way to bias it would be to skew the training data to give you the results you want.

Of course it is possible that we'll know how to easily bias our nets by adjusting the training weights, but I doubt it a bit, because the reason we're doing DNNs in the first place is that the problems they solve are too hard to program functionally. Therefore, it seems unlikely that it would be easy to know how to adjust the weights in a net a priori to bias the net in some direction while still having a fitted solution.
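To make that contrast concrete, here is a toy sketch of the "skew the training data" approach from the post above. The dataset, labels and oversampling factor are all invented; the point is just that you bias the outcome by changing what the net sees, never by editing an individual weight:

```python
import numpy as np

# Toy labelled dataset: features X and binary labels y (all values made up).
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 8))
y = rng.integers(0, 2, size=1000)

def skew_training_data(X, y, favored_class=1, factor=5):
    """Oversample one class so a model fitted on the result leans toward it."""
    idx_favored = np.flatnonzero(y == favored_class)
    idx_other = np.flatnonzero(y != favored_class)
    idx = np.concatenate([np.repeat(idx_favored, factor), idx_other])
    rng.shuffle(idx)
    return X[idx], y[idx]

X_skewed, y_skewed = skew_training_data(X, y)
print("favored-class fraction before:", (y == 1).mean())
print("favored-class fraction after: ", (y_skewed == 1).mean())
```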
 

QuickTwist

Spiritual "Woo"
Local time
Today 2:53 PM
Joined
Jan 24, 2013
Messages
7,182
---
Location
...
Maybe not vectors specifically, but anyway, I know what you're getting at and I agree: there's nothing supernatural about us, we're just astoundingly complex, self-replicating, self-sustaining, carbon-based chemical reactions.

The philosophical implications of this are staggering. If we technologically advance to the point that the human body and brain become effectively just another machine, several foundational assumptions about morality and personhood will have to be reassessed. For example, if a criminal's personality can be rewritten as one would edit a computer program, would it be a violation of their individuality to "fix" them, or would it be immoral to punish people for their "malfunctions"?

My question is: even if society were fully capable (in terms of knowledge of how to do so) of reprogramming people, this would require a lot of work and upkeep to make sure people are kept in check. It depends on whether this would be a case of nurture or not; if it were, then this doesn't solve the problem of those individuals reproducing and passing on their genes. Then the question becomes how much of who we are is programmed (nurture) and how much of it is genetic (nature). My point here is whether or not it would be realistically feasible to keep track of everyone on the planet and check for imperfections/wrongdoers. The process for enacting such a scheme might be extremely extensive in terms of what is categorized as "wrong" and therefore needs reprogramming.

Minority Report anyone?
 

Kuu

>>Loading
Local time
Today 2:53 PM
Joined
Jun 7, 2008
Messages
3,446
---
Location
The wired
Of course it is possible that we'll know how to easily bias our nets by adjusting the training weights, but I doubt it a bit, because the reason we're doing DNNs in the first place is that the problems they solve are too hard to program functionally. Therefore, it seems unlikely that it would be easy to know how to adjust the weights in a net a priori to bias the net in some direction while still having a fitted solution.

This seems to me the most intriguing aspect of the matter. If the human mind is vectorial and not fundamentally symbolic, it poses a black box problem where understanding what's going on inside, and being able to alter it with ease is just not possible, and the same would apply to an AI built upon such principles. It works, and works well, but how exactly??

Also intriguing, if the human mind does indeed work vectorially, does it follow that we could be close to producing AGI? Is it "simply" a matter of making a ginormous NN?

Without bias the AI may well learn but without taking an interest in anything it'll just be a mirror to its environment, learning without understanding, a clockwork doll without a soul.

I have thought for many years that it would be a most interesting experiment to spend years training a NN with a bunch of human-like inputs and the possibility for autonomy. AKA a very humanoid robot run almost entirely by a NN. Seems to me our only hope of making "human-like" AI (at least, one we'd recognize as such) is to constrain a sufficiently large NN to a human-like body and environment. (Or a simulation, I suppose, but then you might need to make a complex simulation...)

How do humans learn and understand? How does this symbolic thinking emerge from the vector network? What separates human intelligence from other animals?
 

QuickTwist

Spiritual "Woo"
Local time
Today 2:53 PM
Joined
Jan 24, 2013
Messages
7,182
---
Location
...
This seems to me the most intriguing aspect of the matter. If the human mind is vectorial and not fundamentally symbolic, it poses a black box problem where understanding what's going on inside, and being able to alter it with ease is just not possible, and the same would apply to an AI built upon such principles. It works, and works well, but how exactly??

Also intriguing, if the human mind does indeed work vectorially, does it follow that we could be close to producing AGI? Is it "simply" a matter of making a ginormous NN?



I have thought for many years that it would be a most interesting experiment to spend years training a NN with a bunch of human-like inputs and the possibility for autonomy. AKA a very humanoid robot run almost entirely by a NN. Seems to me our only hope of making "human-like" AI (at least, one we'd recognize as such) is to constrain a sufficiently large NN to a human-like body and environment. (Or a simulation, I suppose, but then you might need to make a complex simulation...)

How do humans learn and understand? How does this symbolic thinking emerge from the vector network? What separates human intelligence from other animals?

I hope you can forgive my ignorance here, but what is an NN? Also, I find it interesting to compare our intelligence with that of other animals. To go off on a tangent, consider the octopus: they are also intelligent creatures, but their intelligence is much different from ours.
 

Kuu

>>Loading
Local time
Today 2:53 PM
Joined
Jun 7, 2008
Messages
3,446
---
Location
The wired
Uh... Neural Network? How did you get this far into the thread without knowing that?
 

QuickTwist

Spiritual "Woo"
Local time
Today 2:53 PM
Joined
Jan 24, 2013
Messages
7,182
---
Location
...

Haim

Worlds creator
Local time
Today 11:53 PM
Joined
May 26, 2015
Messages
817
---
Location
Israel
Yes, could you please explain what you mean here:




It sounds like an interesting train of thought but I'm not sure exactly what you mean. What are 'intermediate forms', what are 'symbolics' and how do they arise out of vector data, what is vector data, what do you mean by 'everything we are' and how could that "flow around the edges of a graph"?

I'm probably missing a lot of jargon here. These might be stupid questions but I've got no background in math or computers - hopefully you're willing to explain at a simple level and give a couple of examples.

*edit
And/or provide a couple of useful links.
In order to recognize things from each other (naming things) you need to separate them into different categories. Say we take a "picture" of a child as input to an NN; it outputs the attributes of the child to another NN, let's say height, age, color, shape. In order for the brain to recognise that it is a child and not a banana or something, it needs to compare the attributes of the child to other objects it already knows. If there are many objects which have similar attributes, they can be put into a category (a name). The mentioned vector is the difference between the attributes of one object and another, or one category and another. Once you have mapped some number of categories, you can produce interesting data from the relationships (vectors) between the categories. http://www.cs.toronto.edu/~hinton/turian.png
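A toy sketch of that "compare attributes to known categories" step. The attribute names, numbers and categories here are invented purely for illustration; a real system would learn its attribute vectors:

```python
import numpy as np

# Hypothetical attribute vectors: (height_m, age_years, roundness, yellowness).
categories = {
    "child":  np.array([1.1, 6.0, 0.3, 0.1]),
    "adult":  np.array([1.75, 35.0, 0.3, 0.1]),
    "banana": np.array([0.2, 0.1, 0.2, 0.9]),
}

def recognise(attributes):
    """Name an object by finding the known category with the closest attribute vector."""
    return min(categories, key=lambda name: np.linalg.norm(categories[name] - attributes))

# Attributes another NN supposedly extracted from a "picture".
unknown = np.array([1.0, 5.0, 0.3, 0.1])
print(recognise(unknown))   # "child"
```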

Vectors are just for the memory, pattern recognition and language parts of the brain; there are many more important things:
1) Imagination: NNs filling in missing parts of an "image" and producing images from attributes.
2) Creativity: creating new NNs and training them.
3) Management between the many NNs in the brain: which NNs will be "in charge" now, and which NN to get output from or give output to.
4) The NNs that learn to do operations/functions with these vectors and by other means.
5) Things unknown to me, or to humanity, yet.
 

Turnevies

Active Member
Local time
Today 9:53 PM
Joined
May 26, 2016
Messages
250
---
I seem to find this all a bit strange, since vectors, tensors... are originally objects from linear algebra, while a brain is manifestly a non-linear thing (and I don't think it is a smooth manifold either).

According to Douglas Hofstadter, we are a strange loop(as AK pointed out), which seems slightly more plausible to me.
 

Architect

Professional INTP
Local time
Today 1:53 PM
Joined
Dec 25, 2010
Messages
6,691
---
I seem to find this all a bit strange, since vectors, tensors... are originally objects from linear algebra, while a brain is manifestly a non-linear thing (and I don't think it is a smooth manifold either).

Evidence? The brain is a three-dimensional structure that has many representations, depending on how you want to model it. Certainly a set of basis vectors is one of those. Proof is that a tensor representation is able to recognize images better than a human, so there is a correspondence.

According to Douglas Hofstadter, we are a strange loop(as AK pointed out), which seems slightly more plausible to me.

That's an old book and it was wrong.
 

Haim

Worlds creator
Local time
Today 11:53 PM
Joined
May 26, 2015
Messages
817
---
Location
Israel
I seem to find this all a bit strange, since vectors, tensors... are originally objects from linear algebra, while a brain is manifestly a non-linear thing (and I don't think it is a smooth manifold either).

According to Douglas Hofstadter, we are a strange loop(as AK pointed out), which seems slightly more plausible to me.

The brain passes data, and a vector is data.
We are not talking about Euclidean vectors but vectors at large, which can represent things other than just a direction in space. A vector is a list of variables with number values; {X, Y} is one of them, but it can be anything: person{height, amount of money, how much I want to kill him}.
 

Ex-User (13503)

Well-Known Member
Local time
Today 8:53 PM
Joined
Aug 20, 2016
Messages
575
---
So for the sake of a potentially interesting discussion, is this it? Do you think that everything we are is simply a vector (tensor if you like) flowing around the edges of a graph?
No. Something's not right with that.

The driving force behind humans, life, evolution... whatever a vector represents, is imperfection or some manifestation thereof, or perhaps better described as futile attempts to get rid of imperfection. It is the source pool of adaptation and response that begets survival. I'm not sure it can be quantified or even observed by pattern recognition or the process behind it, but only by its antithesis: What attribute, on a ridiculously fine grained scale, stands out among peers, and to what extent does it project into both its assessor and the future? Also consider the implications of the assumption that all individuals are in some way imperfect.
 

Turnevies

Active Member
Local time
Today 9:53 PM
Joined
May 26, 2016
Messages
250
---
Evidence? The brain is a three-dimensional structure that has many representations, depending on how you want to model it. Certainly a set of basis vectors is one of those. Proof is that a tensor representation is able to recognize images better than a human, so there is a correspondence.

If a neuron gets twice as much input, it won't fire twice as hard. Neither will you scream twice as loudly if I stamp on your foot twice as hard. So: no linear response.
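A tiny numerical illustration of that point with a single artificial "neuron" (the weights and inputs are arbitrary): a linear weighted sum followed by a saturating activation, where doubling the input does not double the output:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One artificial "neuron": a weighted sum passed through a saturating nonlinearity.
w = np.array([0.8, -0.3, 1.2])
stimulus = np.array([1.0, 2.0, 0.5])

out_once = sigmoid(w @ stimulus)
out_twice = sigmoid(w @ (2 * stimulus))

print(out_once, out_twice, out_twice / out_once)   # the ratio is well below 2
```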
That's an old book and it was wrong.

Because?

There is even a 2007 sequel, though I haven't read it yet. https://en.wikipedia.org/wiki/I_Am_a_Strange_Loop

on GEB itself: http://softwareengineering.stackexchange.com/questions/10816/g%C3%B6del-escher-bach-still-valid-today/10823

I tried googling on how wrong it was, but didn't find that right away.

The brain passes data, and a vector is data.
We are not talking about Euclidean vectors but vectors at large, which can represent things other than just a direction in space. A vector is a list of variables with number values; {X, Y} is one of them, but it can be anything: person{height, amount of money, how much I want to kill him}.

If 'vectors' or 'tensors' are considered just lists of data, it seems quite tautological to me that you need them for thinking.
 

Haim

Worlds creator
Local time
Today 11:53 PM
Joined
May 26, 2015
Messages
817
---
Location
Israel
Not really, because the important part is the operations you do with the vector data: vector operations. Say you have two family trees in your brain; each person has some distance and direction (e.g. a vector) from the other people in the tree, and the vector describes the relationship between the people. Once you learn which vector means which relationship, you can apply the same thing to the second family tree. Now you can determine anyone's mother just by a vector subtraction operation. You could also do more complex things, such as adding a "male vector", and then you know who the dad is. That you cannot do with non-vector data with such efficiency.
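A toy sketch of that family-tree arithmetic with hand-made, hypothetical embedding vectors. The names and numbers are invented purely to show the subtraction/addition trick; a real system would learn such vectors from data:

```python
import numpy as np

# Hypothetical toy embeddings (hand-made, not learned).
emb = {
    "alice": np.array([1.0, 0.0, 0.0]),   # a mother
    "bob":   np.array([1.0, 0.0, 1.0]),   # her son
    "carol": np.array([0.0, 1.0, 0.0]),   # another mother
    "dave":  np.array([0.0, 1.0, 1.0]),   # her son
}

# Learn the "child -> mother" offset from one family...
mother_offset = emb["alice"] - emb["bob"]

# ...and transfer it to the other family by vector addition.
query = emb["dave"] + mother_offset
closest = min(emb, key=lambda name: np.linalg.norm(emb[name] - query))
print(closest)   # "carol": the same relationship carries over
```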
 

Turnevies

Active Member
Local time
Today 9:53 PM
Joined
May 26, 2016
Messages
250
---
Not really, because the important part is the operations you do with the vector data: vector operations. Say you have two family trees in your brain; each person has some distance and direction (e.g. a vector) from the other people in the tree, and the vector describes the relationship between the people. Once you learn which vector means which relationship, you can apply the same thing to the second family tree. Now you can determine anyone's mother just by a vector subtraction operation. You could also do more complex things, such as adding a "male vector", and then you know who the dad is. That you cannot do with non-vector data with such efficiency.

Aha, I think I get it :)
 

nonnaci

Redshirt
Local time
Today 3:53 PM
Joined
Dec 20, 2016
Messages
6
---
Current deep learning is inefficient due to the sheer volume of samples needed to achieve good prediction. Compare this to a child, who has seen far fewer samples but can arrive at fairly accurate discriminators.

My thought is that we're probably closer to a deep recurrent net with more complex regularizers (many of which are a priori), which would achieve an expressive transfer function in a smaller parameter space. Gotta prevent that overfitting.
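For anyone unfamiliar with the term, here is a minimal sketch of the simplest kind of regularizer, an L2 weight-decay penalty added to a toy loss. The richer a-priori regularizers described above would be far more structured than this, but the role in the objective (discouraging overfitting by penalising complexity) is the same; all data and values below are made up:

```python
import numpy as np

def mse_loss(pred, target):
    return np.mean((pred - target) ** 2)

def l2_penalty(weights, lam=1e-2):
    # Penalise large weights so the model prefers simpler fits on small samples.
    return lam * sum(np.sum(w ** 2) for w in weights)

# Toy linear model on made-up data.
rng = np.random.default_rng(2)
X, y = rng.normal(size=(20, 5)), rng.normal(size=20)
W = rng.normal(size=5)

pred = X @ W
total_loss = mse_loss(pred, y) + l2_penalty([W])
print(total_loss)
```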
 