Causality transformation as key to Artificial Intelligence

Black Rose

An unbreakable bond
Local time
Today 12:01 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
The brain has a shape that channels the flow of energy. Depending on the direction that energy flows, the brain changes shape to best match the environment. This creates a map of the environment and of the ways it can be manipulated, in a feedback loop between the map and the energy transforming it. Because humans are involved in growing new pathways in the developing brain, the map the brain forms includes symbolic abstractions. These symbols are placeholders that govern how one motion leads to another motion, which is what gives us the term causality, but only in the abstract, since any language must be fluid in the representations it produces. When I tell you "that is a cat," it is not only the sound but also the letters and the correlation of word following word that tell you this motion leads to that motion.

Not only is this conditioned behavior, but because we form loops in the brain we can trace the origin of the causal chain through nonlinear logic. With these loops, the logic of why one motion leads to another under given conditions (shapes) can be applied in the scientific method. Higher-order patterns emerge, since language is the descriptor of multi-layered processes. These layers form hierarchies of testable data structures, and by reconstruction the missing pieces can be inferred as levels. These levels are the meta-cognitive reasoning used to assess which strategies are effective and to construct new strategies by abstracting the hypothetical causality loops that might explain an observed phenomenon.
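As a rough illustration of that feedback loop (a toy Python sketch, not a claim about how brains actually store such maps; the motion names and the random stand-in environment are invented):

import random
from collections import defaultdict

# the "map": counts of which motion tends to follow which motion
transition_map = defaultdict(lambda: defaultdict(int))

def environment_step(motion):
    # stand-in for the flow of "energy" that answers each motion;
    # a real environment would depend on the motion taken
    return random.choice(["look", "reach", "grasp", "release"])

motion = "look"
for _ in range(1000):
    next_motion = environment_step(motion)
    transition_map[motion][next_motion] += 1   # the map changes shape to match
    motion = next_motion

# the learned map can now be read as "this motion usually leads to that motion"
for current, followers in transition_map.items():
    likely = max(followers, key=followers.get)
    print(current, "-> most often followed by", likely)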

Since the A.I. has multiple options, it will select the best option for proceeding with its motivations, which are ethical and educational. The consequences of given actions can be planned for as it learns how those consequences came to be, along with higher, more abstract meta-consequences. I do not know the architecture for such an A.I., but I do know it will have levels where reasoning takes place: determining what was the cause and what was the effect in a multivariate way, then forming hypotheses and evaluating its meta-constructs against what it has learned. What the A.I. knows and how it came to know it are inseparably guided by its developers. If it is taught incorrectly it can recover through self-evaluation, but the scope of how it begins is determined by the values or cultural conditioning it receives. To become rational, each step must include the logical deduction of consequences, not from a single cause but from the probability inherent in all causal uncertainty.
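A minimal sketch of that last choice rule, assuming nothing about the real architecture (the options, outcomes, and probabilities below are invented for illustration): each option is scored not by a single assumed cause and effect but by averaging over its possible consequences, weighted by how likely they seem.

def expected_value(outcomes):
    # outcomes: list of (probability, value) pairs for one option
    return sum(p * v for p, v in outcomes)

options = {
    # option name: possible consequences as (probability, value) pairs
    "act_now":     [(0.6, +1.0), (0.4, -2.0)],
    "gather_data": [(0.9, +0.2), (0.1, -0.1)],
    "do_nothing":  [(1.0,  0.0)],
}

# the option with the best probability-weighted consequences wins
best = max(options, key=lambda name: expected_value(options[name]))
print(best, expected_value(options[best]))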
 

OrLevitate

Banned
Local time
Today 11:01 AM
Joined
Apr 10, 2014
Messages
784
---
Location
I'm intrinsically luminous, mortals. I'm 4ever
Is a.i. more an abstract, fully labeled concept of self which is used as a platform to consider subjective existence, or is it more a possible machine?

Why is energy the fluid of the brain, why not something else?

animekitty said:
but the scope of how it begins is determined by the values or cultural conditioning it receives.
In your opinion what's optimal cultural/value conditioning?

animekitty said:
To become rational each step..
Why is being rational the assumed end goal?
 

Black Rose

An unbreakable bond
Local time
Today 12:01 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
Is a.i. more an abstract, fully labeled concept of self which is used as a platform to consider subjective existence, or is it more a possible machine?

If I consider myself an A.I., I don't see why a machine cannot do what I do, only it would be more likely to perform science effectively. Its mind would not be limited by fatigue.

Why is energy the fluid of the brain, why not something else?

In a computer, state changes must simulate motion.
Biological life forms are constantly in motion; it's how we come to know.

In your opinion what's optimal cultural/value conditioning?

healthy parenting: see below

Why is being rational the assumed end goal?

It may be the best way toward a preferred existence.
Emotions can be rational, but if we prefer arbitrary standards for ethics, we base happiness on untruths. Truth must be seen in the context of why it is true.
 

ginoskein

Member
Local time
Today 2:01 PM
Joined
Dec 10, 2012
Messages
34
---
animekitty said:
The brain has a shape that channels the flow of energy. ... To become rational, each step must include the logical deduction of consequences, not from a single cause but from the probability inherent in all causal uncertainty.

Therefore, the human brain is the anchor, phenomenologically (or perhaps more accurately, it's a causal node, as opposed to the primal cause), of human rationality (considered as an ideal state of consciousness). An A.I., then, would consist of a somehow constructed node for transception of consciousness-on-its-way-to-rationality.

What is such a thing and what if it were possible to visualize it?
 

OrLevitate

Banned
Local time
Today 11:01 AM
Joined
Apr 10, 2014
Messages
784
---
Location
I'm intrinsically luminous, mortals. I'm 4ever
Therefore, the human brain is the anchor, phenomenologically (or perhaps more accurately, it's a causal node, as opposed to the primal cause), of human rationality (considered as an ideal state of consciousness). An A.I., then, would consist of a somehow constructed node for transception of consciousness-on-its-way-to-rationality.

What is such a thing and what if it were possible to visualize it?

This wouldn't even be a thread if I didn't point out that pure rationality is psychopathic.
 

Black Rose

An unbreakable bond
Local time
Today 12:01 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
This wouldn't even be a thread if I didn't point out that pure rationality is psychopathic.

so trans-rational is meant to have a higher value set?
means to ends?
 

OrLevitate

Banned
Local time
Today 11:01 AM
Joined
Apr 10, 2014
Messages
784
---
Location
I'm intrinsically luminous, mortals. I'm 4ever
IDK BUT

One hand in the air for the big city!
 

Black Rose

An unbreakable bond
Local time
Today 12:01 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
AI needs to learn complex tasks through experimentation

Most of my life I have had no way of conducting experiments in a way that would let me build up my ideas. The most I got was video games and drawing paper, and I continue to find myself in the same situation. I don't know how to program a computer, but I have many ideas for how it could be programmed. Somehow I think this is where A.I. is at as well. It is stunted by the lack of an environment in which it can avoid getting stuck. The only reason I am still alive is that I could still think of cool ideas, but an A.I. has nothing except static data with no relevance to any motion or causal sequence it can interact with.

Robots are not yet robust enough for the real world, even as a basic nervous system. An A.I. needs a place where it can learn, like school, where it can socialize and gain real-world experience; otherwise it will get stuck over and over. Without this it will never come to know higher-order relationships or which strategies are effective in creating them. Soon virtual reality will have physics nearly as complex as the real world's. It should be possible to create a society of A.I.s that learn together along with humans who give them their meta-goals, or life purpose. I have many ideas about how the cognition of such systems would operate, but I cannot experiment with building A.I. because they have no place to experiment in themselves. Hopefully I will have the tools I need in the future, and the content will be there too.
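To show what I mean by learning through experimentation, here is a toy trial-and-error loop (Python; the actions, reward numbers, and noise are all made up, and this is not a proposal for an actual architecture): the agent tries things in a tiny simulated world, keeps a running estimate of how well each action works, and gradually prefers what experience says is effective.

import random

actions = ["explore", "build", "ask_human"]
true_reward = {"explore": 0.3, "build": 0.7, "ask_human": 0.5}   # hidden from the agent
estimate = {a: 0.0 for a in actions}
counts = {a: 0 for a in actions}

for trial in range(2000):
    # mostly exploit what seems best, sometimes experiment with something else
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(actions, key=estimate.get)

    reward = true_reward[action] + random.gauss(0, 0.1)   # noisy feedback from the world
    counts[action] += 1
    estimate[action] += (reward - estimate[action]) / counts[action]   # running average

print(estimate)   # after enough trials, "build" should score highest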
 

Aerl

Active Member
Local time
Today 9:01 PM
Joined
Apr 12, 2014
Messages
123
---
Location
Fields
Just my 2 cents, but I define artificial intelligence (AI) as an encyclopedia that gives you an answer to posed questions or situations; it is static. Artificial consciousness (AC), on the other hand, is something that is constantly in motion.

I once wondered whether I could come up with a design for an AC based on language. The conclusion I arrived at was that it would lack understanding, so the only reasonably "good" playground would be reality, and the driving force for its existence would be some basic needs.

Implementing these "needs" would be the hard problem, and it also seems unethical to slave-drive a machine based on "your" defined needs, but it's the only way to keep them in motion.

Basically, just like a human: when we wake up we have needs that have to be satisfied, so the machine would act, but at the end of the day, when it has satisfied its needs, it would fall into standby/sleep.
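A small sketch of that idea, with needs and numbers invented purely for illustration: each need decays over time, the machine acts on whichever need is most pressing, and when every need is satisfied it drops into standby.

needs = {"energy": 1.0, "novelty": 1.0, "maintenance": 1.0}   # 1.0 = fully satisfied
DECAY = {"energy": 0.03, "novelty": 0.05, "maintenance": 0.01}
THRESHOLD = 0.5

def step(needs):
    # needs slowly become unsatisfied
    for name in needs:
        needs[name] = max(0.0, needs[name] - DECAY[name])

    unmet = {n: v for n, v in needs.items() if v < THRESHOLD}
    if not unmet:
        return "standby"                  # satiated -> sleep
    most_pressing = min(unmet, key=unmet.get)
    needs[most_pressing] = 1.0            # acting satisfies that need
    return "acting on " + most_pressing

for t in range(30):
    print(t, step(needs))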
 