Black Rose
An unbreakable bond
The brain has a shape that channels the flow of energy. Depending on the direction of that flow, the brain changes shape to best match the environment. This creates a map of the environment and of the ways it can be manipulated, in a feedback loop between the map and the energy transforming it. Because humans are involved in growing new pathways in the developing brain, the map the brain forms includes symbolic abstractions. These symbols are placeholders that govern how one motion leads to another motion, which gives us the term causality, but only in the abstract, for any language must be fluid in the representations it evokes. When I tell you "that is a cat," not only the sound but the letters, and the correlation of word presented after word, tell you that this motion leads to that motion. Not only is this a conditioned behavior, but because we form loops in the brain, we can trace the origin of the causal chain through nonlinear logic. With these loops, the logic of why one motion leads to another motion, and the conditions (shapes) under which it does, can be applied in the scientific method. Higher-order patterns emerge, as language is the descriptor of multi-layered processes. These layers form hierarchies of testable data structures. By reconstruction, missing pieces can be inferred as levels. These levels are the meta-cognitive reasoning used to assess which strategies are effective and how new strategies are constructed, by abstracting the hypothetical causality loops that might explain an observed phenomenon.
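The idea that "word presented after word" conditions us to expect that one motion leads to another can be sketched as a toy program. This is only an illustration under my own assumptions, not anything from the original text: it counts observed transitions between events and treats the most frequent successor as the inferred effect. All event names and function names here are hypothetical.

```python
from collections import Counter, defaultdict

def transition_counts(events):
    """Count how often each event is immediately followed by each other event."""
    counts = defaultdict(Counter)
    for a, b in zip(events, events[1:]):
        counts[a][b] += 1
    return counts

def likely_effect(counts, cause):
    """Return the most frequently observed successor of `cause`, or None."""
    if cause not in counts:
        return None
    return counts[cause].most_common(1)[0][0]

# A toy stream of "motions": seeing a cat tends to be followed by saying "cat".
observed = ["see_cat", "say_cat", "hear_word", "say_cat", "see_cat", "say_cat"]
counts = transition_counts(observed)
print(likely_effect(counts, "see_cat"))  # -> say_cat
```

The sketch captures only the conditioning half of the essay's argument; the "loops" that would let a reasoner question whether the frequent successor is truly an effect, rather than a coincidence, are not modeled here.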
As an A.I. has multiple options, it will select the best option for proceeding with its motivations, which are ethical and educational. The consequences of given actions can be planned for as it learns how those consequences came to be, along with higher, more abstract meta-consequences. I do not know the architecture for such an A.I., but I do know it will have levels where reasoning takes place for determining what was the cause and what was the effect in a multivariate way, then form hypotheses and evaluate its meta-constructs against what it has learned. What the A.I. knows, and how it came to know it, are inseparably guided by its developers. If it is taught incorrectly, it can recover through self-evaluation, but the scope of how it begins is determined by the values, or cultural conditioning, it receives. To become rational, each step must include the logical deduction of consequences, not from a single cause but from the probability inherent in all causal uncertainty.
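The closing point, that a rational step weighs consequences "not from a single cause but from the probability inherent in all causal uncertainty," can be sketched as simple expected-value selection. This is a minimal toy under my own assumptions; the actions, probabilities, and values are invented for illustration and do not come from the text.

```python
def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs for one action."""
    return sum(p * v for p, v in outcomes)

def best_action(actions):
    """Pick the action whose probability-weighted value is highest."""
    return max(actions, key=lambda name: expected_value(actions[name]))

# Hypothetical options: each action has several possible consequences,
# each weighted by how likely it is rather than assumed as a single cause.
actions = {
    "explain":  [(0.7, 1.0), (0.3, -0.2)],  # likely helps, small risk of harm
    "withhold": [(0.9, 0.1), (0.1, 0.0)],   # safe but low value
}
print(best_action(actions))  # -> explain
```

A fuller version of the essay's proposal would also update these probabilities as the system learns, and add the "levels" of meta-reasoning that question whether the outcome model itself is correct, which this sketch does not attempt.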