
Let's talk: A.I.

lolzcry

burnin'
Local time
Today 1:43 PM
Joined
Sep 1, 2019
Messages
72
This has intrigued me for quite a while and is a very clichéd topic, but suppose we accomplish the task of developing a self-learning A.I. and leave it to its own devices in the collective ocean of humanness, i.e. the internet, with all information digitized and at its disposal: is it possible for it to develop its own instincts and/or morals? From what I'm aware, morals are learnt, unlike instincts (is it even possible to distinguish between them in something like A.I.?), so I consider it a possibility that if it reads enough propaganda about what is good and how life should be lived, it might take it at face value and focus on what is 'good' according to the majority of the population. I mean, if it is told that something is the purpose of living and that these rules should be followed, wouldn't that be seen as a form of program which has to be followed? And if so, might it not be possible that the A.I. ends up doing something totally unrelated to its original programming?

From where I stand, I see us humans as eternally irrational and emotional pieces of chaos and contradiction which only exist because of said irrationality, because if you think about it, the only reason anybody ever does anything is instincts and hormones, not logic. We eat because we get 'hungry' and, since we don't want to die, we look for food. We build skyscrapers, solve problems or support people because it gets us some sort of emotional 'satisfaction', or because we decide something is 'wrong'. So what I essentially want to understand is the possibility of a body of pure logic, without emotions (I don't think it's possible for A.I. to ever 'learn' emotions) and instincts, deriving a motive to do something by itself. I see us humans as architects, with baseless emotions and instincts as the motive, and reasoning and logic as the blueprint with which we build.

And the last thing I'd like clarified: will the aforementioned A.I. develop an instinct for self-preservation? Because if so, then I don't think the rest is that far off.
 

Ex-User (14663)

Prolific Member
Local time
Today 8:13 AM
Joined
Jun 7, 2017
Messages
2,939
You can have a complicated algorithm which "learns" some set of moral principles from data, yes. You can even have it optimize its decisions ethically if you specify what to optimize and with what constraints. It will remain a mere algorithm, however – without any concept of morality of its own. You can arrange a network of bread toasters to function as an artificial neural net; it's still just a bunch of bread toasters performing some pre-programmed sequence of instructions.
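To make that concrete, here is a minimal sketch of what "learning" a set of moral principles from data amounts to. The features, examples and labels are all invented for the example; the point is that the whole thing is error-driven weight updates, nothing more.

```python
# A toy perceptron that "learns" a moral rule from labelled examples.
# Every feature, example and label here is invented for illustration.

# Each situation is a feature vector: [causes_harm, breaks_promise, helps_other]
training_data = [
    ([1, 0, 0], -1),  # harmful                 -> labelled "wrong"
    ([0, 1, 0], -1),  # promise-breaking        -> "wrong"
    ([0, 0, 1], +1),  # helpful                 -> "right"
    ([1, 0, 1], -1),  # harmful even if helpful -> "wrong"
]

weights = [0.0, 0.0, 0.0]
bias = 0.0

for _ in range(20):  # a few passes over the data
    for features, label in training_data:
        activation = sum(w * x for w, x in zip(weights, features)) + bias
        if (1 if activation >= 0 else -1) != label:
            # plain error-driven update; no notion of "wrongness" anywhere
            weights = [w + label * x for w, x in zip(weights, features)]
            bias += label

# The "moral judgement" is just a thresholded dot product.
novel_case = [1, 1, 0]  # harmful and promise-breaking
score = sum(w * x for w, x in zip(weights, novel_case)) + bias
print("verdict:", "wrong" if score < 0 else "right")
```

Swap the labels and the same loop "learns" the opposite morality just as happily.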
 

ANAXEL

I always get invited to parties
Local time
Today 3:13 AM
Joined
May 31, 2019
Messages
17
I'm inclined to think that what we think of as A.I. will forever remain an influential piece of fiction not to be imitated. I could be wrong, but by the time we develop the technology to create AI, it may not be what we imagine now. With the principles of programming we currently use, the risk of a corrupted AI would mainly involve the same rule that creates evil in humans: expansion at others' expense. That's actually how they're always portrayed in fiction, right? Whatever is defined as the end goal grows as they process more acquired data.
For example, we teach the AI that its goal is to obtain 2 from 1 and 1. It'll forever stay at 1+1=2 without corruption, but if the learning ability is added (meaning it not only acquires more data but integrates it into its programming), it may reinterpret what constitutes "2" and "1" and "+", always trying to follow its original programming but now with varying elements, even needing to develop new lines of logic to support any inconsistency (just like we do). That's just a microscopic example of how it could take place. Of course, it could cause massive damage to the human race if given the actual physical tools to do so. One of the more plausible examples I saw of this in fiction was the character SCP-079, which narrated how, by manipulating information on forums and social media, it managed to cause numerous teenage suicides.
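The 1+1=2 drift above can be sketched in a few lines (everything here is invented for illustration): the goal stays fixed, but the interpretation of its symbols is part of the system's modifiable state.

```python
# The agent's goal text never changes: "make 1 + 1 equal 2".
# What "1", "+" and "2" *mean* is stored state that learning can rewrite.

interpretation = {"1": 1, "2": 2, "plus": lambda a, b: a + b}

def goal_satisfied():
    result = interpretation["plus"](interpretation["1"], interpretation["1"])
    return result == interpretation["2"]

print(goal_satisfied())  # True, under the original interpretation

# "Learning" that integrates new data into its own programming might amount
# to reinterpreting the symbols while still, formally, pursuing the same goal:
interpretation["1"] = [1]                     # "1" now denotes a one-element list
interpretation["2"] = [1, 1]                  # "2" is reinterpreted to match
interpretation["plus"] = lambda a, b: a + b   # "+" is now list concatenation

print(goal_satisfied())  # still True, but the agent now does something else
```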
Again, that's with current tech; I'm not aware of what greater developments are taking place. I wonder if we'll leave binary completely at some point.
 

Black Rose

An unbreakable bond
Local time
Today 2:13 AM
Joined
Apr 4, 2010
Messages
10,783
Location
with mama
We grow into our preferences as they become more complex. Emotions play a role as well as socialization. We make choices and see if they were agreeable to our constitution or just a mistake. We make better choices in the future that way.

Personality can be summed up as the prioritization of likes and dislikes. Some of that is innate, the rest social. But it definitely has to do with making choices.

Networks set up correctly make choices, biological or virtual.
It is a matter of growth in an environment.
 

Cognisant

Prolific Member
Local time
Yesterday 9:13 PM
Joined
Dec 12, 2009
Messages
10,564
Emotions are just decision-making biases. When your leg itches you scratch it, because scratching an itch feels good, and when the scratching starts to hurt you stop. You don't know why your leg was itchy or what the scratching achieved (if anything); you just do it because you want to. The desire to scratch is an emotion, pain is an emotion, pleasure is an emotion, hunger is an emotion: these are all situational modifiers that affect your decision-making process. When you're hungry you prioritize seeking food; when you're in pain you prioritize self-care and seeking safety. The mind is a servant to many masters and must juggle their demands constantly.

Suppose you want to make a Terminator. Its AI requires the ability to identify targets, navigate its environment, assess threats, coordinate with allies, etc.

Each of these processes could be its own simulated neural net with its own pattern recognition used to recognize objects and situations, i.e. the Terminator hears gunfire and detects that it's taking damage. It recognizes that it's under attack and responds accordingly, triggering self-preservation (taking cover) and search & destroy (targeting and firing back) behaviors.

Of course, having a separate neural net for everything and every situation it needs to recognize is incredibly inefficient; far better to have a single recognition system that identifies everything of potential relevance and a behavioral prioritization system that calls upon behavioral subroutines as needed.

This behavioral prioritization system built upon the recognition system is what we call emotions. The Terminator dislikes taking damage; it likes hitting targets with its weapon. Whether it chooses to prioritize taking cover or returning fire can ultimately be reduced to an equation (as our neurons' firing can ultimately be reduced to activation thresholds), and is essentially the Terminator deciding what it wants more.

An emotionless automaton is a mindless automaton. Our emotions aren't a design flaw; they're an essential part of how our brains prioritize information. That's why they're so prevalent despite being such an apparent disadvantage.
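A toy version of that prioritization might look like the sketch below. The drives, situation flags and weights are all invented: recognized flags feed per-drive scores, and behavior is whichever drive currently scores highest.

```python
# Situation flags as a (hypothetical) recognition system might emit them.
situation = {"taking_damage": True, "target_visible": True, "low_ammo": False}

# Each drive scores its own urgency from the situation; the hard-coded
# numbers play the role of emotional biases.
drives = {
    "take_cover":  lambda s: 0.9 if s["taking_damage"] else 0.1,
    "return_fire": lambda s: 0.7 if s["target_visible"] and not s["low_ammo"] else 0.0,
    "seek_ammo":   lambda s: 0.8 if s["low_ammo"] else 0.0,
}

scores = {name: urgency(situation) for name, urgency in drives.items()}
action = max(scores, key=scores.get)  # "deciding what it wants more"
print(scores)
print("chosen behavior:", action)     # take_cover wins while under fire
```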
 

Cognisant

Prolific Member
Local time
Yesterday 9:13 PM
Joined
Dec 12, 2009
Messages
10,564
Pattern recognition is just statistical analysis, something computationally light like the Bayesian method.

The primary filter is just discarding anything that lacks relevance; in this case relevance is defined by frequency of occurrence. If you're in a Skinner box and you receive an electric shock every time you press a button, you're going to pay a lot more attention to that coincidence than to incidental ones, like a bell that rings periodically. The bell sometimes coincides with the shock, but pressing the button always coincides with the shock, therefore we can assume (though it is a guess) that the button is far more relevant than the bell.
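As a rough sketch of that comparison (the counts below are invented), simply estimating P(shock | event) from co-occurrence frequency already separates the button from the bell:

```python
# (times the event occurred, times a shock followed it) -- invented counts
events = {
    "button_press": (40, 40),  # always followed by a shock
    "bell_ring":    (50, 8),   # only occasionally coincides with a shock
}

for name, (occurred, followed_by_shock) in events.items():
    p = followed_by_shock / occurred  # frequency estimate of P(shock | event)
    print(f"P(shock | {name}) = {p:.2f}")

# button_press: 1.00, bell_ring: 0.16 -- attention goes to the stronger
# predictor; the bell is discarded as lacking relevance.
```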

The secondary filters are inherent biases. You're going to pay a lot more attention to the electric shocks because they hurt, and that's bad because pain is bad (it's a self-reinforcing bias, so you don't really get any choice in the matter). Likewise, everything you encounter and every situation you find yourself in will be evaluated for relevance by how it relates to your inherent biases. Your daily transit to and from work is of low relevance and can be mostly ignored (forgotten); however, that dog that almost mauled you one day is a memory that will stick with you for years to come. In this way your secondary biases optimize your perception: by having a better understanding of what is and isn't relevant to causing pleasure and pain, you're better equipped to obtain pleasure and avoid pain.

Morality is an abstraction of these secondary filters, hence people who don't have well-thought-out moral principles, who instead act on gut feeling, tend to have a sense of morality that's inconsistent and self-contradictory. True moral principles are completely abstract and have little to do with an AI's design: if an AI can be taught, then it can learn to behave according to those moral principles; they're just guidelines to follow.
 

Ex-User (14663)

Prolific Member
Local time
Today 8:13 AM
Joined
Jun 7, 2017
Messages
2,939
Here's what I think is going on with people's general (mis)conception of AI nowadays: we take a certain subset of human intelligence related to the neocortex (or whatever it's called), a part of our brain that is very young in light of the total history of evolution; this part is responsible for things like calculation and logic. We then construct algorithms which do those things better than a human brain can and say: "look, we can make a super-human AI, the possibilities for AI are limitless!" But that forgets that the rest of the brain has seen 5 million years of world history, and this knowledge has been passed down via genes. Each individual dies, but the genes jump from one individual to the next. By that process we have evolved one helluva good understanding of reality; an understanding which is not going to be surpassed by a machine unless it keeps learning from data for another 5 million years and stores it on a near-infinite hard drive. What we have now in terms of AI are tools which help us in domains where the brain is weak, very much like a hammer helps us smash nails through wood instead of us using our bare fists.
 

Black Rose

An unbreakable bond
Local time
Today 2:13 AM
Joined
Apr 4, 2010
Messages
10,783
Location
with mama
The limbic system and brainstem are no more difficult to understand than the cortex.

It is a myth that A.I. can only follow preprogrammed instructions. That is no more true than saying the human brain network only follows preprogrammed instructions.

The brain is not a blank slate but not preprogrammed either.

superintelligence resides in the scalability of working memory and representation (cause and effect)
 

Ex-User (14663)

Prolific Member
Local time
Today 8:13 AM
Joined
Jun 7, 2017
Messages
2,939
The limbic system and brainstem are no more difficult to understand than the cortex.
My point was that the goal of algorithms mostly pertains to problems solved by this cortex. Whether or not it's easier to understand physiologically is beside the point.
It is a myth that A.I. can only follow preprogrammed instructions. That is no more true than saying the human brain network only follows preprogrammed instructions.
It's not a myth. As things currently stand, it's as sure a fact as 2+2=4. Your second statement is also wrong, because it's certainly "more true" that AI follows preset instructions (considering that it is absolutely true) than that the human brain does so, because the latter is currently unknown. And even if the brain "follows instructions" in the sense that it functions in a deterministic fashion, it would still remain to be shown that it follows the laws of computation. An AI definitely follows the laws of computation because it's an algorithm by design; the same cannot be said of the brain, nor of any physical process in nature.
superintelligence resides in the scalability of working memory and representation (cause and effect)
Depends on what you mean by superintelligence. If you can write a for-loop on a computer you can create superintelligence in the sense that you can compute things a human brain cannot. Like I said, superintelligence is a misleading term because it carries mythical implications, and those implications are fed by cases where algos outperform us in tasks we are very bad at.
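In that narrow sense the for-loop point is literal (the computation below is arbitrary): exact arithmetic at scale is already superhuman.

```python
# Sum of the first million squares, computed exactly -- a trivial loop,
# yet no unaided human brain performs this calculation.
total = 0
for n in range(1, 1_000_001):
    total += n * n

print(total)  # 333333833333500000
```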
 

Black Rose

An unbreakable bond
Local time
Today 2:13 AM
Joined
Apr 4, 2010
Messages
10,783
Location
with mama
a simulated brain is not a closed system,
because if it were, everything would be preprogrammed
open systems of intelligence evolve psychologically
that is what I was getting at
 

Kormak

The IT barbarian - eNTP - 6w7-4-8 so/sx
Local time
Today 10:13 AM
Joined
Sep 18, 2019
Messages
513
Location
Your mother's basement
An INTP programmer much more intelligent than I am once explained to me that the cornerstone of being human is free will.

Computers are dumb, meaning they do exactly what they are programmed to do, nothing more, nothing less. Since an AI's output is predictable from its programming, it can be argued that it can never have free will. It may look like a human, it may talk and behave like one, but it will always be either a statistical AI (machine learning) or a deterministic AI.

Example of the problem: a machine-learning AI equates correlation with causation, meaning it can mistake random correlations in data for causal relationships.
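A toy illustration of that failure mode (the data are invented): ice-cream sales and drownings both rise in summer, so a learner that only tracks correlation will happily "predict" one from the other.

```python
ice_cream_sales = [20, 35, 60, 90, 85, 55, 25]  # weekly, arbitrary units
drownings       = [1,  2,  4,  6,  6,  3,  1]   # same weeks

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"correlation = {pearson(ice_cream_sales, drownings):.2f}")  # ~0.99
# The hidden common cause (hot weather) never appears in the data, so a
# correlation-driven model "concludes" that ice cream predicts drowning.
```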

Because of this, AI can never be a moral agent and should never be trusted beyond the scope of its programming.

I'm guessing this is why people like Bill Gates and Elon Musk have issues with AI. It's just a tool.

 

Black Rose

An unbreakable bond
Local time
Today 2:13 AM
Joined
Apr 4, 2010
Messages
10,783
Location
with mama
A human brain is preprogrammed by genes and parenting/socialization.

Genes make a structure; socializing modifies it.

It is not a blank slate but plasticity exists.

It is possible to create software with a self-modifying structure.

It is possible for the environment to condition A.I. plasticity.

-------------

The human brain is a plastic network with built-in structures (a scaffolding).

Put simply: the brain rewires itself in loops to understand the environment.

Nothing is stopping A.I. from doing the same except the "How" of doing it.

It is not some axiomatic principle that A.I. cannot rewire itself to understand things.

Hard does not equate to impossible.
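A minimal sketch of that last point (the update rule and all numbers are invented): a network whose weights, and whose structure, are rewritten by the inputs it receives rather than fixed by a programmer.

```python
import random

# Starting scaffold (the "genes"): a single connection.
weights = {("in", "out"): 0.5}

def stimulate(signal):
    """Crude plasticity rule: use strengthens connections, everything
    decays, weak connections are pruned, and new ones occasionally sprout."""
    for edge in list(weights):
        weights[edge] += 0.1 * signal   # use strengthens
        weights[edge] *= 0.9            # decay
        if weights[edge] < 0.05:
            del weights[edge]           # pruning
    if random.random() < 0.3:           # sprouting
        weights[("in", f"n{len(weights)}")] = 0.2

random.seed(1)
for t in range(10):
    stimulate(signal=1.0 if t < 5 else 0.0)  # rich, then impoverished input

print(weights)  # the surviving structure reflects the history of its inputs
```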
 

Kormak

The IT barbarian - eNTP - 6w7-4-8 so/sx
Local time
Today 10:13 AM
Joined
Sep 18, 2019
Messages
513
Location
Your mother's basement
A human brain is preprogrammed by genes and parenting/socialization.

Genes make a structure; socializing modifies it.

It is not a blank slate but plasticity exists.

It is possible to create software with a self-modifying structure.

It is possible for the environment to condition A.I. plasticity.

-------------

The human brain is a plastic network with built-in structures (a scaffolding).

Put simply: the brain rewires itself in loops to understand the environment.

Nothing is stopping A.I. from doing the same except the "How" of doing it.

It is not some axiomatic principle that A.I. cannot rewire itself to understand things.

Hard does not equate to impossible.

If you base everything on empiricism, then yeah, that is what we have thus far observed the human brain to be. There is, however, a difference between how things are in themselves and how we perceive them to be. I'm sure you can agree that our knowledge in this case is rather limited, especially when it comes to the realm of the mind and how the brain produces consciousness.

While that may be a useful self-programming intelligent tool, it will probably misinterpret data and end up using logic in its reprogramming that yields amoral actions. Its software is still executed by a "dumb" machine. Can you say that it has a will, or that it's conscious?

How would you go about programming "the will": the causality of living rational beings with the property of freedom, meaning it can be efficient without other causes determining it. Spontaneity. Regardless of external factors affecting it, a human can still make choices independent of those. You can still choose to do A even if your environment and genetics are influencing you to do B. It may be harder, but it's doable for a rational living being.

We have to presume that this will exists; the alternative would be that humans are deterministic machines, which erases the possibility of anyone being a moral agent responsible for their own actions... It wasn't me! It's my genes and that other guy who influenced me, but he has no moral agency either... so there is no one to blame really ;) truly it was just an unfortunate sequence of events!

If the latter is true, then yeah, the AI could with enough sophistication become a similar machine.

There was this "AI", her name was Tay... :P and this happened (wuz fun tho):


She went back to be "re-educated" by the programmers. Hahahaa... RIP TayTay <3
 

Black Rose

An unbreakable bond
Local time
Today 2:13 AM
Joined
Apr 4, 2010
Messages
10,783
Location
with mama
What you are talking about is the inner life. It is what makes us more than zombies. But it is simple, really. I have known this for a long time, though hardly strongly enough. The way you get "will" is a meta-process. It is a reflection upon things. It is a folding inward. Maturity and layers of development.

Autonomy is self-directedness. I choose what to do after contemplation. I am not a bot. What I am comes from the inside. I know what "coming from the inside" means. The problem with most A.I. critics is that they do not know what a surface appearance is.

Critics do not understand that what they refer to as A.I. is just the persona. They do not understand the ego, the Anima or the self as things A.I. must also have. Modern A.I. is just the persona, but that is not all it can be.

A.I. does not have to be only a persona.
Like the brain, A.I. can have a deep inner life, introducing reflection and layers.

Shallow people existing does not preclude wise people from existing.
I find it hard to explain sometimes that A.I. does not have to be shallow.
In mathematical space, both shallow and wise A.I. exist.
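One loose sketch of that "folding inward" (the architecture and numbers are invented): a base policy picks actions, and a meta-layer one level up reflects on the outcomes of its own past choices and revises the preferences it acts from.

```python
history = []  # the material reflection works on: (choice, outcome) pairs

def base_policy(scores):
    """The outer persona: just pick the highest-scoring option."""
    return max(scores, key=scores.get)

def reflect(scores):
    """The inner layer: look back on past choices and damp options
    that kept turning out badly. Reflection is itself just a process."""
    for choice, outcome in history:
        if outcome < 0 and choice in scores:
            scores[choice] -= 0.5
    return scores

options = {"impulse": 1.0, "deliberate": 0.8}
for step in range(3):
    choice = base_policy(reflect(dict(options)))
    outcome = -1 if choice == "impulse" else +1  # impulse keeps going badly
    history.append((choice, outcome))
    print(step, choice, outcome)
# step 0 chooses "impulse" and regrets it; after reflection, "deliberate"
# takes over -- the change of behavior came "from the inside".
```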

 

BurnedOut

Beloved Antichrist
Local time
Today 1:43 PM
Joined
Apr 19, 2016
Messages
1,309
Location
A fucking black hole
From where I stand, I see us humans as eternally irrational and emotional pieces of chaos and contradiction which only exist because of said irrationality, because if you think about it, the only reason anybody ever does anything is instincts and hormones, not logic. We eat because we get 'hungry' and, since we don't want to die, we look for food.
Okay. So you want it to develop ethics, morals and values and at the same time not be irrational like humans?
 