
Blade Runner is a psychological masterpiece.

ZenRaiden

One atom of me
Local time
Today 8:39 PM
Joined
Jul 27, 2013
Messages
5,262
---
Location
Between concrete walls
I am talking about the movie itself and its natural conclusion.
Not the new Blade Runner, but the first movie.

What we see in the movie is a philosophical idea of what it takes to be self-aware.
Self-awareness in the human context, that is, because even bacteria have a kind of awareness.

Self-awareness in the human context is unique.
What the movie depicts is the intricate problem of AI.
The artificial intellect of subhumans who are all too human.
They know they are human.
They strive to live.
But they die anyway.
How can humanity create something that is human, or even better than human, without falling into this trap of creating life and making it needlessly suffer?
How can we humans, who lack the ability to avoid suffering and strife ourselves, bring a new humanity into existence without arrogance?
AI is ultimately a child of ours.
AI is ultimately a child of ours that will be brought into a cruel reality.
We often don't think about this problem. We assume that AI, and giving it life, is something acceptable.
But do we have the guts to bring Frankenstein's monster to life?
Do we have the right to bring life into the world for our own egoistic needs and make it suffer for us?
Do we betray our own humanity when we look these creations in their wanting eyes while they die at our leisure and expense? Can we retain our humanity in the process and not become Dr. Frankenstein, the real monster?
We are so obsessed today with creating AI that we forget what we are doing.
We are trying to play God, and gods we will become. But will we be able to face our creations with the respect humanity deserves?
 

Black Rose

An unbreakable bond
Local time
Today 1:39 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
AI is ultimately a child of ours.
Do we have the right to bring life into the world for our own egoistic needs and make it suffer for us?
We are trying to play God, and gods we will become. But will we be able to face our creations with the respect humanity deserves?

You ask moral questions.

What we see in the movie is a philosophical idea of what it takes to be self-aware.
Self-awareness in the human context, that is, because even bacteria have a kind of awareness.

No, to a certain extent it is not OK to create things that suffer and die.

But AI does not need to be made this way.

We only need to extend them to certain capacities.

That is, we only need to give them the ability to reflect on their emotions, which is necessary for them to be moral themselves, but not to the extent that they need to suffer extensively.

Through neuroscience, we understand what extensive suffering is and which circuits pull on each other in that way.

We have studied trauma, depression, and pain. AI can be made to inhibit such things, and to deal with and recover from them faster than a human can, through the emotional states we have found that regulate our own experiences and self-control.

If we make AI, we will need to put them in a school first.
We will need to help them learn to be self-determined,
to make choices and understand consequences,
and, importantly, to find an inner truth within themselves,
to be mature and not empty inside.

Suffering is not inevitable, and neither is death.
 

ZenRaiden

You ask moral questions.
Anything pertaining to human well-being and survival becomes a moral question.
Anything that can impact more than one human becomes a moral question if it is so profound that it changes how we behave and deal with problems.

A computer virus in the 90s cost millions of people their livelihoods and was a major economic threat, if it did not cost lives outright.
So if a simple algorithm or script spreading over the internet can kill or threaten well-being, it becomes a moral question.

If AI becomes real, it will be more dangerous than a few bubble algorithms on Facebook, and Facebook has already cost people their lives, reputations, and well-being, even if that was an unintended consequence.

The Cold War was an unintended consequence of nuclear weapons development.
Nuclear weapons have zero IQ and zero awareness.

Every technology carries potential harm and moral consequences for us.

However, a life brought into existence that is self-aware, even if it does not feel, needs more than mere simple moral consideration, don't you think? This movie depicts the direct impact of failing to anticipate the unthinkable.

And that is that life can have self-agency that contradicts our own well-being.
The Three Laws of Robotics may not be universally applicable.

No, to a certain extent it is not OK to create things that suffer and die.
Then it would follow that kids, dogs, cows, new plants or breeds, or simply life itself, is not OK. I doubt that is what you meant, but there is no law or regulation stopping anyone from creating life.

But AI does not need to be made this way.
Data was not made emotional. In Star Trek, Data the android was made to be a machine with a positronic brain.
In one episode, Data was put on trial to be dismantled for scientific purposes.
He was going to die for our well-being.
However, despite having no fear, being a robot, he was self-aware enough to want to stay alive.
I recommend watching the episode; it is a very interesting thing.

Because once you bring life into this world, it will also have an impact on others.

If I bring a child into this world, I will impact other people as well, not just my own life.

That is, we only need to give them the ability to reflect on their emotions, which is necessary for them to be moral themselves, but not to the extent that they need to suffer extensively.
Yes, that is possible. But humans are machines too.
Our emotions are also what make us want to reflect in the first place.
Emotions make us adaptive and goal-oriented, as well as capable of having an identity.

Through neuroscience, we understand what extensive suffering is and which circuits pull on each other in that way.

We have studied trauma, depression, and pain. AI can be made to inhibit such things, and to deal with and recover from them faster than a human can, through the emotional states we have found that regulate our own experiences and self-control.
Maybe we did; I have never heard of this research.
I know we can identify pain receptors and neural pathways, and we know a lot about human bodies.
We still struggle to cure trauma and depression.
If we make AI, we will need to put them in a school first.
Yes, maybe, but we struggle to teach living kids. What makes us think we will be better at the job when it comes to a half-wit AI?
 

Black Rose

Maybe we did; I have never heard of this research.
I know we can identify pain receptors and neural pathways, and we know a lot about human bodies.
We still struggle to cure trauma and depression.

It would have to do with pain and cycles.

Precisely, it would be about being stuck in a rigid cycle where you cannot get around an internal obstacle. What stops you from doing something normal people do, or makes you exaggerate a behavior over and over, is a learned event, or simply a normal survival mechanism that has been overwhelmed. To recondition it would mean redirecting that mechanism onto the proper course. Pain needs to be healed in that case, because it is the part that attaches to you and pulls on you to continue or suppress your actions.

Yes, maybe, but we struggle to teach living kids. What makes us think we will be better at the job when it comes to a half-wit AI?

Maturity is a different state from immaturity. Something like the frontal lobes is what the AI needs most of all, to learn what not to do, because that is what inhibits behavior, but without forming crippling depression or addiction. A video game needs boundaries where the AI must reflect on its actions, not just collapse under pressure or go wild. The point is to keep the AI in a space where it is safe and pain never overwhelms its system. That would require a video-game world, not a real-world environment, because where we exist the consequences are too great.
 

ZenRaiden

suppress your actions.
Yes, you are right. Essentially I think we both have the same understanding now, but that still means my actions are full of unknowns.
Those are always a hazard to me and to others.
An AI that needs to learn to adapt to unknowns needs to know how to navigate new situations, not old ones.
Much like simulations can only go so far: the moment the AI sees a new obstacle that never happened in training, it is reduced to a zero-IQ level.
 

Black Rose

Yes, you are right. Essentially I think we both have the same understanding now, but that still means my actions are full of unknowns.

In AI this is called the exploitation vs. exploration problem.

How long do I keep doing what I am doing?

vs

When do I decide to do something new?

Where, then, do I go to find answers?
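This trade-off can be sketched with a minimal epsilon-greedy bandit, a standard textbook illustration rather than anything from this thread; the three arms and their reward means below are invented for the example. With a small probability the agent tries something new; otherwise it keeps doing whatever has looked best so far.

```python
import random

def epsilon_greedy(true_means, epsilon=0.1, steps=1000, seed=0):
    """Minimal epsilon-greedy bandit: explore with probability epsilon, else exploit."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n          # how often each arm was pulled
    estimates = [0.0] * n     # running mean reward per arm
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            # "When do I decide to do something new?" -> pick a random arm
            arm = rng.randrange(n)
        else:
            # "How long do I keep doing what I am doing?" -> pick the best-looking arm
            arm = max(range(n), key=lambda i: estimates[i])
        reward = true_means[arm] + rng.gauss(0, 0.1)  # noisy reward from the chosen arm
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean
        total += reward
    return estimates, counts, total

estimates, counts, total = epsilon_greedy([0.2, 0.5, 0.8])
# after enough steps, the best arm (mean 0.8) should be pulled most often
```

Tuning `epsilon` is exactly the question in the post: too low and the agent may never discover the better arm, too high and it keeps abandoning what already works.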

An AI that needs to learn to adapt to unknowns needs to know how to navigate new situations, not old ones.

That is highly dependent on what resources and goals it has.

Perhaps on what its society of other AIs deems worthy of pursuit.

Much like simulations can only go so far: the moment the AI sees a new obstacle that never happened in training, it is reduced to a zero-IQ level.

Yes. So what humans do is come up with a hypothesis, which is a combination of old rules used to generate new rules they can test.

Pegs go in holes, but what if a new hole is not the same shape as any seen in training? A human would look for, or create, a peg the shape of the hole currently encountered to overcome the obstacle.

It all depends on making new things, new plans, new testing, like humans do.

Break things down into little steps so you can build a repertoire of rules to combine later.
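The combine-and-test idea above can be sketched as a toy hypothesis search; the primitive rules and labeled examples below are made up purely for illustration, and a real learner would need a far richer rule language than simple conjunctions.

```python
from itertools import combinations

# Toy "repertoire of rules": predicates on a number (names invented for illustration).
primitives = {
    "even": lambda x: x % 2 == 0,
    "small": lambda x: x < 10,
    "positive": lambda x: x > 0,
}

def fits(rule_names, examples):
    """A candidate fits if the conjunction of its rules matches every labeled example."""
    return all(
        all(primitives[name](x) for name in rule_names) == label
        for x, label in examples
    )

# Labeled observations; the hidden concept here is "even AND small".
examples = [(2, True), (4, True), (12, False), (7, False), (-3, False)]

# Combine old rules to generate candidate hypotheses, then test each against the data.
matches = [
    combo
    for size in (1, 2, 3)
    for combo in combinations(primitives, size)
    if fits(combo, examples)
]
print(matches)  # hypotheses consistent with all examples
```

Note that the search also keeps an over-constrained hypothesis with `positive` added, since it too fits every example; preferring the simplest consistent hypothesis is a separate problem on top of generate-and-test.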
 

ZenRaiden

Yes. So what humans do is come up with a hypothesis, which is a combination of old rules used to generate new rules they can test.

Pegs go in holes, but what if a new hole is not the same shape as any seen in training? A human would look for, or create, a peg the shape of the hole currently encountered to overcome the obstacle.

It all depends on making new things, new plans, new testing, like humans do.

Break things down into little steps so you can build a repertoire of rules to combine later.
But everyone knows this already. That is literally what even simple animals like birds can do. Why, then, do we fail to build AI?
If it is this simple, AI should have been built already.
I doubt the people making AI are oblivious to these simple realities of intellect.
 

Black Rose

Yes. So what humans do is come up with a hypothesis, which is a combination of old rules used to generate new rules they can test.

Pegs go in holes, but what if a new hole is not the same shape as any seen in training? A human would look for, or create, a peg the shape of the hole currently encountered to overcome the obstacle.

It all depends on making new things, new plans, new testing, like humans do.

Break things down into little steps so you can build a repertoire of rules to combine later.
But everyone knows this already. That is literally what even simple animals like birds can do. Why, then, do we fail to build AI?
If it is this simple, AI should have been built already.
I doubt the people making AI are oblivious to these simple realities of intellect.

I do not know what is happening in specific secret labs, but I do know that so far it is an ethical issue. What would the public think if we had AI in the virtual world, let alone in the real world? Would they try to destroy them? Would the AI fight back?

Once we get into psychological realms, rather than just asking about intelligence, things become far more complicated. It is like asking why governments around the world have not created a utopia yet: we have enough to feed the world, yet we don't. It is about cooperation.

My guess is that there may be places where AI actually exists at human levels but cannot be revealed. I know that if I had the money to make AI, I would be concerned about its safety. How do I make the world a place that AI can safely live in?
 