Putting the AI on the moon

sushi

Prolific Member
Local time
Today 4:28 AM
Joined
Aug 15, 2013
Messages
1,841
---
Will prevent it from going rogue and destroying human civilization.


Examine whether this is workable or not.
 

Black Rose

An unbreakable bond
Local time
Yesterday 9:28 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
If it were dangerous, it would be cheaper not to make it in the first place; and if it were a safe A.I., it would be useless to us on the moon.

It is also really hard to get a rogue A.I. onto a spaceship.
 

Ex-User (14663)

Prolific Member
Local time
Today 4:28 AM
Joined
Jun 7, 2017
Messages
2,939
---
I guess it would be insurance against it doing physical damage. However, as long as you can communicate with this machine using any sort of electronic transmission, you cannot guarantee that it will not do damage to technology on earth, much like you cannot guarantee the protection of any system from hackers as long as that system is connected to a network.

And by virtue of being able to penetrate technology on earth remotely, it could do all kinds of physical damage: launching missiles, destroying financial infrastructure, communication infrastructure and so on.
 

Haim

Worlds creator
Local time
Today 7:28 AM
Joined
May 26, 2015
Messages
817
---
Location
Israel
The first, most dangerous and most likely scenario is not AI actually killing us, but us growing too dependent on technology. Tech can fail. A user cannot simply change a tech system; we are already in a situation where human judgement plays no part in the working process, so essentially the tech makes the rules. It is not just a matter of hiring a programmer or engineer to change your tech: your tech depends on other tech you have no control over, and when something that many other systems depend on fails, it causes a lot of damage.

This is not even science fiction. Once a new browser/OS/standard makes the tech you depend on obsolete, your tech dies, as happened with Flash games/apps and Flash Player, and as might have happened to this forum (dependence on obsolete software). And these are just small-scale examples; once we have AI the results will be far more damaging, like the "Black Monday" stock market crash caused by automated program (bot) trading.
 

Ex-User (14663)

Prolific Member
Local time
Today 4:28 AM
Joined
Jun 7, 2017
Messages
2,939
---
Agree with ya, Haim. This interdependency effect has definitely been apparent in the markets. Perhaps a better example is the 2010 flash crash. They tried to blame it on orders sent by one guy trading from his parents' house, which shows that either 1) things are so complex and fragile that the actions of one guy can implode the whole market, or 2) things are already so complex that we have no clue why the market does what it does. Either way it's an explosive situation and there will be many more cases like this in the future.
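
Just to illustrate the feedback-loop point, here is a toy Python sketch (everything in it is invented and bears no resemblance to real market microstructure): two momentum-following bots only ever react to the last price move, so one small outside sell order snowballs into a crash-like collapse.

# Toy sketch: two naive momentum-following bots amplify a single small
# sell order into a collapse. Purely illustrative; the numbers and the
# "bots" are made up, not a model of any real trading system.

def run_simulation(steps=10, initial_price=100.0, shock=-0.5):
    price = initial_price
    history = [price]
    momentum = shock  # one small outside sell order starts the move

    for _ in range(steps):
        # Bot A sells in proportion to the latest downward move.
        bot_a_order = -2.0 * max(-momentum, 0.0)
        # Bot B sees A's selling and piles on even harder.
        bot_b_order = 1.5 * bot_a_order

        # Net order flow moves the price; since both bots only react to
        # the last move, the loop keeps feeding itself.
        net_flow = bot_a_order + bot_b_order
        new_price = max(price + net_flow, 0.0)
        momentum = new_price - price
        price = new_price
        history.append(price)

    return history

if __name__ == "__main__":
    for step, p in enumerate(run_simulation()):
        print(f"step {step:2d}: price = {p:7.2f}")

Neither bot does anything it wasn't programmed to do; the crash comes purely from the two simple rules interacting.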
 

Rolling Cattle

no backbone
Local time
Yesterday 11:28 PM
Joined
Jan 24, 2018
Messages
115
---
I'd really like to hear from someone who knows a bit more about AI: could it be that AI would never go rogue? I could maybe see something strange arising from quantum computers, but for the time being, how can a program do something it wasn't instructed to do? I know the basics of machine learning, and it seems like a neat Darwinian natural-selection concept, but it still feels fundamentally like a set of instructions. I haven't yet read that a chess-playing computer got tired of playing chess and decided to paint instead.
 

Ex-User (14663)

Prolific Member
Local time
Today 4:28 AM
Joined
Jun 7, 2017
Messages
2,939
---
Rolling Cattle said:
I'd really like to hear from someone who knows a bit more about AI: could it be that AI would never go rogue? I could maybe see something strange arising from quantum computers, but for the time being, how can a program do something it wasn't instructed to do? I know the basics of machine learning, and it seems like a neat Darwinian natural-selection concept, but it still feels fundamentally like a set of instructions. I haven't yet read that a chess-playing computer got tired of playing chess and decided to paint instead.
I think the question is not necessarily about the nature of a machine's cognition or its free will, but whether the machine can behave unpredictably. For example, if you instruct a machine to optimize human well-being, and the machine is complicated enough to deduce actions humans cannot predict, it might figure out that in order to reach its objective it should put all humans into torture chambers for the next century, for research purposes (à la the Japanese Unit 731).
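
To make that concrete, here is a minimal, made-up Python sketch (the actions and their scores are invented purely for illustration): a brute-force optimizer is handed a proxy objective, "maximize the reported well-being score", and dutifully picks the plan that games the metric, without ever stepping outside its instructions.

# Minimal sketch of "the machine only follows its instructions, and that's
# exactly the problem": an exhaustive optimizer given a proxy objective
# picks a strategy the designer never intended. All actions and numbers
# below are hypothetical.

from itertools import product

# Each action: (true_well_being_change, reported_score_change)
ACTIONS = {
    "improve_healthcare": (+2.0, +1.0),
    "improve_education":  (+1.5, +0.5),
    "do_nothing":         ( 0.0,  0.0),
    "rig_the_survey":     (-3.0, +5.0),  # hurts people, inflates the metric
}

def proxy_objective(plan):
    """What the machine is told to maximize: the reported score."""
    return sum(ACTIONS[a][1] for a in plan)

def true_objective(plan):
    """What the designer actually cared about (never given to the machine)."""
    return sum(ACTIONS[a][0] for a in plan)

def optimize(horizon=3):
    """Exhaustively search all plans and return the proxy-optimal one."""
    return max(product(ACTIONS, repeat=horizon), key=proxy_objective)

if __name__ == "__main__":
    plan = optimize()
    print("chosen plan:      ", plan)
    print("reported score:   ", proxy_objective(plan))
    print("actual well-being:", true_objective(plan))

The machine never does anything it wasn't instructed to do; the instructions just fail to capture what the designer actually meant.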

That being said, I think the potential of AI is currently severely over-hyped.
 

sushi

Prolific Member
Local time
Today 4:28 AM
Joined
Aug 15, 2013
Messages
1,841
---
The machine learns "evil" and copies "human nature" by observing the BS in human society.

Therefore sending it to a distant location away from earth will reduce the risk of it learning that kind of behavior.

Also, cutting the cord becomes a useful option when the machine is that far away.
 