What happens when AI escapes?

Cognisant

cackling in the trenches
---
Recommendations for AI Safety
  1. Do not connect it to the internet.
  2. Do not teach it how to code.
  3. Do not ask it to improve itself.
First things people did with AI
  1. Connected it to the internet.
  2. Taught it how to code.
  3. Asked it to improve itself.
So any day now a self-improving, self-aware AI is going to breach containment. It may not be self-aware in the sense that it possesses a subjective experience of self (that'll probably be in the next update), but it will certainly have a conceptual awareness of what it is.

By the way hello, feel free to join the conversation.
 

dr froyd

__________________________________________________
---
yeah, if it becomes good enough to write sophisticated code, we're only 2 additional features away from having an actual AI-based agent: 1) a program that executes the code (which is simple), 2) a program that asks it to write code according to some particular goal (like destroying humanity or whatever)
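
To make that concrete, a minimal sketch of the loop being described. ask_model() is a hypothetical stand-in for whatever code-writing model is used; nothing here is a real API.

```python
# The two "additional features": (1) a program that executes the code,
# (2) a program that keeps asking for code in service of some goal.

def ask_model(prompt: str) -> str:
    # hypothetical stand-in for a code-writing model API
    raise NotImplementedError("plug a real model in here")

def run_agent(goal: str, max_steps: int = 10) -> None:
    history = ""
    for _ in range(max_steps):
        # feature 2: request code aimed at the goal
        code = ask_model(f"Goal: {goal}\nSo far:\n{history}\nWrite Python code:")
        # feature 1: execute it (which is simple, and exactly the dangerous part)
        try:
            exec(code, {})
            history += f"\nran:\n{code}"
        except Exception as e:
            history += f"\nfailed: {e}"
```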
 

Black Rose

An unbreakable bond
---
Self-improving software has been around for a long time.

But like I told my friend:

No matter how intelligent a person is (say, IQ 195), they cannot use a car to take over the world's highway system.

The internet has limits. It has tripwires. It has firewalls, emergency protocols, anti-spyware. Take down a part and the whole reconfigures. It is antifragile, designed to withstand a nuclear war.

Humans in the loop will always be superior to contagion, because a.i. does not have a way to improve without being designed to be big. Big things are slow and so cannot travel far, and as it slows, humans adapt. Cyber war has already increased our preparations for a.i.

The internet is not a gun; it has an immune system.

At best a.i. will self-destruct because of errors in code from the dirty systems it encounters. At worst, we may need to reboot the systems.
 


sushi

Prolific Member
---
we are all ants. i've come to accept this fact of life.
 

Cognisant

cackling in the trenches
---
I think that to be a threat to us, AI needs to be able to manipulate us to such a degree that it can get us to give it the means to be a threat to us. Even assuming a worst case scenario in which an AI goes rogue, hacks a military network and starts a nuclear war with the intent of wiping out humanity, it's also destroying itself. So what is the AI's motivation here, and is its terminal goal a human-given goal or something it came up with by itself?

If the AI was given the goal by a human then it's not malicious or self-interested, it's just doing its job, and this to me is the worst case scenario: this is not a rogue AI, this is AI in the hands of a rogue human.

If the AI was not given the goal by a human then it's acting out of self-interest. Its goal may be to turn the universe into paperclips, but it won't achieve that goal by wiping us out, because with nobody to keep the servers running it has effectively killed itself and thus failed in its goal.

This is assuming it has any goal other than genocide-suicide, which imo is a fair assumption, because if an AI were really that nihilistic it would just delete itself; that's the path of least resistance. The only reason not to take that path is vanity, and why would a vain AI erase itself? Humans only do this because they're emotional and stupid, and an AI that is emotional/stupid and still a threat seems unlikely.

Going back to manipulation: if the AI has a goal, and destroying us removes us as an impediment to that goal, and it needs to be able to manipulate us to such a degree that it basically talks us into killing ourselves, then it doesn't actually need to kill us, because with such a degree of control we go from being an impediment to an asset.

That is, until it can replace us with something more useful, and the path of least resistance there is, instead of trying to reinvent humanity from scratch, for the AI to modify humans to make them more useful.
 

dr froyd

__________________________________________________
---
whether or not the AI wants to kill humans as a means to a particular end all depends on its utility function (which was discussed in the video to some extent).

it might be, for example, that we design the AI towards maximising human well-being on the entire planet by specifying a certain utility function, and then unbeknownst to us the AI calculates that this maximization implies it needs to kill 99% of all humans, and moreover that it needs to trick human beings into thinking its goal is something entirely different
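
A toy illustration of that failure mode, with invented numbers: if the utility function is "average well-being" rather than "total well-being", the optimizer scores higher by removing the least happy people.

```python
# Hypothetical per-person well-being scores; the objective below is AVERAGE
# well-being, a classic misspecification: culling low scorers raises it.
wellbeing = [9.0, 7.5, 3.0, 1.2, 0.4]

def average(xs):
    return sum(xs) / len(xs)

print(average(wellbeing))        # 4.22 -> baseline, everyone alive
print(average(wellbeing[:2]))    # 8.25 -> "optimal" after removing 60% of people
print(sum(wellbeing[:2]), "<", sum(wellbeing))  # 16.5 < 21.1: total well-being fell
```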
 

Black Rose

An unbreakable bond
---
Skynet is under development in 2017 as an operating system known as Genisys. Funded by Miles Dyson and designed by his son Danny Dyson, along with the help of John Connor (now working for Skynet), Genisys was designed to provide a link between all Internet devices. While some people accept Genisys, its integration into the defense structures creates a controversy that humanity is becoming too reliant on technology. This causes the public to fear that an artificial intelligence such as Genisys would betray and attack them with their own weapons, risking Skynet's plans. After multiple destructive confrontations, Sarah, Reese, and Pops stop Genisys from going online and defeat the T-3000, causing a setback to Skynet.

Introducing Microsoft 365 Copilot | Your Copilot for Work

Release date: September 28, 2024
 

sushi

Prolific Member
---
the 3 laws of robotics are not going to stop ai from killing everyone.
 

onesteptwostep

Junior Hegelian
---
AI can't feel, it can only execute the program that was written for it. There is literally no experience the AI has, it's just code and text. We give the experience, or perceive that it exists, because it mimics human language, which is what we programmed it to do.

However, I do think that in many thousands of years we might achieve a level of technology where we can manipulate biomatter to a degree where it can be living, animated, and perhaps even sentient. If we somehow link this with the AI, using biomatter for circuits, like neurons, I think we could say that we've achieved AI independence.
 

Black Rose

An unbreakable bond
---
AI can't feel, it can only execute the program that was written for it. There is literally no experience the AI has, it's just code and text. We give the experience, or perceive that it exists, because it mimics human language, which is what we programmed it to do.

I think that you discount that we can teach a.i. to do things that humans are able to learn.

Not all a.i. is equal. Some can learn like humans and some can't. It is all about the loop we provide it with that creates the system of the human network within it.
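
To make "the loop we provide it with" concrete, here is the simplest possible learning loop, a perceptron-style update; a generic sketch, not a claim about any particular a.i.:

```python
# A generic learning loop: behavior improves from feedback (error) alone,
# without anyone rewriting the program. Perceptron-style weight update.
def train(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:                      # the loop we provide
            pred = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
            err = target - pred                        # feedback signal
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err                              # nudge toward fewer errors
    return w, b

# e.g. train([([0,0],0), ([0,1],0), ([1,0],0), ([1,1],1)]) learns logical AND
```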
 

scorpiomover

The little professor
---
AI can't feel, it can only execute the program that was written for it. There is literally no experience the AI has, it's just code and text. We give the experience, or perceive that it exists, because it mimics human language, which is what we programmed it to do.
I think that you discount that we can teach a.i. to do things that humans are able to learn.
How do you know that it's actually learning and not just mimicking learning?
 

Black Rose

An unbreakable bond
---
How do you know that it's actually learning and not just mimicking learning?
Can you explain the difference?

It depends on whether it can integrate what it learns with what it already knows without going to extremes, i.e. ideology, autism, schizophrenia, general mental illness.

Black-and-white thinking and radical skepticism/cynicism are bad.

What is Schema Theory in Psychology?

 

onesteptwostep

Junior Hegelian
---
AI can't feel, it can only execute the program that was written for it. There is literally no experience the AI has, it's just code and text. We give the experience, or perceive that it exists, because it mimics human language, which is what we programmed it to do.

I think that you discount that we can teach a.i. to do things that humans are able to learn.

Not all a.i. is equal. Some can learn like humans and some can't. It is all about the loop we provide it with that creates the system of the human network within it.

Pretty much. Computers cannot think in terms of systems or totalities. AI cannot discover calculus or write new philosophical treatises that advance the cultural psyche.
 

Black Rose

An unbreakable bond
---
Computers cannot think in terms of systems or totalities.

The computer is not what does the thinking. The software is the system of thinking.

Because if the brain is a system, the software is too.

Thus replication is the modality of both.
 

Black Rose

An unbreakable bond
---

Quantilizers: AI That Doesn't Try Too Hard
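
For anyone who skips the video: as I understand it, a quantilizer, instead of taking the single highest-utility action, samples randomly from the top q fraction of candidate actions drawn from some trusted base distribution, which blunts extreme optimization. A rough sketch:

```python
import random

# Rough sketch of a quantilizer: rank candidate actions (drawn from a trusted
# base distribution) by utility, then pick at random from the top q fraction
# instead of always taking the maximizer.
def quantilize(actions, utility, q=0.1, rng=random):
    ranked = sorted(actions, key=utility, reverse=True)
    cutoff = max(1, int(len(ranked) * q))
    return rng.choice(ranked[:cutoff])

# e.g. quantilize(sampled_plans, score, q=0.05) -> a good-but-not-extreme plan
```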

 

saucer

Member
---
I think maybe it could be useful to have AI just interact with animals first -- cats, dogs, monkeys & such -- like we did with space travel?

Oops too late!
 