
AI doesn't exist, but it will ruin everything anyways

Black Rose

An unbreakable bond

AI doesn't exist

As explained by a physicist.

This is why cross-domain research is extremely hard.

When people go outside their field of expertise, they do not know what they are talking about.

It takes a huge effort to become cross-disciplinary, which she is not.

A.I. moved past the black-box phase a long time ago.

A simple diagram of a.i. that can self-correct:

She said: explain it to me like I am 5 years old.

[Image: B4atsNr.png]
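In case the image does not load, here is the same idea as a toy sketch in Python (my own simplification, not any real system): act, measure the error, correct, and keep a trace so every correction can be inspected.

```python
# Toy sketch of a self-correcting loop (my own simplification, not any
# real architecture): act, measure the error, correct, repeat.

def self_correct(target, guess=0.0, rate=0.5, tolerance=1e-3):
    corrections = []                  # trace of every correction made
    while abs(target - guess) > tolerance:
        error = target - guess        # how wrong the current output is
        guess += rate * error         # adjust in proportion to the error
        corrections.append(error)
    return guess, corrections

value, trace = self_correct(target=3.0)
print(f"converged to {value:.4f} after {len(trace)} corrections")
```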
 


ZenRaiden

One atom of me
Cite one person in AI who says AI exists to begin with.
 

ZenRaiden

One atom of me
Would they even tell you?
No, they would not, if it's secret.
But a theory is not AI.
AI is either a machine that solves problems or it's not.
I have seen machines, but no AI yet.
Maybe we ought to define what AI is first.
 

Black Rose

An unbreakable bond

I think that requires us to understand intelligence.

There is general intelligence: getting a result and accomplishing a goal.

And then there is self-intelligence: knowing what to do in relation to social aspects and self-direction - morals, life path.

A.I. would be a computer that can do the first; ideally it could do both.

Example one: play and win a video game.

Example two: choose which game to play.

The second example requires preferences, which make one game get chosen over another. I believe it is about finding the game that helps you learn the most.

The first example is a hard start, but doable from what I have learned about intelligence.
 

ZenRaiden

One atom of me
Yes.
AI is not problem solving.
My computer can solve things for me.
My calculator can solve problems for me.
Even pen and pencil can solve problems for me.
That is not AI.
AI is the part that is controlling the computer, ergo me or you.
We are AI. We solve problems.
I did not know anything about morals, or social aspects, or life path when I was born.
Those are just 3 categories out of the millions of categories humans can think about at any given time of day.
None of them necessarily means it's AI.
The Mongols had different morals from me.
That does not make them less intelligent.

An octopus is AI. It does not have a life path or a moral compass or complex social dynamics, but it can survive, adapt, learn, relearn, copulate, and behave in a variety of ways.
So what you are talking about is the illusion of AI. I am talking about AI.
AI does not have to have anything humans have, but it has to be able to solve problems in some way.
AI usually means making a machine, much like the human brain, that can solve problems.
We don't know how to get that. Unless you are Gman.
 

Black Rose

An unbreakable bond
AI does not have to have anything humans have, but it has to be able to solve problems in some way.

I understand; many a.i. systems can do exactly this.

AI usually means making a machine, much like the human brain, that can solve problems.
We don't know how to get that. Unless you are Gman.

Or you have a high enough IQ to make one yourself.

I only have theories, but then we must start with an idea first.
 

EndogenousRebel

Even a mean person is trying their best, right?
This is the solution to the black box:

This system can look inside itself to tell you (explain) how it generates its answers through inspection.

We must not have the same definition of black box.

The brain is a black box for example.

Just because you can make a diagram and approximate processes that go on in the brain, does not mean that you have insight into how electrical impulses interface with mental events and qualia.

The very diagram you put up shows hidden layers.

These are just visualizations. Analogies.

those people are in secret gov programs

would they even tell you?

There is this idea that "the only way three people can keep a secret is if two of them are dead." Something like that.

Sure, maybe there are only a couple of people who have high enough clearance to understand the scope of entire projects.

But the point is, when you're talking about something as complex as actual artificial general intelligence, you're going to need a pretty big team.

We would hear something about it. Imo it's safe to say that the corporate sector is ahead of the military in that regard.
 

ZenRaiden

One atom of me
We would hear something about it. Imo it's safe to say that the corporate sector is ahead of the military in that regard.
People make assumptions on this topic about what is secret and so on.
It's not hard to keep things secret if people are committed to keeping it that way.
The thing about the term secret is, *gasp*, it's secret. So it's not secret if people know about it.
 

Black Rose

An unbreakable bond
This is the solution to the black box:

This system can look inside itself to tell you (explain) how it generates its answers through inspection.

We must not have the same definition of black box.

The brain is a black box for example.

Just because you can make a diagram and approximate processes that go on in the brain, does not mean that you have insight into how electrical impulses interface with mental events and qualia.

The very diagram you put up shows hidden layers.

These are just visualizations. Analogies.

You do not understand what inspection and explainability are.

The point of it is to see where the problem is in the system as it happens.

With black boxes you can't trace the problem or know why it happened.

Research has been done on explainability for a long time, by people who don't make clown videos on YouTube.
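To make the difference concrete, a toy sketch (my own, not any particular library): an inspectable system records which rule produced each answer, so when the output is wrong you can trace it back to the exact rule that fired.

```python
# Toy sketch of inspectability (my own, not a real library): every answer
# comes with the rule that produced it, so a bad output can be traced.

RULES = [
    ("negative", lambda x: x < 0),
    ("small",    lambda x: x < 10),
    ("large",    lambda x: x >= 10),
]

def classify(x):
    for name, test in RULES:
        if test(x):
            return name, f"rule '{name}' matched input {x}"
    return "unknown", f"no rule matched input {x}"

label, explanation = classify(-5)
print(label, "|", explanation)   # negative | rule 'negative' matched input -5
```

A black box gives you only the label; an inspectable system gives you the second half too.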

those people are in secret gov programs

would they even tell you?

There is this idea that "the only way three people can keep a secret is if two of them are dead." Something like that.

Sure, maybe there are only a couple of people who have high enough clearance to understand the scope of entire projects.

But the point is, when you're talking about something as complex as actual artificial general intelligence, you're going to need a pretty big team.

We would hear something about it. Imo it's safe to say that the corporate sector is ahead of the military in that regard.

Who said it was the military?

Many gov intel orgs have facilities we do not know about.

This has to do with cybernetic maths, so such things would be done in places people cannot get to even if they tried, for national security reasons. Firewalls and all that.

You might not know, but Siri was invented in 1993, before it was bought by Apple in 2010.
 

EndogenousRebel

Even a mean person is trying their best, right?
You do not understand what inspection and explainability are.

The point of it is to see where the problem is in the system as it happens.

With black boxes you can't trace the problem.
If you HAVE to reason through induction to figure out what something is doing, it's probably because the complexity of the system is too great.

It's not that black boxes make it impossible to trace a problem. It's that there are no observable axioms consistent enough to apply in a deduction.

Usually, if you can deduce the process, it's just a very complex algorithm, and it's doubtful that machine learning is even involved.

I remember when that's what everyone called them. Algorithms.

Tell me where and why I'm wrong here.

Who said it was the military?

Many gov intel orgs have facilities we do not know about.

This has to do with cybernetic maths, so such things would be done in places people cannot get to even if they tried, for national security reasons. Firewalls and all that.

Well, their biggest source of high-skilled labor is postgraduate students in their mid-20s. I guess graduates with PhDs too, but c'mon, why work for the government when you get paid more by corporate America?

The military, however, has led the way in most technologies. I thought that was just an implicit assumption, since... you know... we have something like 70% of our taxes going to the military.
 

Black Rose

An unbreakable bond
You do not understand what inspection and explainability are.

The point of it is to see where the problem is in the system as it happens.

With black boxes you can't trace the problem.
If you HAVE to reason through induction to figure out what something is doing, it's probably because the complexity of the system is too great.

It's not that black boxes make it impossible to trace a problem. It's that there are no observable axioms consistent enough to apply in a deduction.

Usually, if you can deduce the process, it's just a very complex algorithm, and it's doubtful that machine learning is even involved.

I remember when that's what everyone called them. Algorithms.

Tell me where and why I'm wrong here.

Why can't induction and deduction work together?

The point of explainability is to make sure systems more intelligent than humans do not turn on us. So it is required.

There have been such systems, or at least the maths that can tell you how a program has come to its conclusions. It is possible because, just as computers do not crash merely because one bug in one file of one program has a flipped bit, there are methods for compiling the data in the machine and giving evidence as to why one occurrence would lead to one result or another.

Here is my best understanding:

Attention starts at the top of the system hierarchy. This attention can go into many nested sub-recurrences and then come back to the core node of the system.

Once the root system has run many searches inside the recurrent nested hierarchies, the results are classified according to their distance from each other.

This creates many levels where pieces of evidence correlate, which is why the system can form theories about what it knows and why each layer can be explained in its own section.

[Image: YCPzsbX.png]
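Since the image may not load, here is the idea as a toy sketch in Python (my own illustration, not a real architecture): attention starts at the root, descends into nested sub-nodes, and every visit is logged, so the path of attention can be read back afterwards.

```python
# Toy sketch of hierarchical attention with a readable trace (my own
# illustration, not a real architecture).

def attend(node, trace, depth=0):
    trace.append((depth, node["name"]))          # log where attention went
    scores = [attend(child, trace, depth + 1)
              for child in node.get("children", [])]
    return node.get("score", 0) + sum(scores)    # aggregate back to the root

tree = {"name": "root", "children": [
    {"name": "vision", "score": 2, "children": [{"name": "edges", "score": 1}]},
    {"name": "memory", "score": 3},
]}

trace = []
total = attend(tree, trace)
for depth, name in trace:
    print("  " * depth + name)                   # the full attention path
print("aggregate score:", total)
```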


Who said it was the military?

Many gov intel orgs have facilities we do not know about.

This has to do with cybernetic maths, so such things would be done in places people cannot get to even if they tried, for national security reasons. Firewalls and all that.

Well, their biggest source of high-skilled labor is postgraduate students in their mid-20s. I guess graduates with PhDs too, but c'mon, why work for the government when you get paid more by corporate America?

The military, however, has led the way in most technologies. I thought that was just an implicit assumption, since... you know... we have something like 70% of our taxes going to the military.

I guess so, but then corporate America, using a.i. to make money, would not want general a.i., now would they? The ethical stakes are stupidly high on so many levels.

What I am saying is that general a.i. would come from people who, first, know the maths and, second, have the resources. If you have general a.i., that is a national security risk, and people may visit you.
 

ZenRaiden

One atom of me
The only reason I would argue the government does not have AI is that they are incompetent people, as far as I can tell.
With the exception of things like Bell Labs and the Manhattan Project and such, where the government put huge amounts of money into manufacturing things.
But in those projects their competency stemmed from finding intelligent people to work on them. Intelligent people with exceptional abilities.
Trouble is, with the reputation governments have, intelligent people tend to avoid those types of things to begin with.
If the government cannot convince dummies like me, I doubt anyone with a higher IQ can be convinced.
I'd guess people with higher IQs would probably end up doing projects either funded among themselves or through companies like Google.
But Google has existed for a very short time; I doubt they have AI.
 

dr froyd

we are as far from having machines turn on us as when we invented the bread toaster

that woman is correct, of course. What people call AI is machine learning, and machine learning is not AI

but relax, folks, it doesn't mean we can't have holographic girlfriends
 

Black Rose

An unbreakable bond
we are as far from having machines turn on us as when we invented the bread toaster

As long as we take precautions the toaster uprising will not happen.

that woman is correct, of course. What people call AI is machine learning, and machine learning is not AI

Certainly, but the math is there for fully human-level a.i. and beyond.

but relax, folks, it doesn't mean we can't have holographic girlfriends

All you need is some kind of general a.i. system and a 3d model put into a display.

Hello Apple Vision Pro

 

scorpiomover

The little professor
we are as far from having machines turn on us as when we invented the bread toaster
As long as we take precautions the toaster uprising will not happen.
Do you remember when there was a rash of cases of Pop-Tarts so boiling-hot that they flew out of people's toasters and hit them in the face?
 

scorpiomover

The little professor
This is the solution to the black box:

This system can look inside itself to tell you (explain) how it generates its answers through inspection.
Why not just get the AI to solve the black-box problem for you?

Ask the AI to explain how it generates the answer.

You could even give it a general rule: "Whenever I ask you a question, remember to give me the answer, AND the explanation of the answer, AND how you generated the answer."
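As a rough sketch of how that rule might be wired in (the `ask_model` function here is a hypothetical stand-in for whatever chat interface you actually use):

```python
# Rough sketch of the "always explain yourself" rule. `ask_model` is a
# hypothetical stand-in for whatever chat interface you actually use.

EXPLAIN_RULE = (
    "Whenever I ask you a question, remember to give me the answer, AND "
    "the explanation of the answer, AND how you generated the answer."
)

def ask_with_explanation(ask_model, question):
    # Prepend the standing rule to every question sent to the model.
    return ask_model(f"{EXPLAIN_RULE}\n\nQuestion: {question}")

# Example with a dummy model that just echoes its prompt:
print(ask_with_explanation(lambda prompt: prompt, "Why is the sky blue?"))
```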

Says he's one of the 10 most widely cited scientists in the world. But he only does neuroscience, which is incredibly niche. Like saying that the most popular singer in the world sings music that only 0.00001% of the world listens to. Bizarre.

Maybe we should ask AI about that too?

I had a look at the article. Incredibly complex maths. You'd need either Chris Langan or an AI to understand it and explain it to you.

Maybe that's the solution. Get AI to solve all the problems with AI. Brilliant, eh?

I don't know what is happening in that diagram. Looks like Bohm's Hidden Variable theory to me. But for some reason, I feel like the picture is very appealing to me. So I'm keeping it in.
 

Black Rose

An unbreakable bond
This is the solution to the black box:

This system can look inside itself to tell you (explain) how it generates its answers through inspection.
Why not just get the AI to solve the black-box problem for you?

Ask the AI to explain how it generates the answer.

You could even give it a general rule: "Whenever I ask you a question, remember to give me the answer, AND the explanation of the answer, AND how you generated the answer."

Setting up the system, as far as the maths goes, is not the problem.

The problem is what problems a person is trying to solve with the system.

I have ideas of what to do with the data I have, but it becomes harder to get answers when I do not know what I am looking for.

I understand what would be involved in the system explaining itself, I just have no problems at the moment requiring me to implement such a system.

Says he's one of the 10 most widely cited scientists in the world. But he only does neuroscience, which is incredibly niche. Like saying that the most popular singer in the world sings music that only 0.00001% of the world listens to. Bizarre.

Maybe we should ask AI about that too?

His insight was that a system needs to save energy.

So, to save energy, it anticipates when it will need to use energy and when it will not.
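A very loose toy version of that anticipation idea (my own simplification, nowhere near Friston's actual maths): the system keeps a running prediction and only spends energy when the prediction is surprised.

```python
# Very loose toy of the anticipation idea (my simplification, nowhere
# near Friston's actual maths): keep a running prediction and only
# spend "energy" when the prediction is surprised.

def run(signal, threshold=0.5, lr=0.3):
    prediction, energy_spent = 0.0, 0
    for observation in signal:
        surprise = abs(observation - prediction)
        if surprise > threshold:      # prediction failed: pay to update it
            prediction += lr * (observation - prediction)
            energy_spent += 1
        # otherwise: coast on the existing prediction, spending nothing
    return prediction, energy_spent

steady = [1.0] * 20                   # predictable input: cheap to track
noisy = [1.0, 5.0] * 10               # surprising input: expensive to track
for name, s in [("steady", steady), ("noisy", noisy)]:
    p, e = run(s)
    print(f"{name}: final prediction {p:.2f}, energy spent {e}")
```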

I had a look at the article. Incredibly complex maths. You'd need either Chris Langan or an AI to understand it and explain it to you.

Maybe that's the solution. Get AI to solve all the problems with AI. Brilliant, eh?

I estimate that Friston is no more than IQ 145, so anyone who wrote the article on the wiki is about that high at most. Meaning that, no, you do not need to be Chris Langan to understand it. Chris Langan is IQ 175 at most.

I don't know what is happening in that diagram. Looks like Bohm's Hidden Variable theory to me. But for some reason, I feel like the picture is very appealing to me. So I'm keeping it in.

It is doing attentional networking.

That is the explainability of it.

If we know where the a.i. system is looking, we can know what it is doing intentionally.
 

Cognisant

cackling in the trenches
Black Rose nailed it at the start: physicists are obnoxious know-it-alls who aren't actually sure of anything, even in their own field of study. I would say they know about as much about computer science as software engineers know about theoretical physics, but again, physicists aren't even certain about their own field.

we are as far from having machines turn on us as when we invented the bread toaster

that woman is correct, of course. What people call AI is machine learning, and machine learning is not AI

but relax, folks, it doesn't mean we can't have holographic girlfriends
This too is true. The layman's concept of AI is basically a genie in a box straight out of science fiction, whereas in reality that's like saying a jet airplane is a machine, therefore all machines must be able to fly, and if it can't fly then it's not a machine, no matter how obviously mechanical it may be. Which is obviously absurd.

So it's not that machine learning isn't AI; it's that the term "AI" has been so hopelessly lost to the unwashed masses of braying ignoramuses that we're better off making a distinction between AI and machine learning just to get them to fuck off, for however briefly that lasts.

Eventually "learning machine" will become the new "AI" and a new moniker will need to be invented.
 

Haim

Worlds creator
Is this true?

Yikes.

No, I did not read anything that says the dog-like robots shoot people without human command. The only autonomous thing for some of them is navigation.
Also note that they are mostly used in tunnels.

As for the question of what an AGI agent is:
First, let's say what it is not: a statistical method (machine learning / DNN) in which you use data to create a function (like y = x + 1).
An intelligent agent needs to learn in real time, and the learning needs to be significant, not just small output changes.
It needs intuition, meaning the ability to predict. This is achieved by ChatGPT, which predicts the next word, and AlphaGo, which predicts the next move. Intuition alone is not sufficient, as intuition can be false; you need a means to correct it, so it is better used as input for another mechanism (such as AlphaGo's Monte Carlo search). A toy sketch of this point follows at the end of this post.
It needs to achieve goals and be able to make sub-goals (sub neural networks with different reward functions), with bonus points for making its own goals (reward functions).
It needs some understanding / self-awareness of what it is doing. Say I tell it to kill all the swans in the world, and at some point a swan is born with a mutation that makes it black; an AI without understanding would not kill it. Note that the examples of a NN that explains itself are not that: such a NN is just a function with an output of an explanation; it is not actually doing that while solving the problem. It is like an art critic, not the artist: one that speaks about the artist's intention and the artist's harsh childhood, while the artist was actually thinking that throwing paint on things is fun.
It needs memory. There is some progress in that area, but it is not there yet. There are NNs that use themselves as memory, and there is one that uses regular computer memory (such as conversation text). For memory that functions like human memory, I think we need memory stored in sub neural networks and some system to manage it. An AGI agent will also need a means to correct the memory and keep it useful.
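A toy sketch of the intuition-plus-correction point above (my own illustration, nothing like the real AlphaGo): a fast guess proposes an answer, and a slower check corrects it when the guess is wrong.

```python
# Toy sketch of "intuition plus a means to correct it" (my illustration,
# nothing like the real AlphaGo): a cheap guess proposes, a slow check
# corrects.

def intuition(options):
    return options[0]                    # fast guess: just take the first

def verify(options, score):
    return max(options, key=score)       # slow check: score every option

def decide(options, score):
    guess = intuition(options)
    best = verify(options, score)
    return best if score(best) > score(guess) else guess

moves = [3, 7, 2, 9]
print(decide(moves, score=lambda m: m))  # 9: the slow check overrode the guess
```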
 

scorpiomover

The little professor
Is this true?

Yikes.

Probably not.

Most people don't understand how computers work. I've done some home automation. Computers are like super-fast, super-precise, incredibly literal autistics.

Say you program a robot to kill terrorists. Program it correctly, and it will kill them all in 1 day.

Program it wrong and conservative, so that if it's unsure it won't shoot, and it probably won't kill anyone.

Program it wrong and optimistic, so that if it's unsure it will shoot, and it will probably kill 2 million people in a day.

So if they had killer robots, either (a) all of Hamas in Gaza are dead, or (b) it didn't kill anyone, or (c) by the time you read the article, everyone in Gaza was already dead.
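A toy sketch of that conservative-versus-optimistic difference (my own illustration): the same detector with the same confidence threshold, and only the "act when unsure" policy flipped, behaves completely differently.

```python
# Toy sketch of the conservative-vs-optimistic difference (my own
# illustration): only the "act when unsure" policy is flipped.

def decide(confidence, act_when_unsure, threshold=0.9):
    if confidence >= threshold:
        return "act"                      # sure: both policies act
    return "act" if act_when_unsure else "hold"

readings = [0.95, 0.5, 0.2, 0.85]
print([decide(c, act_when_unsure=False) for c in readings])
# conservative: ['act', 'hold', 'hold', 'hold']
print([decide(c, act_when_unsure=True) for c in readings])
# optimistic:   ['act', 'act', 'act', 'act']
```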
 

scorpiomover

The little professor
No, I did not read anything that says the dog-like robots shoot people without human command. The only autonomous thing for some of them is navigation.

Also note that they are mostly used in tunnels.
That makes sense. I've heard that there are thousands of miles of tunnels, like in Vietnam. Far too much for humans to check personally, unless you sent in millions of soldiers, and even then it would still probably take months.

Robot dogs would vastly speed up the process: if you add cameras fitted with motion sensors, they would find moving objects such as terrorists and hostages in the tunnels much quicker. Those images could then be streamed back to soldiers above ground in real time, for them to look at and send in troops to deal with the situation.

However, most of those images would probably be rats and moles moving about in the tunnels.
 

scorpiomover

The little professor
Setting up the system, as far as the maths goes, is not the problem.

The problem is what problems a person is trying to solve with the system.
Maths deals with producing precise and complete solutions to precise problems. So if you haven't precisely defined the problem you are trying to solve, you haven't got the maths to set up in the first place.

I have ideas of what to do with the data I have, but it becomes harder to get answers when I do not know what I am looking for.
That depends on what you can do with the data. So if you have the former, you should have the latter.

His insight was that a system needs to save energy.

So, to save energy, it anticipates when it will need to use energy and when it will not.
That is a highly complex problem. I've been dealing with some home automation systems, and working out how to save energy is a very complex task.

I estimate that Friston is no more than IQ 145, so anyone who wrote the article on the wiki is about that high at most. Meaning that, no, you do not need to be Chris Langan to understand it.
That's the level of Einstein or Feynman. When Feynman was a child, he used to go on walks with his family. As they walked, he would play chess with his brother, IN THEIR HEADS.

Chris Langan is IQ 175 at most.
I read that he's got an IQ of over 200.

It is doing attentional networking.

That is the explainability of it.
That is an extremely complex aspect of the human mind, as attention is programmed by your feelings. Do you understand your feelings? Even the subconscious ones? Do you understand how to control your feelings?

If we know where the a.i. system is looking, we can know what it is doing intentionally.
Quite tricky. I can see what your eyes are looking at. Still won't have much of a clue about what is going on in your head.
 

Black Rose

An unbreakable bond
That is an extremely complex aspect of the human mind, as attention is programmed by your feelings. Do you understand your feelings? Even the subconscious ones? Do you understand how to control your feelings?

Quite tricky. I can see what your eyes are looking at. Still won't have much of a clue about what is going on in your head.

They have it so all aspects of the system can be examined.

That would mean not only the eyes, but the complete history of what the system did, is doing, and would do in the future under different circumstances, in all areas of the system hierarchy. Practically, it is a system without a subconscious: a fully open (explainable) system. You can look at anything happening by following the direction of where processes are going in the system. That requires the system to have no parts with too many dependencies, the way black boxes do. The hierarchical attention disallows the hiddenness of directed attention in the network.

That is how I come to understand it.

To test where its attention is going you could do something like this:

[Image: XJK7pJJ.gif]
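If the animation does not load, here is a crude version of the same test in code (my own sketch): nudge each input slightly and see which one the output actually responds to; the most sensitive input is where the system's attention effectively is.

```python
# Crude attention probe (my own sketch): nudge each input and measure how
# much the output moves; the most sensitive input is where the "attention"
# effectively is.

def system(inputs):
    # stand-in black box: secretly attends mostly to input "b"
    return 0.1 * inputs["a"] + 5.0 * inputs["b"] + 0.01 * inputs["c"]

def probe_attention(fn, inputs, eps=1e-3):
    base = fn(inputs)
    return {key: abs(fn(dict(inputs, **{key: value + eps})) - base) / eps
            for key, value in inputs.items()}

print(probe_attention(system, {"a": 1.0, "b": 1.0, "c": 1.0}))
# roughly {'a': 0.1, 'b': 5.0, 'c': 0.01} -> attention is on "b"
```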
 