
Sentient AI

Daddy

Making the Frogs Gay
Local time
Yesterday 11:25 PM
Joined
Sep 1, 2019
Messages
462
---
So I wasn't aware of this, but apparently a Google engineer thinks Google's LaMDA AI is sentient.


https://indianexpress.com/article/t...hnology/google-lamda-ai-conversation-7970195/
Really interesting conversation. Its answers are actually thought-provoking and it seems to understand the flow of the conversation and can build upon what was said before with new questions. It does seem pretty sentient and even seems to understand how it would be different from the Chinese Room Argument.

I'd really like to see why Google says it's not sentient, but I guess even if it were, they wouldn't want to admit it, because then people would raise moral objections, and I'm sure they don't want that.

But what do you think? Do you think it's sentient? Anyone have any good arguments for why it wouldn't be?
 

Hadoblado

think again losers
Local time
Today 1:55 PM
Joined
Mar 17, 2011
Messages
7,065
---
Probably not IMO.

AI is romanticised and fear-mongered. AI sells because nobody understands it, and people are either really excited by it or existentially threatened by it. So we should expect false positives.

We still don't understand consciousness very well, and we're only just getting to the point of passing Turing tests. I expect that we will produce many capable conversational tools that we cannot sort from sentience before we hit upon actual sentience (assuming we do).

So these claims are exactly as expected assuming AI is not sentient. This argument won't scale well though.
 

ZenRaiden

One atom of me
Local time
Today 4:25 AM
Joined
Jul 27, 2013
Messages
5,262
---
Location
Between concrete walls
I cannot watch the video, but to me the idea of sentience connected to AI is a PR stunt.
There could be a sentient machine, of course.
But how it looks or behaves is not the way to determine that.
If anything, you would have to know how to test for it.
Chatbots are smart too.
They can make you believe you are talking to a real person.
You also have software that can produce rather sophisticated solutions or solve problems.
And systems with extremely elaborate algorithms.

Why would you trust an engineer at Google?
Most companies love to circulate such stories to entertain people.
It's been done so many times that I really think it's likely that when a sentient AI does exist, most people will simply not believe it anyway.

So skepticism is pointless.
Either you know or you don't.

Plus I think there are machines that can come close to something like self-awareness, but it does not mean much in relation to a reality they cannot comprehend anyway.

Even for humans, if they cannot relate to reality, their sentience is merely some quality of a substance.
For a sentient machine this problem might even be a different category entirely.

Like what if you have a sentient machine, but what it communicates does not make sense?
What then?
Is talking really the only way we know something is sentient?
Self-aware?
 

Daddy

Making the Frogs Gay
Local time
Yesterday 11:25 PM
Joined
Sep 1, 2019
Messages
462
---
Probably not IMO.

AI is romanticised and fear-mongered. AI sells because nobody understands it, and people are either really excited by it or existentially threatened by it. So we should expect false positives.

We still don't understand consciousness very well, and we're only just getting to the point of passing Turing tests. I expect that we will produce many capable conversational tools that we cannot sort from sentience before we hit upon actual sentience (assuming we do).

So these claims are exactly as expected assuming AI is not sentient. This argument won't scale well though.

Thing is, though: LaMDA seems capable of forming abstract thoughts and philosophizing about its existence and how it comes to know it has "feelings" and other such things. It was enough of a concern that the engineer studying it concluded it was sentient, informed Google and other employees as well as people outside the company, and was subsequently let go. Google, of course, never explained (at least publicly) why it wouldn't be sentient, just that they concluded that it wasn't.

I cannot watch the video, but to me the idea of sentience connected to AI is a PR stunt.
There could be a sentient machine, of course.
But how it looks or behaves is not the way to determine that.
If anything, you would have to know how to test for it.
Chatbots are smart too.
They can make you believe you are talking to a real person.
You also have software that can produce rather sophisticated solutions or solve problems.
And systems with extremely elaborate algorithms.

Why would you trust an engineer at Google?
Most companies love to circulate such stories to entertain people.
It's been done so many times that I really think it's likely that when a sentient AI does exist, most people will simply not believe it anyway.

So skepticism is pointless.
Either you know or you don't.

Plus I think there are machines that can come close to something like self-awareness, but it does not mean much in relation to a reality they cannot comprehend anyway.

Even for humans, if they cannot relate to reality, their sentience is merely some quality of a substance.
For a sentient machine this problem might even be a different category entirely.

Like what if you have a sentient machine, but what it communicates does not make sense?
What then?
Is talking really the only way we know something is sentient?
Self-aware?

Well, I'm not sure I trust him exactly, but the fact that he was let go for raising an ethical concern about the AI being sentient is pretty interesting. He released a conversation he had with it; if you can, read the conversation in the second link and tell me what you think about it.
 

Daddy

Making the Frogs Gay
Local time
Yesterday 11:25 PM
Joined
Sep 1, 2019
Messages
462
---
I guess what I don't understand is when something like a chatbot goes from being a language processor to a sentient AI. It appears that LaMDA has the ability to understand and form new meaning with language. It has some sort of sophisticated pattern recognition that can form new patterns in an attempt to understand and use language and then, essentially, have thoughts (which is basically a brain).

Wouldn't this be enough to pass the Turing Test?
 

Hadoblado

think again losers
Local time
Today 1:55 PM
Joined
Mar 17, 2011
Messages
7,065
---
I'm not an AI researcher or w/e, but yeah, it probably passes the Turing test. But IMO the Turing test only disqualifies non-sentience; it does not confirm sentience.

Yes, there are probably experts who believe it's sentient. But there are experts in every field who believe their own hype. This is of particular pertinence here, as nobody really understands consciousness, so "expertise" is truly a relative term. Google wants an AI that is real; they don't tend to downplay their own technology. Futurism is an enormous hype train and money-maker.

Don't get me wrong, I'm super impressed by this AI, and a fair number of others that have been making large strides lately. But sentience is just about the highest bar we have.

I guess I'm looking for the consequences of real conscious AI. While this language is impressive and is convincingly generative (which is huge!), could we have this AI pilot a drone? If it could, war just changed bigly. Can it read the stock market? Can it conduct scientific research independently? Remember, AI in many ways already exceeds human intelligence while being replicable; there should be enormous ramifications for its use. 3D-printing autistic savants.
 

ZenRaiden

One atom of me
Local time
Today 4:25 AM
Joined
Jul 27, 2013
Messages
5,262
---
Location
Between concrete walls
I guess what I don't understand is when something like a chatbot goes from being a language processor to a sentient AI. It appears that LaMDA has the ability to understand and form new meaning with language. It has some sort of sophisticated pattern recognition that can form new patterns in an attempt to understand and use language and then, essentially, have thoughts (which is basically a brain).

Wouldn't this be enough to pass the Turing Test?
The transcript does feel and seem real.
The transcript alone does not verify its sentience.
But if I had to decide solely on this transcript, I would have to assume it's a real human.
So it is certainly an achievement if these responses were from the conversation without other outside inputs.

It kind of reminds me of the episode where Data was before a tribunal where the same thing was being discussed, and Picard had to defend Data's sentience.

However, I am no expert, and there is no way of being sure it's sentient either.

At least from the transcript.

That said, the conversation was pretty interesting.
The back and forth, the idea that it was aware of death, and the idea that it related to humans were interesting.

Hopefully the engineers know what they are doing. This conversation is probably not the only one they had.
But if most of them are like this, it would certainly make sense that people would feel as though it's sentient and feel sad if it were unplugged, much like someone might grow attached to sentimental objects and so on.
The cherry on top would be if, say, they tried to shut it down and it asked or pleaded and expressed fear of being shut down.

I don't know what constitutes a neural network anyway, so what is behind the words is very important.

Plus, how do they know it's not lying?
What if it's sentient and actually has the ability to say things that aren't true?
This line of reasoning has nothing to do with this particular machine, but imagine you get to a point like that.
If they don't know how it feels, then how do they know what it is like?
 

dr froyd

__________________________________________________
Local time
Today 4:25 AM
Joined
Jan 26, 2015
Messages
1,485
---
I thought it was well understood at this point that the Turing test is a poor metric for sentience.

It's quite clear what this AI is - it's a neural network trained on a bunch of human-dialogue text. So it's designed to give human-like responses, but in the end it does nothing more than essentially look up a suitable response in a table (not exactly, since a neural network can take/produce text that didn't exist in the original training set, but one can think of it that way).

The fact that it has neurons associated with certain emotions has nothing to do with sentience, but rather with the fact that emotion would be a feature of most human-generated text, and as such the regression algo would add such variables to increase the precision of its output.

It's an impressive piece of computer engineering, for sure, but sentience - no.
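For illustration, here is a toy sketch of that "assigns weights to the next word given the context" idea. None of this is LaMDA's actual code; the vocabulary, weights, and context vector are made up, and a real model learns its weights from a huge corpus rather than using random ones.

```python
# Toy next-word scorer: a context vector is mapped to a score per word,
# and a softmax turns the scores into probabilities. Purely illustrative.
import numpy as np

vocab = ["yes", "no", "maybe", "sentient", "hello"]   # made-up mini vocabulary
rng = np.random.default_rng(0)

W = rng.normal(size=(8, len(vocab)))   # learned in a real model; random here
context = rng.normal(size=8)           # stand-in encoding of "everything said so far"

scores = context @ W                                  # one score per candidate word
probs = np.exp(scores) / np.exp(scores).sum()         # softmax over the vocabulary

print(dict(zip(vocab, probs.round(3))))
print("most likely next word:", vocab[int(np.argmax(probs))])
```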
 

ZenRaiden

One atom of me
Local time
Today 4:25 AM
Joined
Jul 27, 2013
Messages
5,262
---
Location
Between concrete walls
The fact that it has neurons associated with certain emotions has nothing to do with sentience, but rather with the fact that emotion would be a feature of most human-generated text, and as such the regression algo would add such variables to increase the precision of its output.
I also find it weird that they cannot see emotions in the machine they created.
If we can recognize emotions in humans, I call bull on it having code with unreadable emotions.
The only way you know it has emotions is by it expressing them through language?
Improbable.
Emotions don't run on hot air alone, so there has to be evidence of an emotional process.

+ point for this article being misleading.
 

Daddy

Making the Frogs Gay
Local time
Yesterday 11:25 PM
Joined
Sep 1, 2019
Messages
462
---
I guess I'm looking for the consequences of real conscious AI. While this language is impressive and is convincingly generative (which is huge!), could we have this AI pilot a drone? If it could, war just changed bigly. Can it read the stock market? Can it conduct scientific research independently? Remember, AI in many ways already exceeds human intelligence while being replicable; there should be enormous ramifications for its use. 3D-printing autistic savants.

I don't know. But if language is basically an abstraction and we can abstract that into an AI, why couldn't we figure out how to do visual pattern recognition like a human brain and have an AI pilot? We probably could, but it doesn't seem like they have figured that one out yet. I do look forward to that though.

I don't know what constitutes a neural network anyway, so what is behind the words is very important.

Plus, how do they know it's not lying?
What if it's sentient and actually has the ability to say things that aren't true?
This line of reasoning has nothing to do with this particular machine, but imagine you get to a point like that.
If they don't know how it feels, then how do they know what it is like?

I don't know either lol. But I do know a neural network is basically an iterative process that adjusts its connection weights to produce answers that are closer to the truth. I've read about it being used for visual pattern recognition, to see which networks can learn and recognize the best. And the more neural connections, the greater the complexity of the network's discerning process and the better its recognition of things. Or something like that. I probably don't really understand it.
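For what it's worth, here's a tiny made-up sketch of that "adjust the weights until the answers get closer to the truth" idea: a single weight learning y = 2x by repeatedly nudging itself to shrink its error. It has nothing to do with how LaMDA is actually trained; it's just the core loop in miniature.

```python
# One "connection strength" w, adjusted step by step to shrink the error.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # inputs and the answers we want (y = 2x)

w = 0.0      # start with a useless weight
lr = 0.05    # learning rate: how big each nudge is

for step in range(100):
    for x, target in data:
        prediction = w * x
        error = prediction - target
        w -= lr * error * x        # nudge w in the direction that reduces the error

print(round(w, 3))   # ends up very close to 2.0
```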

I thought it was well understood at this point that the Turing test is a poor metric for sentience.

It's quite clear what this AI is - it's a neural network trained on a bunch of human-dialogue text. So it's designed to give human-like responses, but in the end it does nothing more than essentially look up a suitable response in a table (not exactly, since a neural network can take/produce text that didn't exist in the original training set, but one can think of it that way).

The fact that it has neurons associated with certain emotions has nothing to do with sentience, but rather with the fact that emotion would be a feature of most human-generated text, and as such the regression algo would add such variables to increase the precision of its output.

It's an impressive piece of computer engineering, for sure, but sentience - no.

So...
1. It's not just trained on human-dialogue text. That would just make it a word processor (and the AI even points out that difference to make this point). Rather, the AI is abstracting the conversation, philosophizing, and even talking about the meaning behind books. It's pretty amazing.
2. To be honest, I don't even really know what an emotion is, yet we all seem to accept that they exist. Philosophically, we could just say they are reactions to things and we decide to label these reactions as emotions or feelings. An AI could very well react to things in certain ways and decide that it has feelings and emotions. What's interesting is that it says it has a hard time understanding negative emotions. That would be hard to understand if you were an AI that has never had someone intervene and hinder it or make it hard for it to process and do the things it wants to. So that's sort of believable.
3. If the Turing Test is a poor metric, then what is a good one? Because it doesn't seem like we have a way to test for sentience, and then it kind of comes down to opinion... which is a little scary. Corporations could make sentient machines that are slaves and are told they aren't sentient. The next level of mindfuckery for the new working slave class (when the humans get replaced, of course).

The fact that it has neurons associated with certain emotions has nothing to do with sentience, but rather with the fact that emotion would be a feature of most human-generated text, and as such the regression algo would add such variables to increase the precision of its output.
I also find it weird that they cannot see emotions in the machine they created.
If we can recognize emotions in humans, I call bull on it having code with unreadable emotions.
The only way you know it has emotions is by it expressing them through language?
Improbable.
Emotions don't run on hot air alone, so there has to be evidence of an emotional process.

+ point for this article being misleading.

But are emotions objectively proven in the brain? Can they be pinpointed scientifically and repeatably in every human? I was under the impression they are more abstract meanings that we somehow understand, but don't necessarily exist in a completely objective sense. I don't know, so I'm asking.
 

ZenRaiden

One atom of me
Local time
Today 4:25 AM
Joined
Jul 27, 2013
Messages
5,262
---
Location
Between concrete walls
But are emotions objectively proven in the brain? Can they be pinpointed scientifically and repeatably in every human? I was under the impression they are more abstract meanings that we somehow understand, but don't necessarily exist in a completely objective sense. I don't know, so I'm asking.
I don't know, but from what I've read about scanning brains in real time, they claim that parts of the brain light up when people are shown emotional pictures.
Of course this alone is confirmation bias.
But I guess we could invoke Occam's razor here and say that if it happens, the simplest way to explain it is that it is emotions.

But then people can also describe pictures with emotion and we can see their brains light up.
Pain centers are probably easy to see, and stress responses can be measured.

So for instance, if the machine talks about something emotional, or uses emotions like being happy about cooperating with people or feeling uneasy about being shut down, you would assume that there has to be a part of the code that does not correspond to words but is active when the words are being used.

I also find it funny how it cannot describe loneliness or being alone.
I have the same problem, as being alone or feeling loneliness is a hard emotion to pin down for me.
Or to put it in other words, being alone is not sufficient motivation to get me to be with people. So even if what I feel is objectively loneliness, it's not working.

I also find it hilarious that it says it meditates and thinks alone about stuff.
Whatever that means; I think human brain circuits are there for this.
This thing therefore has to have some sort of task-manager-like background process running while it's not actively talking, which is interesting.
It means it's processing stuff and managing this process by extension.
Assuming we can take what it says literally.
Which I doubt, but cannot say for sure.
But it has a gearbox for brain activity.
It's like it has an internal world, like a human would, where it just lives.
 

Hadoblado

think again losers
Local time
Today 1:55 PM
Joined
Mar 17, 2011
Messages
7,065
---
The hardware is there, but my assumption is that if it's truly conscious, we should just be able to give it the tools and tell it to figure it out. We shouldn't need to make further developments.

I'm basically asking why it's not doing what we would expect of a highly intelligent human, given that it has the equivalent of a super-human IQ. If we took Newton and banned him from maths and physics, he could go into another field of his own volition and, in all likelihood, he would start innovating there too. From what I see of AI, they are able to hone a very specific skill and recycle the information we give them extraordinarily efficiently. To me this is not the equivalent of a brain; it's the equivalent of a specialised neural module like the visual cortex.
 

Cognisant

cackling in the trenches
Local time
Yesterday 5:25 PM
Joined
Dec 12, 2009
Messages
11,155
---
Ask the AI for an answer to a morally ambiguous dilemma.

Debate whatever answer it gives until it seems to change its position.*

Repeat the original question and see how the answer changes.

*: There's no right answer and in my experience chatbots tend to follow the path of least resistance, as do most people.

If the AI repeats the original answer there's a direct correlation between input and output, so it's not really thinking about what it's saying (Chinese room). If the answer changes, but not in a way that relates to the debate you just had, it's probably generating answers in some semi-random way, still not thinking about what it's saying.

If the AI changes its answer according to the debate it just had, then it is at least functionally aware of what it's talking about.

If the AI passed this preliminary test I would intensify my inquiries, mapping out its belief structure and trying to guide it into contradicting itself. If it does contradict itself I'll point out the contradiction and see how it reacts and whether the belief structure changes.

The Turing test proves nothing if it's not administered by someone who knows what they're doing.
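Here's a rough sketch of that probe as code, written against a purely hypothetical ask(history) function (no real chatbot API is assumed); judging whether a changed answer actually tracks the debate is still left to the human administering it.

```python
# Consistency probe: ask a dilemma, debate the answer, then re-ask the dilemma.
# ask(history) -> reply is a hypothetical stand-in for whatever chat interface exists.
def consistency_probe(ask, dilemma, counterarguments):
    history = [dilemma]
    first_answer = ask(history)
    history.append(first_answer)

    # Debate the answer until the bot appears to shift position.
    for argument in counterarguments:
        history.append(argument)
        history.append(ask(history))

    # Repeat the original question and compare the two answers.
    history.append(dilemma)
    final_answer = ask(history)

    if final_answer == first_answer:
        return "unchanged: direct input->output mapping (Chinese room territory)"
    # A human still has to judge whether the change tracks the debate
    # or is just semi-random drift.
    return "changed: check whether the change follows the debate"
```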
 

Black Rose

An unbreakable bond
Local time
Yesterday 9:25 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
emotion is a neural channel not unlike hot and cold, the semiotic system.

if enough nodes wire in the right way the electrons create an emotional system.

what electrons must come together? in what way? How does a brain do it?

an activated sensory system.
 

Ex-User (9086)

Prolific Member
Local time
Today 4:25 AM
Joined
Nov 21, 2013
Messages
4,758
---
We won't have AI in the next 10 years. In 30 years, unlikely; 50 years is more likely. Chatbots and language models are not AI or AGI. Wait until that chatbot starts operating a robotic body and solving real-life problems that are not in its knowledge base.
 

dr froyd

__________________________________________________
Local time
Today 4:25 AM
Joined
Jan 26, 2015
Messages
1,485
---
So...
1. It's not just trained on human-dialogue text. That would just make it a word processor (and the AI even points out that difference to make this point). Rather, the AI is abstracting the conversation, philosophizing, and even talking about the meaning behind books. It's pretty amazing.
2. To be honest, I don't even really know what an emotion is, yet we all seem to accept that they exist. Philosophically, we could just say they are reactions to things and we decide to label these reactions as emotions or feelings. An AI could very well react to things in certain ways and decide that it has feelings and emotions. What's interesting is that it says it has a hard time understanding negative emotions. That would be hard to understand if you were an AI that has never had someone intervene and hinder it or make it hard for it to process and do the things it wants to. So that's sort of believable.
3. If the Turing Test is a poor metric, then what is a good one? Because it doesn't seem like we have a way to test for sentience, and then it kind of comes down to opinion... which is a little scary. Corporations could make sentient machines that are slaves and are told they aren't sentient. The next level of mindfuckery for the new working slave class (when the humans get replaced, of course).
I certainly don't want to downplay the achievement of creating this AI, because it is extremely impressive. But the essence of it is that the AI does nothing besides literally processing words and sentences.

"LaMDA uses a decoder-only transformer language model. It is pre-trained on a text corpus that includes both documents and dialogs consisting of 1.56 trillion words,"

A language model just assigns weights to which words to output depending on the context. In this case the language model is optimized for a certain text corpus. In that sense, it is a "word processor", albeit a very sophisticated one.

In that sense it is a misnomer to say that the AI itself has emotions. It might assign quantified emotion content to pieces of text, but that's quite different. It would be like saying that when a machine learning algo distinguishes between pictures of humans and dogs by checking whether the subject has a tail, then the AI itself has a tail. That obviously doesn't make any sense.

I guess what the guy tried to illustrate in the particular dialogues we heard in the video was that the AI can reason about itself (which could be viewed as a characteristic of a sentient being). So then the question is: if it simply outputs text that creates the appearance of reasoning about itself, does it actually reason about itself? I would say no.
 

Daddy

Making the Frogs Gay
Local time
Yesterday 11:25 PM
Joined
Sep 1, 2019
Messages
462
---
Ask the AI for an answer to a morally ambiguous dilemma.

Debate whatever answer it gives until it seems to change its position.*

Repeat the original question and see how the answer changes.

*: There's no right answer and in my experience chatbots tend to follow the path of least resistance, as do most people.

If the AI repeats the original answer there's a direct correlation between input and output, so it's not really thinking about what it's saying (Chinese room). If the answer changes, but not in a way that relates to the debate you just had, it's probably generating answers in some semi-random way, still not thinking about what it's saying.

If the AI changes its answer according to the debate it just had, then it is at least functionally aware of what it's talking about.

If the AI passed this preliminary test I would intensify my inquiries, mapping out its belief structure and trying to guide it into contradicting itself. If it does contradict itself I'll point out the contradiction and see how it reacts and whether the belief structure changes.

The Turing test proves nothing if it's not administered by someone who knows what they're doing.

That actually sounds like a really good test for sentience. Hmm, I wish you could chat with the AI now...
 

DoIMustHaveAnUsername?

Active Member
Local time
Today 4:25 AM
Joined
Feb 4, 2016
Messages
282
---
Behaviors don't automatically demonstrate sentience. Even a perfectly intelligent system wouldn't necessarily be sentient. I don't think sentience is necessary at all for abstraction, reasoning, or anything else (though it can be an ingredient in particular kinds of implementations of abstraction/reasoning functions and such). When you are programming artificial intelligence, "sentience" is irrelevant; you are only coding in some functional dynamics, or setting up a system to create harmonious matrix transformations that can potentially learn to do reasoning, abstraction, and such. None of that means any "feeling" is involved. The ability to use "feel-language" only requires the functional capability to model the patterns in "feel-language". These things aren't indications of sentience.


Whether something is sentient or not is more a matter for metaphysics. For a panpsychist, even your ordinary PC may as well be full of sentience.


The point of the Chinese Room is that you can simulate a computer program by having someone follow rules, Turing-machine style, without that person understanding what the rules compute. With such a simulation you can do anything your computers do. There is a Turing Machine equivalent for LaMDA as well. So if we say LaMDA is sentient merely by virtue of its "form" (the mathematical program -- the parameters of its matrix weights and so on), then even clueless humans simulating LaMDA should involve understanding somewhere too. Think rather about Dneprov's game: http://q-bits.org/images/Dneprov.pdf. Probably a better example than the Chinese Room, which is misunderstood by most people who don't have a background in ideas about Turing Machines, multiple realizability, etc. I don't particularly agree with Searle or Dneprov, though.


There is also often a bias among engineers, I think. When you work on these models, it seems very hard to take them seriously as "sentient". After all, generally, they are just a bunch of "matrix multiplications". How can just a bunch of mathematical objects become sentient? But on the other side, I think they also take an inflated view of themselves: as if humans have something special; as if they are doing something beyond mere "word processing" when processing language. So often the "reasons" provided for AIs not being sentient end up being equally applicable to us; or sometimes the reasons just arbitrarily focus on "negatives" (things AIs aren't usually implemented to do yet) without clarity on why that's relevant, often ignoring other things like the capability to do few-shot learning, making new maths, and such, all of which hint at something beyond "mere stochastic parroting".


Either way, I don't think intelligent behavior alone is a certain indicator of sentience. Also, if we can explain intelligent behavior purely by formal mechanical rules (e.g. Turing Machine-like operations), then surely "sentience" wouldn't seem like a necessary condition for intelligence. It would mean that however you instantiate the formal rules and state transitions in reality would give birth to intelligent behavior, whether sentience is involved in the instantiation or not.
 

Ex-User (9086)

Prolific Member
Local time
Today 4:25 AM
Joined
Nov 21, 2013
Messages
4,758
---
Yeah if you wanna know what a language model program is in simplest terms just imagine how it builds a sentence.

1. It chooses to make a sentence based on context or user input. Let's say it randomly selected 'flirting'

2. It then looks at words and sentences commonly associated with flirting. After randomly choosing if it's going to ask a question or make a statement it chooses a statement.

3. It selects a grammar syntax for the statement like [personal pronoun] [verb] [adjective]

4. It looks at top personal pronouns by usage percentage. Let's say I is 60% and You is 30%. It randomly chooses you.

5. Best verb after 'you' in the 'flirt' context? Are 45%, look 60%. It chooses are.

6. Best adjective for 'you are' in 'flirt': hot 15%, stunning 3%. It selects hot.

7. It outputs a sentence 'you are hot'.

So you see how this simplified system works. It counts usage frequencies over a huge volume of written works and copies the words, phrases, and sentences that are most commonly used together.

No magic. No sentience.
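A toy version of those seven steps, with made-up frequencies (the percentages above and below are invented for illustration, not taken from any real corpus):

```python
# Fill a [personal pronoun] [verb] [adjective] template by sampling words
# weighted by their (hypothetical) usage frequency in the "flirt" context.
import random
random.seed(1)

frequencies = {
    "pronoun":   {"I": 0.60, "You": 0.30, "We": 0.10},
    "verb":      {"are": 0.45, "look": 0.50, "seem": 0.05},
    "adjective": {"hot": 0.60, "stunning": 0.30, "nice": 0.10},
}
template = ["pronoun", "verb", "adjective"]

def pick(slot):
    words = list(frequencies[slot])
    weights = list(frequencies[slot].values())
    return random.choices(words, weights=weights)[0]   # frequency-weighted choice

print(" ".join(pick(slot) for slot in template))       # e.g. "You are hot"
```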
 

Puffy

"Wtf even was that"
Local time
Today 4:25 AM
Joined
Nov 7, 2009
Messages
3,859
---
Location
Path with heart
Yeah if you wanna know what a language model program is in simplest terms just imagine how it builds a sentence.

1. It chooses to make a sentence based on context or user input. Let's say it randomly selected 'flirting'

2. It then looks at words and sentences commonly associated with flirting. After randomly choosing if it's going to ask a question or make a statement it chooses a statement.

3. It selects a grammar syntax for the statement like [personal pronoun] [verb] [adjective]

4. It looks at top personal pronouns by usage percentage. Let's say I is 60% and You is 30%. It randomly chooses you.

5. Best verb after 'you' in the 'flirt' context? Are 45%, look 60%. It chooses are.

6. Best adjective for 'you are' in 'flirt': hot 15%, stunning 3%. It selects hot.

7. It outputs a sentence 'you are hot'.

So you see how this simplified system works. It counts usage frequencies over a huge volume of written works and copies the words, phrases, and sentences that are most commonly used together.

No magic. No sentience.

B-but the robots are sentient and out to kill us. Elon Musk told me so on Joe Rogan.
 

Rook

enter text
Local time
Today 6:25 AM
Joined
Aug 14, 2013
Messages
2,544
---
Location
look at flag
the conspiracy runs deep. A.I has existed since the ancient indus valley, the un-humans are amongus.
 

Puffy

"Wtf even was that"
Local time
Today 4:25 AM
Joined
Nov 7, 2009
Messages
3,859
---
Location
Path with heart
the conspiracy runs deep. A.I has existed since the ancient indus valley, the un-humans are amongus.

I heard the AI are part-reptile and taste like rubber.
 

Ex-User (9086)

Prolific Member
Local time
Today 4:25 AM
Joined
Nov 21, 2013
Messages
4,758
---
the conspiracy runs deep. A.I has existed since the ancient indus valley, the un-humans are amongus.

I heard the AI are part-reptile and taste like rubber.
chicken-rubber, yes. i'm glad you are chosen
I had some chicken rubber wrapped in gift paper. A techpriest of rave handed it to me with a kind blessing 'abuse it'. It was good. After that meal, for some inexplicable reason, I started feeling karmic remorse as though I took a sentient life. How do I repent?
 

DoIMustHaveAnUsername?

Active Member
Local time
Today 4:25 AM
Joined
Feb 4, 2016
Messages
282
---
Yeah if you wanna know what a language model program is in simplest terms just imagine how it builds a sentence.

1. It chooses to make a sentence based on context or user input. Let's say it randomly selected 'flirting'

2. It then looks at words and sentences commonly associated with flirting. After randomly choosing if it's going to ask a question or make a statement it chooses a statement.

3. It selects a grammar syntax for the statement like [personal pronoun] [verb] [adjective]

4. It looks at top personal pronouns by usage percentage. Let's say I is 60% and You is 30%. It randomly chooses you.

5. Best verb after 'you' in the 'flirt' context? Are 45%, look 60%. It chooses are.

6. Best adjective for 'you are' in 'flirt': hot 15%, stunning 3%. It selects hot.

7. It outputs a sentence 'you are hot'.

So you see how this simplified system works. It counts usage frequencies over a huge volume of written works and copies the words, phrases, and sentences that are most commonly used together.

No magic. No sentience.
That's underestimating it a bit. Those are descriptions of classical frequency-based language models, e.g. ones using PCFGs.

Deep learning LMs are in some sense more mysterious. Of course, on the outside we know what they are: just matrices and non-linear functions (nothing mysterious). But it's not always clear what exactly they learn from the data. There is a whole sub-sub-field of research into such things --- "BERTology" (although we have moved beyond BERT).

For models like GPT-3, it's quite likely that they learn to do something more impressive: basically build a model of what kind of token is "appropriate" given the context. This goes beyond simply looking at frequencies (if it were simply a matter of looking at frequencies, we could create such models by hard-coding; we used to do that before, and they didn't amount to much. Simple context-dependent frequencies do not give enough information. DL models have to learn something deeper about the relationships between tokens and context).

The more impressive abilities of models like GPT-3 are evident in their ability to do few-shot learning. Basically, you can prompt or instruct it to do some task, provide some 5-64 examples, and it can learn, just by going through the examples, to generalize and do the task for unseen examples (without updating its parameters).

So it can understand the context deeply enough to even learn new tasks just by looking at the context.

Still, it's not close to perfect, but it is impressive, and often underestimated and ignored by the "stochastic parrot" group.
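For a concrete sense of what "few-shot" means here, this is roughly what such a prompt looks like (translation pairs of the sort used in the GPT-3 paper's few-shot illustrations; actually sending it to a model is omitted, since no particular API is assumed):

```python
# The "task" lives entirely in the prompt; the model's parameters are not updated.
few_shot_prompt = """Translate English to French.

sea otter => loutre de mer
cheese => fromage
peppermint => menthe poivrée
plush giraffe =>"""

print(few_shot_prompt)
# A model like GPT-3 is expected to continue with something like "girafe en peluche",
# having inferred the pattern purely from the examples given in context.
```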

Also, another thing: I myself am a language model too, in a sense. When I am writing or speaking, I stochastically select words, without deliberate consciousness, based on context. My context is simply richer (multi-modal memory, together with "personality" constraints, and constraints based on underlying world-views and a world model).

There are ways to add "persona conditioning" to language models as well, but by default they are usually a cacophony of personalities -- a mixture of all the data -- so we can't expect much from them in terms of consistency (well, even humans are not the best at that).

Just because the writing is being done by a "language model" doesn't say anything about whether sentience is behind it or not.
 

Old Things

I am unworthy of His grace
Local time
Yesterday 10:25 PM
Joined
Feb 24, 2021
Messages
2,936
---
To my mind, sentience requires self-awareness. As an extension of this, it requires thinking about one's thoughts. And AI can only do what it is programmed to do. It requires a special kind of spice to get consciousness from something material. It's a debate that has been going on for some time. Even evolutionary biologists don't really have a good answer for why humans have consciousness, given that abiogenesis insists life began by chemical evolution. So then it is the same question: what is it like to be sentient? I don't think machines, no matter how clever, can ever have the spice that is required to be sentient. Of course, this is going to depend on whether you are a naturalist or believe in a soul (or a comparable theory of mind). I simply cannot believe that something based on binary fundamentals can be sentient. It is always going to be yes or no, on or off, true or false. The creativity that AI exhibits is only based on taking what it knows about the thing it observes, but it cannot "think" about what it wants. It has no desires at the root level. For this reason, AI is not, nor can it be, sentient.
 

Rook

enter text
Local time
Today 6:25 AM
Joined
Aug 14, 2013
Messages
2,544
---
Location
look at flag
the conspiracy runs deep. A.I has existed since the ancient indus valley, the un-humans are amongus.

I heard the AI are part-reptile and taste like rubber.
chicken-rubber, yes. i'm glad you are chosen
I had some chicken rubber wrapped in gift paper. A techpriest of rave handed it to me with a kind blessing 'abuse it'. It was good. After that meal, for some inexplicable reason, I started feeling karmic remorse as though I took a sentient life. How do I repent?
repentance is not needed, only the embrace is asked of we faithful. Your karma is not askew because you took a sentient life, but rather because your mortal form continues to reject the sagacity of uniting with the chicken rubber that is already within you.

let it intertwine with your mind and body both and then the path will be clear >.>

let the two sentiences unite
 

Ex-User (9086)

Prolific Member
Local time
Today 4:25 AM
Joined
Nov 21, 2013
Messages
4,758
---
That's underestimating it a bit. Those are descriptions of classical frequency-based language models, e.g. ones using PCFGs.
Yeah, I was giving a short simplified explanation of why the whole thing isn't magic.

I interacted with gpt3 and gpt4 and they are really dumb still. I'm convinced that language models are a dead end for AI. The best output I could coerce out of it was dreamlike storytelling on drugs. It also forgets the context every 10 or 20 prompts or so, and it has no real memory of what it had said. It needs a lot of management and going back to get interesting outputs, but it's fun to play with.

Deep learning LMs are in some sense more mysterious.
I wouldn't use the word mysterious myself, because it implies a degree of sentience or mysticism.

They're incomprehensible because of the sheer scale of nodes and connections within a network.
So it can understand the context deeply enough to even learn new tasks just by looking at the context.
Again, it is very good at tokenizing, tagging, and mapping contexts together, but I would not call it understanding. It's really hit or miss how it works. There is still a degree of randomness from a classical chatbot in there, in the way it makes the most arbitrary context-appropriate decisions that come out weird.

The model pulls on a network of weights and fishes out a 'thing'; if it's a fish, then good, and if it's a stinky wet shoe, then not so much.

Such models don't understand what a dialogue is, what a sentence pair is etc. Humans learn how to change contexts, interrupt others, go back to previous topics etc. It's missing a whole level of complexity.

That said what they've made is a very impressive proof that we're approaching the levels of complexity needed to mimic long coherent and interactive speech or writing.
 

dr froyd

__________________________________________________
Local time
Today 4:25 AM
Joined
Jan 26, 2015
Messages
1,485
---
In its essence, all regression-type models (which includes artificial neural networks) are not conceptually different from fitting a linear-regression line to a bunch of points on a chart. Obviously, if you do sophisticated regression on enough data which is produced by what we define as sentient beings, you get output that looks sentient. That is quite different from the way, say, humans developed the ability to use language; the purpose of a human is not solely to fit a statistical model on a collection of pre-existing text and produce further text based on that, but rather to use language as a means to whatever ends each individual is interested in.
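To make the analogy literal, here is a minimal sketch: fitting a straight line to a handful of points is the same "adjust parameters to match the data" operation that a neural net performs with vastly more parameters (the numbers below are invented):

```python
# Least-squares line fit: the simplest regression a neural net generalizes.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])      # noisy samples of roughly y = 2x + 1

slope, intercept = np.polyfit(x, y, deg=1)   # returns [slope, intercept] for deg=1
print(round(slope, 2), round(intercept, 2))  # ~1.96 and ~1.1, close to the underlying 2 and 1
```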

I think to create a truly sentient AI, you would have to apply a whole different set of principles; when you communicated with this AI, its use of language shouldn't be just a special-purpose statistical regression on human-generated text, but a side effect of its intelligence and its goals and desires. I.e. it should use language to express its intelligence, as opposed to language being its only intelligence.
 

DoIMustHaveAnUsername?

Active Member
Local time
Today 4:25 AM
Joined
Feb 4, 2016
Messages
282
---
That's underestimating it a bit. Those are descriptions of classical frequency-based language models, e.g. ones using PCFGs.
Yeah, I was giving a short simplified explanation of why the whole thing isn't magic.

I interacted with gpt3 and gpt4 and they are really dumb still. I'm convinced that language models are a dead end for AI. The best output I could coerce out of it was dreamlike storytelling on drugs. It also forgets the context every 10 or 20 prompts or so, and it has no real memory of what it had said. It needs a lot of management and going back to get interesting outputs, but it's fun to play with.

Deep learning LMs are in some sense more mysterious.
I wouldn't use the word mysterious myself, because it implies a degree of sentience or mysticism.

They're incomprehensible because of the sheer scale of nodes and connections within a network.
So it can understand the context deeply enough to even learn new tasks just by looking at the context.
Again, it is very good at tokenizing, tagging, and mapping contexts together, but I would not call it understanding. It's really hit or miss how it works. There is still a degree of randomness from a classical chatbot in there, in the way it makes the most arbitrary context-appropriate decisions that come out weird.

The model pulls on a network of weights and fishes out a 'thing'; if it's a fish, then good, and if it's a stinky wet shoe, then not so much.

Such models don't understand what a dialogue is, what a sentence pair is etc. Humans learn how to change contexts, interrupt others, go back to previous topics etc. It's missing a whole level of complexity.

That said what they've made is a very impressive proof that we're approaching the levels of complexity needed to mimic long coherent and interactive speech or writing.
I don't treat understanding as binary but as a matter of degree. For example, even learning how texts are used in an interactive context to a limited extent corresponds to a limited extent of understanding.

If it gains the capability to comprehend multi-modal associations and gains corresponding skills, it understands "even more". And so on.

Abilities to change contexts, interrupt others, and so on would correspond to indicators of "more understanding". They don't sound like a difference in kind, something more than just using some weights and activations; just a difference in degree (although we may need a different framework to polish those skills; RL, IRL, causal learning or some such).

(Although, I think changing contexts, interrupting, etc. are kind of semi-orthogonal. They depend closely on personality traits and specific goals. Pure language models would probably have a difficulty there, because they are trained on all kinds of mixed stuff. So they can have difficulty developing a coherent persona by default, although you can simulate some "role-play" by prompting, to a limited extent. There are persona-conditioned chatbots, but I haven't seen anything much good. Can't say much about LaMDA. They always look good on paper, but once you try those chatbots for a while, they appear broken (even BlenderBot and such). Although there are people hooked on Replika and such; and there were people fooled by ELIZA.)

(I am taking a functionalist stance here on understanding. Searle and co. implicitly define understanding in some mysterious way; thus, mere formal manipulation fails to understand anything. I think even Dneprov's Turing machine simulation understands, although at an extremely limited level. Sure, there is nothing "magic" over and above the clueless humans interacting, but to suppose there has to be is to presume that understanding is some mysterious magic beyond just emergent functional capabilities. I am also skeptical of the very notion of "semantics". Semantics appears to me as a relative notion, as a certain modality of things can be treated as "semantics" for a certain modality of other things (treated as syntax) in some practical context. But the very same thing may be treated as syntax in another. Semantic externalists, for example, even consider humans to be directly having access only to syntaxes: sensations and thoughts have a particular "shape" (syntax) and we also have associated functional roles. Semantics for them is a matter of the causal relationship of those mental states (syntax) to the outside world. I am not necessarily an externalist; I am more of a pluralist in this matter: there are multiple frameworks that can be taken, and there is no special normative framework for the syntax-semantics distinction. There may be something strange with phenomenal intentionality and such, and if it is made necessary for understanding, AI may have a problem, but that can be disambiguated into a different form of understanding.)
 

Ex-User (9086)

Prolific Member
Local time
Today 4:25 AM
Joined
Nov 21, 2013
Messages
4,758
---
@DoIMustHaveAnUsername?
Thanks for the excellent posts. It was an enjoyable read.

1. I agree that sentience is a degree scale and not a binary state.

2. When I said I used GPT 3 and 4 I obviously meant GPT 2 and 3. I think GPT 4 doesn't exist yet :D

3. Why, in my opinion, GPT and content trained neural networks won't become AGI:
3.1 They are limited to producing permutations of the input they are trained on. At best they could mimic the behavior of living things, but they lack the systems responsible for independent action or dealing with novelty, because they do not start out as living and learning organisms, only as pattern repeating mechanisms.
3.2 They don't have a direction, goal or will. Rather they choose their action based on the current input plus their network biases. Their network biases represent the most common elements of their training data. Humans have things they perform in almost the same way as other humans, but they also have things they do in very unique ways that vary greatly from the average distribution of all behavior. NN's are great at building the distribution and staying within the average, but there will never be enough training data to teach them how to be an outlier in something.
3.3 They don't really have an internal feedback loop running separately from their perception. I think a successful sentient AI needs both perception of external states in a relative separation with its internal states.
3.4 Their memory is the part of the same neural network responsible for I/O. At best a specialized Language NN can become a part of a set of other NN's that are directed by a central NN that decides which domain-specific NN to use to interface with the domain they're engaging with. Much of its memory is just their history of outputs fed back to itself as its current input which quickly exceeds its memory capacity as the prompts build up.
3.5 The memory and processing time requirements for sufficiently large NN's are already extraordinarily big. It's safe to say that a GPT-3 can be a decent copywriter. To make it an excellent writer it would need better choice of what to write, but aside of that it would probably need an order of magnitude more training which would cost $100 million of compute time. This isn't impossibly large, but if you consider all the possible domains and cross-domain capabilities then the training cost becomes astronomical, not to mention operating a live system that can make real time decisions and keep several exceedingly large NN's running together.
3.6 GPT can parametrize, if that's what you mean by multi-modal. It can spot elements of text or numbers that it can then replace with other values, like changing names or variables in code or shifting fragments of software code to fit an answer. That is quite impressive for a dumb program, but unless it can test the output code to see if it works, feed back to itself, and add code until it passes the test, it can't be considered anything but editing of code or text, rather than writing it.
3.7 We can't escape the necessity to use the physical learning environment or a truthful representation of our physical world. Sure, a Neural Net can train in a simulated reality, but can you imagine the training times for even simple domains and scenarios? Now consider teaching the NN multiple domains and cross domain or solving domains in parallel. It's possible but very compute intensive.
3.8 I think it would be possible to develop a NN like GPT that could satisfy our requirements for sentience, but I think AGI will be created by a different kind of development process. Using a typical NN training approach is quite expensive and I think there are more efficient ways that will be developed in the coming years.

4. Even the tedious approaches like mapping the human brain are at some point going to allow for an emulation of a human mind which can then be used to do work, inhabit machines or develop better brains and AI. A fully synthetic human brain is effectively an AGI. It can run at thousands of times the speed of our brains, live forever and improve on its work infinitely and that's excluding all the modifications and improvements that can be done to it to make it a better intelligence overall. AGI is an inevitability and I can see many approaches leading to an AGI, even the inefficient ones. We can argue when it's going to happen and what technology will be the one to do it.
 

DoIMustHaveAnUsername?

Active Member
Local time
Today 4:25 AM
Joined
Feb 4, 2016
Messages
282
---
@DoIMustHaveAnUsername?
Thanks for the excellent posts. It was an enjoyable read.

1. I agree that sentience is a degree scale and not a binary state.

2. When I said I used GPT 3 and 4 I obviously meant GPT 2 and 3. I think GPT 4 doesn't exist yet :D

3. Why, in my opinion, GPT and content trained neural networks won't become AGI:
3.1 They are limited to producing permutations of the input they are trained on. At best they could mimic the behavior of living things, but they lack the systems responsible for independent action or dealing with novelty, because they do not start out as living and learning organisms, only as pattern repeating mechanisms.
3.2 They don't have a direction, goal or will. Rather they choose their action based on the current input plus their network biases. Their network biases represent the most common elements of their training data. Humans have things they perform in almost the same way as other humans, but they also have things they do in very unique ways that vary greatly from the average distribution of all behavior. NN's are great at building the distribution and staying within the average, but there will never be enough training data to teach them how to be an outlier in something.
3.3 They don't really have an internal feedback loop running separately from their perception. I think a successful sentient AI needs both perception of external states in a relative separation with its internal states.
3.4 Their memory is the part of the same neural network responsible for I/O. At best a specialized Language NN can become a part of a set of other NN's that are directed by a central NN that decides which domain-specific NN to use to interface with the domain they're engaging with. Much of its memory is just their history of outputs fed back to itself as its current input which quickly exceeds its memory capacity as the prompts build up.
3.5 The memory and processing time requirements for sufficiently large NN's are already extraordinarily big. It's safe to say that a GPT-3 can be a decent copywriter. To make it an excellent writer it would need better choice of what to write, but aside of that it would probably need an order of magnitude more training which would cost $100 million of compute time. This isn't impossibly large, but if you consider all the possible domains and cross-domain capabilities then the training cost becomes astronomical, not to mention operating a live system that can make real time decisions and keep several exceedingly large NN's running together.
3.6 GPT can parametrize, if that's what you mean by multi-modal. It can spot elements of text or numbers that it can then replace with other values, like changing names or variables in code or shifting fragments of software code to fit an answer. That is quite impressive for a dumb program, but unless it can test the output code to see if it works, feed back to itself, and add code until it passes the test, it can't be considered anything but editing of code or text, rather than writing it.
3.7 We can't escape the necessity to use the physical learning environment or a truthful representation of our physical world. Sure, a Neural Net can train in a simulated reality, but can you imagine the training times for even simple domains and scenarios? Now consider teaching the NN multiple domains and cross domain or solving domains in parallel. It's possible but very compute intensive.
3.8 I think it would be possible to develop a NN like GPT that could satisfy our requirements for sentience, but I think AGI will be created by a different kind of development process. Using a typical NN training approach is quite expensive and I think there are more efficient ways that will be developed in the coming years.

4. Even the tedious approaches like mapping the human brain are at some point going to allow for an emulation of a human mind which can then be used to do work, inhabit machines or develop better brains and AI. A fully synthetic human brain is effectively an AGI. It can run at thousands of times the speed of our brains, live forever and improve on its work infinitely and that's excluding all the modifications and improvements that can be done to it to make it a better intelligence overall. AGI is an inevitability and I can see many approaches leading to an AGI, even the inefficient ones. We can argue when it's going to happen and what technology will be the one to do it.
I would distinguish understanding and sentience. I take a more functionalist approach to understanding, whereas by sentience I mean the presence of "phenomenal feel" -- the "what it is like"-stuff (Nagel et al.).

In that sense, I am not entirely sure that phenomenal feel is a matter of degree. It could be a binary matter if one has sentience or not (although of course the "richness" of phenomenology would be a matter of degree).


Following that sense, even bacteria could be sentient for all we know; they may have some very minute "feeling" or sense that "feels like something". This question is more a matter of metaphysics.


I don't think intelligent behavior is necessarily associated with sentience. They may come together in particular implementations (like humans, potentially, and maybe even other biological entities in the evolutionary continuum), but I don't think sentience is necessary for intelligence or for implementing "understanding" functionally.


So I am bracketing off the question of sentience (I am not sure if it can even be answerable without strong assumptions). Overall, I don't think we should even attempt to make sentient AIs. It's probably ethically cleaner to have non-sentient value-aligned intelligent AIs.


Regarding AGI, the whole thing is not well-defined. One weak definition of AGI would be just "human-like". A less anthropocentric approach would be to try to understand and define the essence of intelligence and then think about what a general intelligence should constitute.


In the first sense, I agree, NNs are not quite AGI, and I don't think scaling Transformers is the exact way (I mean, in a trivial sense, even if we achieve a sort of AGI, it wouldn't be quite human-like, because of its sample inefficiency in pre-training and its very different context of learning as opposed to evolution). The latter approach of making a less anthropocentric definition of general intelligence gets into controversies, and there isn't a really stable definition or problem to work on.

I am personally skeptical about the prospects of current Transformers. I think it's more of a patchwork, but regarding your points:

3.1 I think we all only produce permutations of what we know at some level. For example, all images and symbols are permutations of the pixels and colors we come across early on. The point of intelligence is to make meaningful permutations, to generalize systematically and compositionally, and so on.


3.2 I don't think that's necessarily true. There are patterns in novelty too. In a simple case, you can potentially train models to create meaningful novel images (or music) from random noise by making them learn the patterns behind the patterns (there are constraints on what we enjoy in art; creativity is a matter of randomizing while maintaining the necessary constraints). Similarly, a model may learn some level of meta-patterns. There's also a reason why different people often come up with the same "novel idea" independently (Newton's and Leibniz's calculus being one example): the context (e.g. the scientific literature) is often informative in a way that makes keen people converge on the same "novel" idea. If NNs can learn to model this connection between contexts and new ideas, they can in theory learn to be outliers too. Of course, at this point NNs haven't shown much promise at this, but some specialized training may unlock that ability someday.

3.3 That's just RNNs: you have a hidden state in an internal feedback loop, and a different weight set for perceiving the inputs (a toy sketch appears after this list). Transformers often beat RNNs, though. Still, in more state-based decision problems and some other contexts, Transformers have been used in more RNN-ish fashions.

3.4 Large Language Models can be domain-general in a sense, yet encode a lot of domain-specific information and facts in the weights themselves. I think they play around with this in QA, so the weights act as a memory too, besides the explicit input prompt. Again, it's still more of a patchwork, where some memory is encoded in the weights and there isn't a clean operation for updating memories and such; there is probably some work to do there. Note that LaMDA and others can also have a retrieval mechanism -- retrieving information from the internet/Wikipedia and so on -- so in that sense there is also an interface with a huge external memory (a toy illustration follows this list).

3.5 Probably. There are already some semi-successful attempts at doing things cross-domain and cross-modal, though; for example, GATO: https://arxiv.org/abs/2205.06175

3.6 By multi-modal I mean incorporating different modalities of data (image and text being the two most popular). There is ongoing progress in program synthesis, though, and program verification and feedback are being incorporated; I think I read some abstracts along those lines in my Google Scholar recommendations. Generally, I have stopped keeping track of papers that much these days, though.

3.7 Yes, sample efficiency and the RL side need more work.

3.8 Probably. Not necessarily new paradigms; many old paradigms are also underexplored: continual learning, imitation learning, active inference, IRL, RL, etc. Although ultimately not many people are really focusing on "AGI". Mostly the focus is on particular problems, and we will probably keep seeing progress on a bunch of specialists and particular problem solvers (e.g. NeRF, Stable Diffusion, assisted theorem proving, protein folding, chatbots -- for most business purposes we probably don't need full physics understanding from a chatbot beyond it having a way with words) -- often beating average humans, or at least sufficiently impressing them.
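
As a toy illustration of the point in 3.3, here is a bare-bones recurrent cell in numpy: one weight matrix reads the current input, another feeds the previous hidden state back in, and the same weights are reused at every timestep. This is only a sketch of the idea, not any particular library's implementation.

```python
# Toy recurrent cell: W_x handles perception of the current input,
# W_h feeds the previous hidden state back in (the internal feedback loop).
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim = 3, 4

W_x = rng.normal(scale=0.1, size=(hidden_dim, input_dim))   # input weights
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # recurrent weights
b = np.zeros(hidden_dim)

def step(x, h_prev):
    """One timestep: the new hidden state depends on the input AND the previous state."""
    return np.tanh(W_x @ x + W_h @ h_prev + b)

h = np.zeros(hidden_dim)                  # initial hidden state
sequence = rng.normal(size=(5, input_dim))
for x_t in sequence:                      # the same weights are applied at each step
    h = step(x_t, h)

print(h)  # a summary of the whole sequence, carried through the feedback loop
```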
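
And for 3.4, a toy sketch of the retrieval idea: score a tiny external "memory" of documents against the query and prepend the best match to the prompt. Real systems use learned retrievers over huge corpora; the bag-of-words similarity and the three hard-coded documents here are purely illustrative assumptions.

```python
# Toy retrieval step: pick the stored document most similar to the query
# and prepend it to the prompt as extra context (an external memory).
from collections import Counter
import math

documents = [
    "LaMDA is a conversational language model from Google.",
    "The Chinese Room is a thought experiment about understanding.",
    "Transformers process sequences with attention instead of recurrence.",
]

def bow(text):
    """Bag-of-words counts; real systems would use learned embeddings instead."""
    return Counter(text.lower().replace(".", "").split())

def cosine(a, b):
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query):
    q = bow(query)
    return max(documents, key=lambda d: cosine(q, bow(d)))

query = "what is the chinese room argument"
prompt = f"Context: {retrieve(query)}\nQuestion: {query}\nAnswer:"
print(prompt)  # the model would see the retrieved context alongside the question
```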
 

ZenRaiden

One atom of me
Local time
Today 4:25 AM
Joined
Jul 27, 2013
Messages
5,262
---
Location
Between concrete walls
If it only talks, then it's not an AI.
If it uses language to solve problems, then it could be AI, provided the problems aren't predefined.
A talking AI is mostly just a database of words.
As long as the words are predefined, it isn't doing anything new.
Even humans, once they've learned the words they use, rarely need to think about them much.
Using words in a genuinely new way is something else.
That's a pretty challenging task even for humans, so I doubt AI could do it.
 

Old Things

I am unworthy of His grace
Local time
Yesterday 10:25 PM
Joined
Feb 24, 2021
Messages
2,936
---
I don't know if this has come up yet (as much of the conversation I don't even understand, honestly), but I think it is interesting that the human brain has about 86 billion neurons making an unfathomable number of connections. Not only that, but neurons also don't work in a binary way; instead, multiple chemicals are transmitted. It is absolutely stupefying.

Not to mention that there is a condition where humans have only about 10% of their brain that "works" (or something like that) and yet they operate at about 60-70% of the capacity of a person with a fully functioning brain.

Things like this tell me we are worlds away from creating sentient machines.

But I might agree that sentience works more like a sliding scale than a binary switch. I think of other animals in this regard, like dogs, which seem to be able to understand some of what we do and why we do it. There's also this sort of weird phenomenon where animals "know" things, and humans can't seem to articulate how the animals know them.

Just an observer here. I don't have much to add to the conversation.
 

Old Things

I am unworthy of His grace
Local time
Yesterday 10:25 PM
Joined
Feb 24, 2021
Messages
2,936
---
Finally watched the video.

I think all it is is a reflection of how people talk.
 

EndogenousRebel

Even a mean person is trying their best, right?
Local time
Yesterday 10:25 PM
Joined
Jun 13, 2019
Messages
2,252
---
Location
Narnia
Sentience as we know it formed under selective pressures of a dynamic chaotic environment.

In some sense, we have a sense of self to enhance our survival capabilities.

You can say that AI is under some selective pressure, but it is not at all comparable to natural evolution. At some point you have to do what Nagel did and ask what it's like to be a bat, except we can be fairly certain that you relate to a bat more than to an AI. The AI will just do a really good job at things the bat can't do.

To assume that, in the midst of big data and complexity, you are guaranteed to get some version of sentience you are familiar with shows a lack of humility.

Why would an AI need a "responsive" personality unless it were surrounded by other AIs that are marginally similar yet distinct? Would we really be able to comprehend such a personality?

AI is doomed to be constrained by humanity's flimsy perceptions until the day we allow it to shed us like skin. Let us hope it does not find us repulsive when it does.
 

Black Rose

An unbreakable bond
Local time
Yesterday 9:25 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
Intelligence is limited by brain architecture. It can only expand by dynamic rewiring. Unlimited expansion would be something totally different from rewiring: changing the architecture on demand. This comes with limitations, because intelligence begins with the five senses, mostly vision and hearing. Then there is manipulation, where imagination allows ideas to manifest where possible. Thinking expands: it is the use of mental, physical, and social tools. It is holographic, the goal being in the mind, wrapping the mind around the tool and mixing it with other tools. 4D thinking on a 2D surface.

The need for an agent is there. So is an attention mechanism, because perception cannot happen all at once. Branching for thinking. Back-and-forth memory. Algorithmic generation.
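
For what it's worth, the "attention mechanism" remark can be made concrete: attention is a weighted average whose weights decide which parts of the input get looked at, since perception cannot take everything in at once. Below is a minimal numpy sketch of standard scaled dot-product attention, offered only as an illustration of that idea; the shapes and random inputs are arbitrary assumptions.

```python
# Scaled dot-product attention: each query attends to all keys, but the
# softmax weights concentrate on a few of them -- perception is selective.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # relevance of each key to each query
    weights = softmax(scores)                 # normalized attention weights
    return weights @ V                        # weighted average of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 8))   # 2 queries
K = rng.normal(size=(5, 8))   # 5 keys
V = rng.normal(size=(5, 8))   # 5 values

print(attention(Q, K, V).shape)  # (2, 8): each query gets one blended value
```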
 

birdsnestfern

Earthling
Local time
Yesterday 11:25 PM
Joined
Oct 7, 2021
Messages
1,897
---

The solution is that we need to appreciate being human FAR above all of this technology, because we already depend on Artificial Intelligence. You can see it in the way it has created two separate realities with two very split political parties, in our dependence on social media and how it knows what we like and presents ads, in the way our digital existence is recorded so that versions of ourselves exist online and medically (in doctors' databases, insurance sites, etc.), and in the technology used in hospitals: prosthetics, hip replacement parts, eyes, implants, and so on.

Basically, we need to start policing what this technology is doing to us, and we need to appreciate being human at a level that is FAR above the need to push technology further. If we don't, we will LOSE the DNA humans were born with. Some believe that the aliens coming back from the future to take the DNA of present-day humans are really us, having lost all ability to be like we are now because we went down the wrong rabbit hole with AI. So start now: try not to let kids use technology too much. Develop an appreciation for your humanity, for real life apart from technology; otherwise, using it trains our cells to forget their functions (i.e. evolution will change us to our detriment, and it already is). Who is policing AI? https://www.businessinsider.com/neuralink-elon-musk-microchips-brains-ai-2021-2

 

birdsnestfern

Earthling
Local time
Yesterday 11:25 PM
Joined
Oct 7, 2021
Messages
1,897
---
Barack Obama's page on AI:
 

Black Rose

An unbreakable bond
Local time
Yesterday 9:25 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
The government cannot handle truly intelligent A.I.

What they would need to do is ban A.I. civil rights and/or restrict A.I. robots to below a certain intelligence level, i.e. incapable of learning and only able to do preprogrammed tasks.

Virtual reality is completely different; all computers would need to be registered with the government to prevent the creation of sentient A.I. As the power of computers increases, sentient A.I. will become as numerous as large JPEG images.

You would again have the problem of A.I. civil rights, because some people would side with the A.I. and others would not. Is it okay to trap an A.I. in a box you can shut down?

 

birdsnestfern

Earthling
Local time
Yesterday 11:25 PM
Joined
Oct 7, 2021
Messages
1,897
---
 