
Chinese Room

Reluctantly

Resident disMember
Local time
Yesterday 9:16 PM
Joined
Mar 14, 2010
Messages
3,135
https://en.wikipedia.org/wiki/Chinese_room

Couldn't find this on here. But I came across it and want to see what people think.

Is a computer that translates consciousness from a set of rules, procedures, logic, and/or directives conscious? Is it any different from a human being?



My reasoning is that any conscious being is conscious by virtue of having impulses and simultaneously having some awareness of them and a hyper-rationality to channel or direct those impulses.

A computer that has no impulses does not have real consciousness, even if it passes the Turing Test. It simply translates consciousness as best it can but, without its own impetus, does not actually feel conscious.
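The Chinese Room scenario can be made concrete with a toy sketch: a program that produces fluent-looking replies purely by matching symbols against a rule book, with no representation anywhere of what the symbols mean. The rules and phrases below are invented for illustration only.

```python
# A toy "Chinese Room": replies come from pure symbol lookup.
# Nothing in this program represents what any phrase *means*.
RULE_BOOK = {
    "ni hao": "ni hao ma?",
    "ni hao ma?": "wo hen hao",
    "xie xie": "bu ke qi",
}

def room_reply(symbols: str) -> str:
    """Look the input up in the rule book; the 'operator' understands nothing."""
    return RULE_BOOK.get(symbols, "qing zai shuo yi bian")  # "please say it again"

print(room_reply("ni hao"))      # ni hao ma?
print(room_reply("gibberish"))   # qing zai shuo yi bian
```

To an outside observer exchanging notes, the lookup is indistinguishable from a (very limited) conversation partner, which is exactly the intuition the thought experiment trades on.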
 

Black Rose

An unbreakable bond
Local time
Today 1:16 AM
Joined
Apr 4, 2010
Messages
10,783
Location
with mama
(A1) "Programs are formal (syntactic)."
A program uses syntax to manipulate symbols and pays no attention to the semantics of the symbols. It knows where to put the symbols and how to move them around, but it doesn't know what they stand for or what they mean. For the program, the symbols are just physical objects like any others.
(A2) "Minds have mental contents (semantics)."
Unlike the symbols used by a program, our thoughts have meaning: they represent things and we know what it is they represent.

What Searle is saying is that semantics is a spiritual force, much like pixie dust and magic beans. First of all, the brain does not have semantics, i.e. magic pixie dust. What it has is spiking signals. Spiking signals are no different from electrical signals on a chip. On a chip, each box is segregated as a separate box from all other boxes; each brain cell is, in this regard, also a separate box from all other brain cells. Connections run between the boxes on the chip, and connections run between brain cells. The reason a thought represents things is that the connections direct the signals in the brain, not because of magic fairy dust semantics. A brain cell is just as compartmentalized as a transistor. A chip can direct signals to represent things because directing signals is the important part.

Searle is a magical thinker because he thinks he can prove computers cannot think by calling computer signals "symbols" and calling neuron signals "semantics". A signal is a signal, and Searle is a complete buffoon for this idea that calling neuron signals "semantics" magically transforms them into something different from computer signals. His argument for semantics is no different from the Catholic belief in transubstantiation: he wants us to believe that it is a miracle that signals in the brain produce semantics, while signals in computers are just "symbols" with no Jesus juice to make them alive. I believe that calling one set of signals "semantics" and another set of signals "symbols" is a word game. His argument is a completely fabricated fallacy. As long as signals can be directed in the right way, in a computer or a brain, there will be a mind. Searle is the kind of person that plays word games. Signals are signals. Appeals to pixie dust make you a degenerate philosopher.
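The "connections direct the signals" picture can be sketched with minimal thresholded units, the same signal-routing scheme whether the boxes are transistors or neurons. The weights and wiring below are invented illustration values, not a model of any real circuit.

```python
# Minimal McCulloch-Pitts-style units: a signal either fires (1) or not (0),
# and the connections (weights) decide where signals are directed next.
def unit(inputs, weights, threshold):
    """Fire if the weighted sum of incoming signals reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Three units wired into a tiny network computing XOR of two input signals:
# the behavior comes entirely from how the connections route the signals.
def xor_net(a, b):
    or_out = unit([a, b], [1, 1], 1)    # fires if either input fires
    and_out = unit([a, b], [1, 1], 2)   # fires only if both fire
    return unit([or_out, and_out], [1, -1], 1)  # OR but not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Whether the units are cells or transistors, all that differs here is the substrate; the routing of the signals does the representational work, which is the point being argued above.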
 

~~~

Active Member
Local time
Today 7:16 AM
Joined
Mar 21, 2010
Messages
365
Maybe the Jesus juice cools the machine down so that it can function at a higher level.
 

Ex-User (14663)

Prolific Member
Local time
Today 7:16 AM
Joined
Jun 7, 2017
Messages
2,939
I hold pretty much exactly this opinion:
Colin McGinn argues that the Chinese room provides strong evidence that the hard problem of consciousness is fundamentally insoluble. The argument, to be clear, is not about whether a machine can be conscious, but about whether it (or anything else for that matter) can be shown to be conscious. It is plain that any other method of probing the occupant of a Chinese room has the same difficulties in principle as exchanging questions and answers in Chinese. It is simply not possible to divine whether a conscious agency or some clever simulation inhabits the room.
 

Reluctantly

Resident disMember
Local time
Yesterday 9:16 PM
Joined
Mar 14, 2010
Messages
3,135
What Searle is saying is that semantics is a spiritual force, much like pixie dust and magic beans. First of all, the brain does not have semantics, i.e. magic pixie dust. What it has is spiking signals. Spiking signals are no different from electrical signals on a chip. On a chip, each box is segregated as a separate box from all other boxes; each brain cell is, in this regard, also a separate box from all other brain cells. Connections run between the boxes on the chip, and connections run between brain cells. The reason a thought represents things is that the connections direct the signals in the brain, not because of magic fairy dust semantics. A brain cell is just as compartmentalized as a transistor. A chip can direct signals to represent things because directing signals is the important part. Searle is a magical thinker because he thinks he can prove computers cannot think by calling computer signals "symbols" and calling neuron signals "semantics". A signal is a signal, and Searle is a complete buffoon for this idea that calling neuron signals "semantics" magically transforms them into something different from computer signals. His argument for semantics is no different from the Catholic belief in transubstantiation: he wants us to believe that it is a miracle that signals in the brain produce semantics, while signals in computers are just "symbols" with no Jesus juice to make them alive. I believe that calling one set of signals "semantics" and another set of signals "symbols" is a word game. His argument is a completely fabricated fallacy. As long as signals can be directed in the right way, in a computer or a brain, there will be a mind. Searle is the kind of person that plays word games. Signals are signals. Appeals to pixie dust make you a degenerate philosopher.

Well, to be fair, I think he is on to something, at least in terms of meta-understanding. A translator has no meta-understanding; it simply translates. A human mind might be a kind of computer with inputs and outputs akin to an electrical circuit, but it does have a lot of associative processing that puts a kind of qualia behind our representation or understanding of things.

I hold pretty much exactly this opinion:

Isn't that like saying there is no difference then?
 

Ex-User (14663)

Prolific Member
Local time
Today 7:16 AM
Joined
Jun 7, 2017
Messages
2,939
Isn't that like saying there is no difference then?
It seems there would be a difference between a human understanding Chinese and someone (or something) carrying out a procedure which emulates understanding Chinese, but it also seems that before answering whether there is a difference, one has to know:
1) How exactly do you distinguish between those two as an outside observer?
2) What does it mean to "understand" something, and what exactly is consciousness? Consciousness itself doesn't seem to be a clearly defined concept.
 

Ex-User (14663)

Prolific Member
Local time
Today 7:16 AM
Joined
Jun 7, 2017
Messages
2,939
What Searle is saying is that semantics is a spiritual force, much like pixie dust and magic beans. First of all, the brain does not have semantics, i.e. magic pixie dust. What it has is spiking signals. Spiking signals are no different from electrical signals on a chip. On a chip, each box is segregated as a separate box from all other boxes; each brain cell is, in this regard, also a separate box from all other brain cells. Connections run between the boxes on the chip, and connections run between brain cells. The reason a thought represents things is that the connections direct the signals in the brain, not because of magic fairy dust semantics. A brain cell is just as compartmentalized as a transistor. A chip can direct signals to represent things because directing signals is the important part. Searle is a magical thinker because he thinks he can prove computers cannot think by calling computer signals "symbols" and calling neuron signals "semantics". A signal is a signal, and Searle is a complete buffoon for this idea that calling neuron signals "semantics" magically transforms them into something different from computer signals. His argument for semantics is no different from the Catholic belief in transubstantiation: he wants us to believe that it is a miracle that signals in the brain produce semantics, while signals in computers are just "symbols" with no Jesus juice to make them alive. I believe that calling one set of signals "semantics" and another set of signals "symbols" is a word game. His argument is a completely fabricated fallacy. As long as signals can be directed in the right way, in a computer or a brain, there will be a mind. Searle is the kind of person that plays word games. Signals are signals. Appeals to pixie dust make you a degenerate philosopher.
To me it seems that this type of reasoning amounts to saying: since we don't know of any particular structure in the brain that produces consciousness, and the only way we have been able to model the brain is via a network performing computations, it must be that networks performing computations produce consciousness. This is obviously erroneous reasoning, and moreover, there has never been an artificial computational network which was conscious – thus there seems to be no reason to believe that a computational network is all you need to produce consciousness.
 

Black Rose

An unbreakable bond
Local time
Today 1:16 AM
Joined
Apr 4, 2010
Messages
10,783
Location
with mama
To me it seems that this type of reasoning amounts to saying: since we don't know of any particular structure in the brain that produces consciousness, and the only way we have been able to model the brain is via a network performing computations, it must be that networks performing computations produce consciousness. This is obviously erroneous reasoning, and moreover, there has never been an artificial computational network which was conscious – thus there seems to be no reason to believe that a computational network is all you need to produce consciousness.

That is not a reason to "assume" a network is all that is needed, but there is more reason to think a network is sufficient, because we can see that thinking entities (humans) have networks rather than not. What is highly speculative is the idea that some extra ingredient other than a network is needed to produce a mind. (Quantum physics?)
 

Ex-User (14663)

Prolific Member
Local time
Today 7:16 AM
Joined
Jun 7, 2017
Messages
2,939
That is not a reason to "assume" a network is all that is needed, but there is more reason to think a network is sufficient, because we can see that thinking entities (humans) have networks rather than not. What is highly speculative is the idea that some extra ingredient other than a network is needed to produce a mind. (Quantum physics?)
I don't see how it is speculative at all. After all, as mentioned, there has never been a conscious artificial network. And as we discussed in some other thread, most of the brain is not made of neurons, but glial cells, whose function is not really understood.
 

TheManBeyond

Banned
Local time
Today 7:16 AM
Joined
Apr 19, 2014
Messages
2,850
Location
Objects in the mirror might look closer than they
basically this guy says computers lack intuition (as in, that void i see in my head which makes me feel self-aware and so on) while humans simply lack speed.
well, big deal; this guy didn't discover America.
now the question is: what do we need to survive? and what can't we discard as we evolve?
 

Black Rose

An unbreakable bond
Local time
Today 1:16 AM
Joined
Apr 4, 2010
Messages
10,783
Location
with mama
I don't see how it is speculative at all. After all, as mentioned, there has never been a conscious artificial network. And as we discussed in some other thread, most of the brain is not made of neurons, but glial cells, whose function is not really understood.

What you are saying is that something other than brain matter is required for a mind to exist. I have never heard any proposal of what the secret ingredient is that gives the brain semantics. This is not about whether an artificial network has been made that is conscious; this is about moving the goalposts and not explaining the secret ingredient. What is the "something extra", apart from brain matter, that Searle claims is required for a mind to exist? He is completely silent on what that secret ingredient is, other than quantum physics (but he fails to mention/realize that computer chips obey quantum physics just as the brain does).

Basically, we cannot say computers lack the secret ingredient unless we know what it is and the relation it has to artificial networks made of matter that prevents computers from having what the brain has (semantics).

If the secret ingredient does not exist, the objection to artificial networks having semantics goes away. (You need a new objection.)
 

redbaron

irony based lifeform
Local time
Today 6:16 PM
Joined
Jun 10, 2012
Messages
7,253
Location
69S 69E
Animekitty said:
Basically, we cannot say computers lack the secret ingredient unless we know what it is

wrong
 

Black Rose

An unbreakable bond
Local time
Today 1:16 AM
Joined
Apr 4, 2010
Messages
10,783
Location
with mama

it's wrong because why?

how do you know the secret ingredient exists?

edit:

I specifically made that statement to show that if the brain indeed has a secret ingredient that computers lack, then we need evidence that the secret ingredient exists in the first place before making definitive statements on why a computer would lack it. The nature of this secret ingredient (if it exists) is currently unknown; therefore we do not know if compatibility is possible between it and a computer. My statement is not a standalone argument; it was made under the premise that understanding the nature of the secret ingredient is necessary for refuting Searle's argument, not that it exists or that I believe it exists.
 

Ex-User (14663)

Prolific Member
Local time
Today 7:16 AM
Joined
Jun 7, 2017
Messages
2,939
I don't understand your reasoning, AK. How can you assume a priori that a computational model is sufficient, and say: unless you find some specific component of consciousness which is not included in the model, the model definitely produces consciousness?
 

Black Rose

An unbreakable bond
Local time
Today 1:16 AM
Joined
Apr 4, 2010
Messages
10,783
Location
with mama
I don't understand your reasoning, AK. How can you assume a priori that a computational model is sufficient, and say: unless you find some specific component of consciousness which is not included in the model, the model definitely produces consciousness?

I never said that it definitively does. My position is that if the difference between a brain and a computer is non-existent, then they are equivalent. If a can of beans weighs ten ounces and a can of corn weighs ten ounces, then I will believe gravity affects both in an equivalent way; this is not a priori. (Both brains and computers follow the laws of physics.)

In regard to there currently being no computer that acts conscious: the fact that cans of corn do not exist yet gives us no good reason to think a can of beans has a magical property called semantics that lets gravity (consciousness) interact with it, whereas the can of corn that does not exist yet will never have semantics and so can never interact with gravity (consciousness). In the future, cans of corn may act conscious, and Searle's argument that they lack semantics will fall short.

This is the philosophical zombie position you hold, where you must be agnostic about both humans and computers, but for me the secret-ingredient argument holds no water. I think both cans of corn and cans of beans are affected by gravity the same. Why would corn lack semantics but beans not lack them? I am using Occam's razor to say no secret ingredient exists. Therefore I conclude that conscious computers are possible. Why would corn and beans not be affected by gravity (consciousness) the same?
 

~~~

Active Member
Local time
Today 7:16 AM
Joined
Mar 21, 2010
Messages
365
What about if the Chinese Room was programmed with consciousness (i.e. to be aware of and responsive to its surroundings, or even stronger AI)? (Or, in other words, a significant variation of the program may result in understanding.) That could be something akin to the Robot Reply with a bit of Copeland thrown in. Kurzweil's 2002 argument isn't bad either.
 

onesteptwostep

Junior Hegelian
Local time
Today 4:16 PM
Joined
Dec 7, 2014
Messages
4,253
Eh, before even saying that something is conscious or not conscious, we can't exactly define what consciousness is, so the question in itself isn't answerable in absolute, objective terms.

But if we were to define what consciousness is, I think the definition would start with or contain 'organic', because consciousness, I think, is made by substances within our natural world, and I mean that in both a natural, biological sense and in a metaphysical sense. I think that ultimately mind is matter, but the event in which matter became mind is something we'll never be able to recreate.

I also think the problem with calling AI 'conscious' is not so much that it's a reflection of the creator, but that it has no innate purpose or yearning for survival. I think Reluctantly's 'impulse' argument is sufficient. I think some things are simply not able to be expressed in binary terms; in other words, I don't think some questions have straightforward answers. The morality in the choices we make cannot, I think, be programmed into AI as we know it.
 

TheManBeyond

Banned
Local time
Today 7:16 AM
Joined
Apr 19, 2014
Messages
2,850
Location
Objects in the mirror might look closer than they
computers know no past. we know even when we are not aware of it, its memory is always pushing us to bleed.
computers are a simulation made by shitty gods, through the rules of what we call logic, a way to dwell in reality, safeness.
they have no elements, they lack the chemistry but we... we are part time lovers.
how could they be aware then?
try to give ayahuasca to a computer to see what i mean.

https://www.youtube.com/watch?v=Ll6LLGePYwM
 

Black Rose

An unbreakable bond
Local time
Today 1:16 AM
Joined
Apr 4, 2010
Messages
10,783
Location
with mama
I also think the problem with calling AI 'conscious' is not so much that it's a reflection of the creator, but that it has no innate purpose or yearning for survival. I think Reluctantly's 'impulse' argument is sufficient. I think some things are simply not able to be expressed in binary terms; in other words, I don't think some questions have straightforward answers. The morality in the choices we make cannot, I think, be programmed into AI as we know it.

We know how the limbic system is wired up. Small models of the limbic system have been made in software.
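For context, the simplest ingredient of such software models is a homeostatic feedback loop, the kind of set-point regulation the hypothalamus performs for body temperature. This is a hedged toy sketch with arbitrary constants, not any published limbic-system model.

```python
# Toy homeostatic regulator: negative feedback toward a set point,
# the bare-bones pattern behind models of hypothalamic temperature control.
SET_POINT = 37.0   # target "body temperature" (arbitrary units)
GAIN = 0.5         # how strongly the regulator corrects deviations

def regulate(temp, steps=20):
    """Drive `temp` back toward SET_POINT by repeated corrective responses."""
    history = []
    for _ in range(steps):
        error = SET_POINT - temp   # deviation sensed by the regulator
        temp += GAIN * error       # corrective "effector" response
        history.append(temp)
    return history

trace = regulate(34.0)
print(round(trace[-1], 3))  # converges to 37.0
```

The error shrinks by half each step, so the system settles at the set point regardless of where it starts; real models add many interacting loops, but the feedback principle is the same.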
 

Ex-User (14663)

Prolific Member
Local time
Today 7:16 AM
Joined
Jun 7, 2017
Messages
2,939
I never said that it definitively does. My position is that if the difference between a brain and a computer is non-existent, then they are equivalent. If a can of beans weighs ten ounces and a can of corn weighs ten ounces, then I will believe gravity affects both in an equivalent way; this is not a priori. (Both brains and computers follow the laws of physics.) In regard to there currently being no computer that acts conscious: the fact that cans of corn do not exist yet gives us no good reason to think a can of beans has a magical property called semantics that lets gravity (consciousness) interact with it, whereas the can of corn that does not exist yet will never have semantics and so can never interact with gravity (consciousness). In the future, cans of corn may act conscious, and Searle's argument that they lack semantics will fall short. This is the philosophical zombie position you hold, where you must be agnostic about both humans and computers, but for me the secret-ingredient argument holds no water. I think both cans of corn and cans of beans are affected by gravity the same. Why would corn lack semantics but beans not lack them? I am using Occam's razor to say no secret ingredient exists. Therefore I conclude that conscious computers are possible. Why would corn and beans not be affected by gravity (consciousness) the same?
You're using the exact same logic again, AK.

We have computational models which mimic certain aspects of the brain. That is true. But from there, you make the leap of saying: since we have something that kinda looks like a brain, you need to tell me exactly which ingredient it is missing, otherwise this thing is exactly like a brain.

This is like making any sort of robot that behaves similarly to a human and saying: with time this thing will become conscious. Why? Because we don't know how consciousness arises...

Don't you see the absurdity and the epistemic arrogance of this reasoning?
 

Black Rose

An unbreakable bond
Local time
Today 1:16 AM
Joined
Apr 4, 2010
Messages
10,783
Location
with mama
You're using the exact same logic again, AK.

We have computational models which mimic certain aspects of the brain. That is true. But from there, you make the leap of saying: since we have something that kinda looks like a brain, you need to tell me exactly which ingredient it is missing, otherwise this thing is exactly like a brain.

That is nowhere near what I am saying.
If a robot brain acts like a human brain, there is reason to doubt the secret ingredient's existence.
What you are saying is that I must prove a negative.
Your argument is that because we cannot disprove the secret ingredient, it therefore exists.
(You cannot disprove God's existence; therefore, God exists.)
That position, if it is your position, is fucking horseshit.
You are expecting me to prove a negative.

This is like making any sort of robot that behaves similarly to a human and saying: with time this thing will become conscious. Why? Because we don't know how consciousness arises...

Don't you see the absurdity and the epistemic arrogance of this reasoning?

Why should we doubt a robot's consciousness, if it acts conscious, just because hypothetically it may lack a secret ingredient that is speculated only humans have?

(We could doubt that black people have consciousness because hypothetically they may lack a secret ingredient that is speculated only whites have.)

Fucking retarded. (God damn it, what the fuck are we talking about when we say secret ingredients exist? What the fuck is it supposed to do, God fucking damn it?)
 

Ex-User (14663)

Prolific Member
Local time
Today 7:16 AM
Joined
Jun 7, 2017
Messages
2,939
That is nowhere near what I am saying.
If a robot brain acts like a human brain, there is reason to doubt the secret ingredient's existence.
What you are saying is that I must prove a negative.
Your argument is that because we cannot disprove the secret ingredient, it therefore exists.
(You cannot disprove God's existence; therefore, God exists.)
That position, if it is your position, is fucking horseshit.
You are expecting me to prove a negative.



Why should we doubt a robot's consciousness, if it acts conscious, just because hypothetically it may lack a secret ingredient that is speculated only humans have?

(We could doubt that black people have consciousness because hypothetically they may lack a secret ingredient that is speculated only whites have.)

Fucking retarded. (God damn it, what the fuck are we talking about when we say secret ingredients exist? What the fuck is it supposed to do, God fucking damn it?)

I am not saying you have to prove that computation doesn't produce consciousness. I am saying you don't really have a model of consciousness at all. All your argument amounts to is that an algorithm will somehow produce consciousness if it gets complex enough. How is this different from simply saying: anything that passes the Turing test is conscious?
 

Black Rose

An unbreakable bond
Local time
Today 1:16 AM
Joined
Apr 4, 2010
Messages
10,783
Location
with mama
I am not saying you have to prove that computation doesn't produce consciousness. I am saying you don't really have a model of consciousness at all. All your argument amounts to is that an algorithm will somehow produce consciousness if it gets complex enough. How is this different from simply saying: anything that passes the Turing test is conscious?

I can answer this by giving an example from my own development. When I was 23 I was mystified by technology; even in high school I was ignorant of most things. I was so developmentally immature that I avoided people, because I never knew how to interact with them. I was dumb most of the time, unable to speak because my mind was just blank. But now I am able to communicate, because I now know how to come up with things to say and not have a blank mind.

The biological mechanism of the brain came from the need for self-regulation. The hypothalamus regulates body temperature, keeping it at normal levels so we do not die. A regulatory mechanism determines how the network in the brain changes through the perception-action cycle. This is how the brain organizes itself. The brain is a balancing system for survival, so what it learns maintains basic functions and also models how it interacts with the world. The entire brain system has a replica of the world inside it, and of everything it can do that affects the world. The more interaction with the world a brain has, the more complex the network becomes. So, in essence, development is the complexification of a brain's ability to model and interact with the world.

Simply put, my brain has become more complex than it was when I was 23 years old. I am more self-aware of my impact on the world. That means that I myself know that I can change my environment. I am aware of my own agency more than I was at age 23.

Consciousness, in some circles, has already been modeled. It is called a strange loop, referring to self-reference: it points to itself, ego as in the "I". A computer is capable of self-reference just like a brain. The complexity of the referencing system determines the level of consciousness. A highly conscious brain has many links that point to each other exactly so as to maximize the sense of agency of the entity; it is super aware of its ability to impact the world. Computers would require a perception-action cycle, just as humans have, to become aware of their own agency. They would need to develop, because at the beginning complexity is low and self-awareness is weak. As with humans, they would need an education. A danger is that if the perception-action cycle is not precisely regulated, the A.I. could develop a mental illness or personality disorder. All this can be done in software, but it is a highly challenging endeavor.
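The strange-loop idea, in its most stripped-down form, is a system whose model of the world contains an entry for the system itself. A minimal, hypothetical sketch (the class and attribute names are invented for illustration):

```python
# Minimal self-referential agent: its world model contains an entry
# for the agent itself, so the model refers back to the modeler.
class Agent:
    def __init__(self):
        self.world_model = {"room": "empty"}
        self.world_model["self"] = self  # the loop: the model contains the modeler

    def aware_of_self(self):
        """True if the agent's own world model points back to the agent."""
        return self.world_model.get("self") is self

a = Agent()
print(a.aware_of_self())  # True
```

This obviously isn't consciousness; it only illustrates that self-reference is cheap in software, which is the premise the post builds on.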
 

Cognisant

Prolific Member
Local time
Yesterday 8:16 PM
Joined
Dec 12, 2009
Messages
10,564
Consciousness is not a component but rather an emergent property of a system. Our brains are like Chinese rooms, but rather than being given a book of instructions we have to come up with those instructions ourselves; our brains do this by what is essentially a form of statistical analysis.

Statistical analysis is the basis of pattern recognition, and once you can recognise patterns you can learn them and compare them to uncover more abstract patterns. Self-awareness is just self-recognition.
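The claim that statistical analysis underlies pattern recognition can be sketched with the simplest possible learner: count which symbol follows which, then predict by frequency. Toy data, not a serious model:

```python
# Bare-bones statistical pattern learner: count successor frequencies,
# then predict by the most frequent continuation.
from collections import Counter, defaultdict

def learn(sequence):
    """Tally how often each symbol is followed by each other symbol."""
    follows = defaultdict(Counter)
    for cur, nxt in zip(sequence, sequence[1:]):
        follows[cur][nxt] += 1
    return follows

def predict(follows, symbol):
    """Return the most frequently observed successor of `symbol`."""
    return follows[symbol].most_common(1)[0][0]

model = learn("abababac")
print(predict(model, "a"))  # 'b' (seen 3 times, versus 'c' once)
```

Everything the "room" needs here is recovered from the data itself rather than handed over as a rule book, which is the distinction the post is drawing.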

Edit: AK's post is spot-on.
 

DoIMustHaveAnUsername?

Active Member
Local time
Today 7:16 AM
Joined
Feb 4, 2016
Messages
282
https://en.wikipedia.org/wiki/Chinese_room

Couldn't find this on here. But I came across it and want to see what people think.

Is a computer that translates consciousness from a set of rules, procedures, logic, and/or directives conscious? Is it any different from a human being?



My reasoning is that any conscious being is conscious by virtue of having impulses and simultaneously having some awareness of them and a hyper-rationality to channel or direct those impulses.

A computer that has no impulses does not have real consciousness, even if it passes the Turing Test. It simply translates consciousness as best it can but, without its own impetus, does not actually feel conscious.

I don't think a computer is fundamentally different from a human mind/brain (excluding awareness or phenomenal qualities).

Our understanding of language may not be fundamentally different here from some syntax manipulations, some neural associations, and so on.

If you look at the neural level you won't find 'understanding'; at best a couple of firing neurons, which correlate to some behavioral demonstration of understanding or contemplation of something.

The problem is the appearance of 'appearances', or 'phenomena' and 'qualia'. They more or less seem to be contingent on or correlated with neural states, but are overall rather mysterious. There are some attempted explanations, but they are all controversial.

So the answer to the question of whether computers can have 'awareness' depends on the answer to the question of how man can have 'awareness'.

According to the integrated information theory (IIT) of consciousness, if at the 'hardware level' there is sufficient physical integration, there will be consciousness in a computer.

But IIT has its problems (check the IEP). IMO, simple integration is not enough. IMO, integration doesn't explain awareness in itself, but simply the integration or structure in consciousness. (Without the structure, we may not lose phenomenal reality, but we lose the sense of it, and may even think we were unconscious during the time. So dreamless sleep might not be true unconsciousness but rather the lack of a coherent structure; Leibniz would call it 'confused perceptions'. Also, there's no neuroscientific evidence for unconsciousness in sleep.) And I think for there to be some reflexive, senseful consciousness, there needs to be a meaningful kind of integration. I doubt 'phi' (in IIT) really measures all the important factors. It might be on the right path though.
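IIT's actual phi is defined over a system's full cause-effect structure and is far more involved; purely as a loose illustration of "integration" as information shared across a partition (emphatically not the real phi), here is a toy score: the mutual information between two halves of a system's observed states.

```python
# Toy "integration" score: mutual information (in bits) between the two
# halves of a system's observed joint states. Loosely IIT-flavoured only.
from collections import Counter
from math import log2

def integration(states):
    """states: list of (left, right) pairs sampled from the system."""
    joint = Counter(states)
    left = Counter(s[0] for s in states)
    right = Counter(s[1] for s in states)
    n = len(states)
    mi = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        mi += p_ab * log2(p_ab / ((left[a] / n) * (right[b] / n)))
    return mi

# Perfectly correlated halves -> 1 bit of integration; independent -> 0 bits.
print(integration([(0, 0), (1, 1)] * 50))                  # 1.0
print(integration([(0, 0), (0, 1), (1, 0), (1, 1)] * 25))  # 0.0
```

A system whose parts carry no information about each other scores zero here, which gestures at why IIT treats mere aggregates (like a bank of independent transistors) as unconscious.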
 