kantor1003
I will be trying to present an idea that just hit me. I will first have to explain what functionalism is, then machine functionalism, and, lastly, how a Turing machine operates, before I take elements of this Turing machine and (mis)use them for my own twisted ends. The idea is severely underdeveloped, and most likely highly flawed, but I just had to make a quick attempt to articulate it in order to see how it turned out, and to see whether it is something I should bother pursuing any further. Responses from others are one way of determining this.
(I might, when I have more time available, try to present it in a more systematic and thorough manner.)
Functionalism, as most of you perhaps know, treats mental states, or mental types, not as something to be identified with a particular physical constitution, but rather with the role, or function, they play in a causal system: i.e. the mental state of being in pain isn't identified with the firing of c-fibers, or some other particular physical realizer (that would be the physicalist psychoneural identity thesis), but with the function pain has; let's say as a "tissue-damage detector". Many things in biology also operate with functional descriptions. A heart is defined not by whatever physical constitution it may have, but according to its function: to circulate blood.
Machine functionalism is one of several varieties of functionalism, and it holds that the mind is nothing but a complex Turing machine. It takes some input - say, someone hitting you - and it produces some output - for example, you hitting back. Unlike behaviorism, however, it acknowledges mental states as something with causal power, something to be included in our ontology. For example, someone hitting you (f1) can lead to you feeling indignation towards whoever it was that hit you (m2), the subsequent desire to hit them back (m3), and ultimately to you hitting them (f4). To explain why you hit them, a description of the whole causal network is needed - both the mental and the physical (in this sense, machine functionalism is a holistic theory). It will not do, as behaviorists do, to explain why you hit them in strictly behavioristic terms, taking as object only the physical elements (f1 and f4) in the causal process.
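To make the contrast with behaviorism concrete, here is a toy sketch of that causal network (my own illustration, not anything from the machine-functionalist literature; all the event names are hypothetical):

```python
# Toy sketch of the machine-functionalist picture. Behaviorism would map
# the input f1 straight to the output f4; here the input is routed through
# causally efficacious mental states (m2, m3) along the way.

CAUSAL_NETWORK = {
    "f1: someone hits you":   "m2: indignation",
    "m2: indignation":        "m3: desire to hit back",
    "m3: desire to hit back": "f4: you hit them back",
}

def explain(event):
    """Trace the whole causal chain from a physical input onward."""
    chain = [event]
    while event in CAUSAL_NETWORK:
        event = CAUSAL_NETWORK[event]
        chain.append(event)
    return chain

print(" -> ".join(explain("f1: someone hits you")))
# f1: someone hits you -> m2: indignation -> m3: desire to hit back -> f4: you hit them back
```

The point of the intermediate entries is that m2 and m3 do real causal work: delete them and there is no path from f1 to f4 at all.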
Now, to get to my point, I'll first have to give a brief overview of a Turing machine. In order to do this, I will be consulting Jaegwon Kim's "Philosophy of Mind".
A Turing machine consists of four components:
1. A tape divided into "squares" and unbounded in both directions
(visualize it as horizontally aligned squares, like this: | | | | | |, where each pair of bars | | marks a square)
2. A scanner-printer ("head") positioned at one of the squares of the tape at any given time
| | | | | |
    ^
  [head]
3. A finite set of internal states (or configurations), q0, ..., qn
| | | | | |
    ^
 [head: qi]
4. A finite alphabet consisting of symbols, b1, ..., bm. One and only one symbol appears on each square, e.g. |&| |$| |8|.
The machine operates according to these rules:
A. At each time, the machine is in one of its internal states, qi, and its head is scanning a particular square on the tape.
B. What the machine does at a given time t is completely determined by its internal state at t and the symbol its head is scanning at t.
C. Depending on its internal state and the symbol being scanned, the machine does three things: (1) Its head replaces the symbol with another (possibly the same) symbol of the alphabet. (2) Its head moves one square to the right or to the left (or halts, with the computation completed). (3) The machine enters into one of its internal states (which can be the same state).
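Since rule B says the machine's behavior is completely determined by the pair (internal state, scanned symbol), the whole machine can be sketched as a lookup table plus a loop. Here is a minimal sketch, with a made-up instruction table purely for illustration:

```python
# Minimal Turing machine sketch following rules A-C above.
# The instruction table maps (state, scanned symbol) -> (symbol to write,
# head move, next state). The table below is invented: it walks right,
# replacing every '$' with '&', and halts on a blank square.

TABLE = {
    ("q0", "$"): ("&", +1, "q0"),   # rule C: write, move, change state
    ("q0", "&"): ("&", +1, "q0"),
}

def run(tape, state="q0", head=0):
    while (state, tape[head]) in TABLE:          # rule B: (state, symbol) decides all
        symbol, move, state = TABLE[(state, tape[head])]
        tape[head] = symbol                      # C(1): replace the symbol
        head += move                             # C(2): move left or right
    return tape                                  # no table entry -> halt

print(run(["$", "&", "$", " "]))  # -> ['&', '&', '&', ' ']
```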
The crucial element for my purposes here is rule B. That is, when the "head" is scanning, for instance, the square |4|, what it will do with it is determined not only by the symbol 4, but by the particular state the "head" is in (qi). What I would now like to try is, rather than picturing (as machine functionalism does) our whole psychology as a complex Turing machine, to instead think of the "head" as isomorphic to a particular mental state (for example, the state of being happy), and the symbols as isomorphic to perceptual input: e.g. when you see a flower. (I will be thinking of perceptual input widely, so that an idea (for example, the idea of the flower, the idea of one's own self, etc.) can also count as perceptual input.)
If we then picture a long series of symbols, the symbols being various perceptual instructions, then these perceptual instructions - the way they will be interpreted, acted upon, perceived, experienced - are determined by your mental state. This means that any one symbol, e.g. |s|, will vary in what it instructs you to do depending on whether you are in mental state q1 or q2.
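Put in the same tabular form as the Turing machine above, the proposal looks something like this (a sketch with made-up states and interpretations, purely to fix ideas):

```python
# Sketch of the proposal: the same perceptual symbol yields a different
# instruction depending on the mental state the "head" is in, exactly as
# rule B makes behavior depend on (state, symbol). All entries are invented.

INTERPRET = {
    ("q1: happy",   "flower"): "stop and admire it",
    ("q2: hurried", "flower"): "walk past without noticing",
}

def perceive(mental_state, symbol):
    return INTERPRET.get((mental_state, symbol), "no defined response")

print(perceive("q1: happy",   "flower"))  # -> stop and admire it
print(perceive("q2: hurried", "flower"))  # -> walk past without noticing
```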
Usually, I propose, mental states remain largely within the same spectrum, for example q1-q10, and so the symbols, although varying according to the determined possibilities of q1-q10, will be familiar due to much exposure. Now, if we introduce an altered state x (meditation, DMT), or more precisely a set of q's not contained within the ordinary range (q1-q10), to a q1-q10 subject, what will then happen? Will it be the case that the subject still operates within the q1-q10 range in addition to q11-qn (n above 11), or is it the case that q1-q10 is excluded from the possible mental states, so that we only have q11-qn?
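The two possibilities can be put schematically (again purely illustrative, with an arbitrary cutoff chosen for the altered range):

```python
# The two candidate answers to the question above, as sets of states.
ORDINARY = {f"q{i}" for i in range(1, 11)}    # q1-q10
ALTERED  = {f"q{i}" for i in range(11, 15)}   # q11-qn, cutoff arbitrary here

possibility_1 = ORDINARY | ALTERED   # altered states extend the repertoire
possibility_2 = ALTERED              # altered states replace the repertoire

print(sorted(possibility_1))
print(sorted(possibility_2))
```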
In any case, don't make the mistake of thinking that just because I've signified ordinary states using q1-q10, and altered states using q11-qn, I view altered states as something vertically above, or somehow better than, ordinary states.
Anyway, this is just an idea.