
Machine functionalism as an interesting starting point for understanding altered states?

kantor1003

Prolific Member
Local time
Today 9:29 PM
Joined
Aug 13, 2009
Messages
1,574
---
Location
Norway
I will be trying to present an idea that just hit me. I will first have to explain what functionalism is, then machine functionalism, and, lastly, how a Turing machine operates, before I take elements of this Turing machine and (mis)use it for my own twisted ends. The idea is severely underdeveloped, and most likely highly flawed, but I just had to make a quick attempt to articulate it in order to see how it turned out and whether it is something I should bother pursuing any further. Responses from others are one way of determining this.
(I might, when I have more time available, try to present it in a more presentable, systematic and thorough manner)

Functionalism, as most of you perhaps know, treats mental states, or types, not as something to be identified with a particular physical constitution, but with the role, or function, they play in a causal system: i.e., the mental state of being in pain is identified not with c-fiber activation, or some other particular physical realizer (that would be the physicalist psychoneural identity thesis), but with the function pain has, say as a "tissue-damage detector". Many things in biology also operate with functional descriptions. A heart is defined not by whatever physical constitution it may have, but by its function: to circulate blood.
Machine functionalism is one of several varieties of functionalism; it holds that the mind is nothing but a complex Turing machine. It takes some input - say someone hitting you - and produces some output - for example, you hitting back. Unlike behaviorism, however, it acknowledges mental states as something with causal power, to be included in our ontology. For example, someone hitting you (f1) can lead to you feeling indignation towards whomever hit you (m2), to the subsequent desire to hit them back (m3), and ultimately to you hitting them (f4). To explain why you hit them, a description of the whole causal network is needed - both the mental and the physical (in this sense, machine functionalism is a holistic theory). It will not do, as behaviorists do, to explain why you hit them in strictly behavioristic terms, taking as object only the physical elements (f1 and f4) in the causal process.
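A toy sketch of that contrast in Python (my own illustration, not a standard formalization): the functionalist explanation routes the physical input through mental states that do causal work, while the behaviorist explanation keeps only the physical endpoints.

```python
# Toy contrast between the two explanations of the f1 -> f4 episode above.
# Everything here is illustrative; the point is only where the mental
# states m2 and m3 appear in the causal story.

def functionalist_explanation(f1):
    """The mental links (m2, m3) are explicit causal steps."""
    m2 = "indignation toward whoever hit you"   # caused by f1
    m3 = "desire to hit them back"              # caused by m2
    f4 = "you hit them"                         # caused by m3
    return [f1, m2, m3, f4]

def behaviorist_explanation(f1):
    """Only the physical endpoints f1 and f4 figure in the account."""
    f4 = "you hit them"
    return [f1, f4]

print(functionalist_explanation("someone hits you"))
print(behaviorist_explanation("someone hits you"))
```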
Now, to get to my point I'll first have to give a brief overview of a Turing machine. In order to do this, I will be consulting Jaegwon Kim's Philosophy of Mind.
A Turing machine consists of four components:

1. A tape divided into "squares" and unbounded in both directions
(visualize it as a horizontal row of squares: ...|_|_|_|_|_|_|... where each |_| is one square)

2. A scanner-printer ("head") positioned at one of the squares of the tape at any given time

...|_|_|_|_|_|_|...
        ^
      [head]

3. A finite set of internal states (or configurations), q0, . . . , qn

...|_|_|_|_|_|_|...
        ^
  [head, in state qi]

4. A finite alphabet consisting of symbols, b1, . . . , bm. One and only one symbol appears on each square, e.g. |&|$|8|.

The machine operates according to these rules:
A. At each time, the machine is in one of its internal states, qi, and its head is scanning a particular square on the tape.
B. What the machine does at a given time t is completely determined by its internal state at t and the symbol its head is scanning at t.
C. Depending on its internal state and the symbol being scanned, the machine does three things: (1) Its head replaces the symbol with another (possibly the same) symbol of the alphabet. (2) Its head moves one square to the right or to the left (or halts, with the computation completed). (3) The machine enters into one of its internal states (which can be the same state).
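A minimal sketch of rules A-C in Python (my own encoding, not Kim's; a list stands in for a finite stretch of the unbounded tape, and the machine table is a dict from (state, symbol) pairs to instructions):

```python
# One step of a Turing machine, following rules A-C above.
# An instruction is (symbol_to_write, move, next_state),
# where move is 'L', 'R', or 'H' (halt).

def step(table, tape, head, state):
    symbol = tape[head]                               # A: the head scans one square
    write, move, next_state = table[(state, symbol)]  # B: state + symbol fully determine the action
    tape[head] = write                                # C(1): replace the symbol
    if move == 'H':                                   # C(2): or halt, computation completed
        return None
    head += 1 if move == 'R' else -1                  # C(2): move one square left or right
    return tape, head, next_state                     # C(3): enter the next internal state
```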

The crucial element for my purposes here is rule B. That is, when the "head" is scanning, for instance, the square |4|, what it will do with it is determined not only by the symbol 4, but by the particular state the "head" is in (qi). What I would now like to try is, rather than picturing (as machine functionalism does) our whole psychology as a complex Turing machine, to instead think of the "head" as isomorphic to a particular mental state (for example, the state of being happy), and the symbols as isomorphic to perceptual input: i.e., when you see a flower. (I will be thinking of perceptual input widely, in that an idea (for example the idea of the flower, the idea of one's own self, etc.) can also count as perceptual input.)
If we then picture a long string of symbols, the symbols being various perceptual instructions, then the way these perceptual instructions will be interpreted, acted upon, perceived, and experienced is determined by your mental state. This means that any one symbol, e.g. |s|, will vary in what it instructs you to do depending on whether you are in mental state q1 or q2.
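A toy illustration of that claim (the states, the symbol, and the responses are all invented for the example): the same symbol dispatches to different "instructions" depending on the current mental state, exactly as rule B dictates.

```python
# The same perceptual symbol |s| is "read" differently in q1 and q2.
# All entries are hypothetical placeholders.

INTERPRETATION = {
    ("q1", "s"): "linger on it with pleasure",
    ("q2", "s"): "scan past it without noticing",
}

for state in ("q1", "q2"):
    print(f"in {state}, |s| instructs: {INTERPRETATION[(state, 's')]}")
```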

Usually, I propose, mental states remain largely within the same spectrum, for example q1-q10, and so the symbols, although varying according to the determined possibilities of q1-q10, will be familiar due to much exposure. Now, if we introduce altered state x (meditation, DMT), or more precisely a set of q's not contained within the ordinary range (q1-q10), to a q1-q10 subject, what will then happen? Will it be the case that the subject operates within the q1-q10 range plus q11-qx (x > 11), or is it the case that q1-q10 is excluded from the possible mental states, so that we only have q11-qx?

In any case, don't make the mistake of thinking that, just because I've signified ordinary states using q1-q10 and altered states using q11-qx, I view altered states as something vertically above, or somehow better than, ordinary states.

Anyway, this is just an idea.
 

Cognisant

cackling in the trenches
Local time
Today 10:29 AM
Joined
Dec 12, 2009
Messages
11,155
---
Usually, mental states remain largely within the same spectrum, for example q1-q10, and so the symbols, although varying according to the determined possibilities of q1-q10, will be familiar due to much exposure. Now, if we introduce altered state x (meditation, DMT) to a q1-q10 subject, what will then happen? Will it be the case that it operates within the q1-q10 range plus q11-qx (x > 11), or is it the case that q1-q10 is excluded from possible mental states, so that we only have q11-qx?
An apt analogy, but is there any functional benefit?
Take any videogame NPC, alter its behavioural variables, and it'll act differently; likewise, there's no denying that meditation and drugs can alter one's own "behavioural variables", but is there any benefit to this, or is it just proof that the brain is an information processor whose mechanisms can be screwed with?

I can accept that hallucinogenic drugs can assist with creativity, just as Ritalin can help with concentration and alcohol can be used for relaxing inhibitions, but when someone starts talking about enlightenment, mystic knowledge or having psychic abilities like telepresence or whatever, frankly I think they've had too much.

Ritalin helps with focus, but it won't stop you staring at a wall for hours; caffeine will wake you up, but it won't necessarily make you more productive; alcohol's ability to lower inhibitions can be useful, but it can also get you in trouble; and I'm sure that, like everything else, drugs like DMT and LSD can do more harm than good if misused.

How they're correctly used I don't know, I've got my own alcoholism to manage :D
 

Cognisant

cackling in the trenches
Local time
Today 10:29 AM
Joined
Dec 12, 2009
Messages
11,155
---
Now, getting back to Turing machines: I think Alan Turing already figured out AI and took the secret with him. From what I've read of his, he knew, he totally fucking knew. Decades later, when AI projects were looking into abstract emulations of intelligence only to fail miserably, and have only now begun to return to focusing simply on machine learning, he must have been laughing in his grave.

General artificial intelligence is actually quite a simple proposition; it's just vexingly unintuitive (just like the cryptology work Turing did), because really all an AGI does is respond to context in an appropriate manner and learn by feedback. What could be simpler?

Well, there's no shortage of learning algorithms out there, but they only do specific learning, not general learning, and even the few that can do general learning lack the ability to formulate abstractions based upon their observations or to act upon what they've learnt.

In a way, all the world is a cryptology problem: to succeed (by whatever criteria are relevant to you), you need to figure out what to do and how to do it, out of a practically infinite number of things you could do in different ways. And of course brute-force computation could in principle solve the problem, but there's too much information; life is a combination lock with a billion cams.

Of course no lock is perfect :twisteddevil:
Unless the lock is machined precisely, when the pin is pulled outward, one of the teeth will pull more strongly than the others on its corresponding disc. This disc is then rotated until a slight click is heard, indicating that the tooth has settled into the notch. The procedure is repeated for the remaining discs, resulting in an open lock, and a correct combination, in very little time.
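Some illustrative arithmetic on why that feedback matters (the numbers are made up): a lock with D discs and N positions per disc costs up to N^D trials to brute-force, but if each disc leaks a "click", the discs can be settled one at a time in roughly N*D trials.

```python
# Brute force vs. manipulation, for a hypothetical lock with
# 100 positions per disc and 3 discs.

def brute_force_trials(positions, discs):
    # every combination must potentially be tried
    return positions ** discs

def manipulation_trials(positions, discs):
    # each disc is resolved independently thanks to the feedback
    return positions * discs

print(brute_force_trials(100, 3))   # 1000000
print(manipulation_trials(100, 3))  # 300
```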

True AGI would be the perfect cryptology tool.
 

kantor1003

Prolific Member
Local time
Today 9:29 PM
Joined
Aug 13, 2009
Messages
1,574
---
Location
Norway
An apt analogy, but is there any functional benefit?
Take any videogame NPC, alter its behavioural variables, and it'll act differently; likewise, there's no denying that meditation and drugs can alter one's own "behavioural variables", but is there any benefit to this, or is it just proof that the brain is an information processor whose mechanisms can be screwed with?
Thank you for your input :)

I am here mainly concerned not with whether there is any benefit in altered states, but with whether the phenomenon can appropriately be captured using the concept of Turing machines in the way attempted above. So the question is: is it a valid isomorphism? Once that has been answered, if in the affirmative, one could try to use that particular concept of Turing machines, as applied to what can perhaps be called a theory of perception, as a foundation to answer questions such as the one you just posed. If in the negative, I should dispose of the whole idea.

To see how that might go, let's suppose that it is valid (which it most likely isn't in its current form). Now we could (I believe) ask a question such as this: Is it the case that a q1-q20 Turing machine is stronger than a q1-q10 machine? Let's, for the purpose of illustration, say yes, even though I'm by no means sure about this. Then we would have to determine whether an altered state includes q1 through q10 in addition to q11 through q20, or whether it only includes q11 through q20. If it's the former, then, without examining the individual states (q's), we must conclude that a q1-q20 human is stronger. If it's the latter, we can't, without examining the individual states, determine whether it's stronger or weaker. This assumes, in addition, of course, that there is a change of states (q's) occurring in what we call altered states. I think experience, however, informs us that this is a valid assumption.
 

Cognisant

cackling in the trenches
Local time
Today 10:29 AM
Joined
Dec 12, 2009
Messages
11,155
---
Is it the case that a q1-q20 Turing machine is stronger than a q1-q10 machine?
What do you mean by "stronger"?

As an information processor it's not running any faster or processing any more efficiently; it's just a different response to input, like how changing the software on your computer doesn't change the hardware properties, though some software may make better use of that hardware.

More literally, if you're talking about a robot being stronger because it thinks it's stronger: no. You might get that idea hearing about how someone high on cocaine punched through a car windshield, but that's not because they're stronger; it's because their self-protection mechanisms have been disabled. In more advanced robots you can do the same thing: you can run a motor at a higher voltage than it's intended to receive, and for a few seconds it'll perform better, until it overheats and some component or another fails.

Probably the best example of this is a solenoid. Since the heat of the coils depends upon the amount of current passing through them over time, by reducing the amount of time current is passing through it you can safely increase the current, and thus the actuating power of the solenoid, to a degree. This enables small solenoids to do useful things like shatter glass, press a button, or open/close a valve, and big ones to punch a spike through another robot's armour. E.g. a hollow spike attached to a hose can be jammed through armour, and it may not go in far enough to do much damage, but if the punch is followed by high-pressure salt water coming through the hose, it can short-circuit the enemy bot.
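A rough sketch of the duty-cycle arithmetic behind that (idealized: Joule heating only, no cooling, made-up ratings): the heat deposited in the coil goes as I^2 * t, so driving it at k times the rated current means cutting the on-time by about a factor of k^2.

```python
# Overdriving a solenoid: keep I^2 * t (deposited heat) constant.
# RATED_* values are hypothetical, not from any datasheet.

RATED_CURRENT = 1.0   # amps
RATED_ON_TIME = 1.0   # seconds the coil tolerates at rated current

def max_on_time(current):
    """On-time that deposits the same heat as rated operation."""
    return RATED_ON_TIME * (RATED_CURRENT / current) ** 2

for k in (1, 2, 3):
    print(f"{k} x rated current -> on-time {max_on_time(k * RATED_CURRENT):.2f} s")
# 1 x rated current -> on-time 1.00 s
# 2 x rated current -> on-time 0.25 s
# 3 x rated current -> on-time 0.11 s
```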

That salt-water tactic is banned by every professional robot fighting tournament I know.
Street fights only.
 

kantor1003

Prolific Member
Local time
Today 9:29 PM
Joined
Aug 13, 2009
Messages
1,574
---
Location
Norway
What do you mean by "stronger"?
As an information processor it's not running any faster or processing any more efficiently; it's just a different response to input, like how changing the software on your computer doesn't change the hardware properties, though some software may make better use of that hardware.
If that is the case, then we may have an answer to your earlier question regarding altered states: "is there any functional benefit?" No, "it's* just a different response to input", even though some states** "may make better use of" it.

*that would be a particular set of states
**we've yet to determine what kinds

With regards to what I meant by stronger in this case, I think it's fine as you alluded to: the Turing machine's (or, if the isomorphism is accurate, our) ability to process faster or more efficiently.
 

Cognisant

cackling in the trenches
Local time
Today 10:29 AM
Joined
Dec 12, 2009
Messages
11,155
---
Well, y'know, Ritalin helps people focus, and if I feel nervous and need to do something concentration-intensive I'll drink, but that's not to say alcohol is magic juice that makes me better at things. Playing an FPS while slightly buzzed is good for sniping (you have to stay relaxed under pressure), but up close my reaction speed is reduced, so there's a trade-off.

There is a World of Warcraft raider type called the "Drunken DPS". These people (usually mages) started raids sober or slightly buzzed. As the raid progressed, they would get drunker and drunker, and their damage output would rise as well. Apparently they were pretty valuable, provided they didn't fall off their chairs while facing the dungeon boss. Drunken Healers exist as well. Really good healers during battles but have to be reminded which instance they are in from time to time. This has much to do with the incredible amount of stress that can be placed upon a raid healer in certain encounters. Where a sober player would be jittery from adrenaline and potentially lock up, a drunken player will be calmer and can actually respond more quickly.

http://tvtropes.org/pmwiki/pmwiki.php/Main/DrunkenMaster

But this is like changing settings on your computer. You can't make a computer physically faster (well, you can overclock it, see my last post), but you can optimise it for different tasks. Y'know, I'm not going to get that smooth sniping effect without losing the edge of my up-close reaction speed; it just doesn't work that way.
 

Cognisant

cackling in the trenches
Local time
Today 10:29 AM
Joined
Dec 12, 2009
Messages
11,155
---
Remember, evolution has shaped you, a biological machine of astounding sophistication, and evolution is largely a process of refinement, so the idea that there's any unused brain space just waiting to be tapped by the influence of drugs is, if not outright impossible, then highly unlikely. Indeed, I'd quite comfortably guarantee you (monetarily) that no matter what drug you take or what meditation you do, there's always going to be a trade-off.

For the same reason, I think people who do altitude training are idiots, unless they're training to climb to high altitudes. For everyone else, the blood thickening just means the heart has to work harder, so although there may be a short-term benefit given the amount of oxygen the blood can carry, over a period of sustained exertion the thicker blood will put more strain on the lungs and vascular system, nullifying and likely counteracting any earlier benefits.

Have you seen the amount of food triathletes eat? It's phenomenal.
They have to eat so much, and so regularly, because they've optimised their bodies to have as high a metabolism as possible. They have almost no fat reserves, and their intestinal tracts have shrunk; if they don't eat regularly they crash. When a triathlete collapses, his body has literally run out of fuel and is putting him into shutdown so it can cannibalise the muscles and, to a lesser extent, the brain.
 

walfin

Democrazy
Local time
Tomorrow 5:29 AM
Joined
Mar 3, 2008
Messages
2,436
---
Location
/dev/null
But considering that the range of the outputs (the next mental state) is q1-q10, wouldn't it be the case that even if the internal state (an input) is q11, the output would still be from q1-q10?

Which means that no change to an unusual state would last.
 

kantor1003

Prolific Member
Local time
Today 9:29 PM
Joined
Aug 13, 2009
Messages
1,574
---
Location
Norway
But considering that the range of the outputs (the next mental state) is q1-q10, wouldn't it be the case that even if the internal state (an input) is q11, the output would still be from q1-q10?

Which means that no change to an unusual state would last.

At any time, the Turing machine's "head" will only be in one state, say q1. I look at any particular state as neither input nor output, even though it determines output (I am not sure about input). Let me try to make this a little bit clearer by providing an example of a simple Turing machine, borrowing from Jaegwon Kim:


Here is a "tape" consisting of a set of symbols.
image1.png


At any given point, the "head" will be scanning one of these symbols. Picture the "head" being in state q0 at the first "1". Now, let's look at an example of a machine table:


[machine table image; reconstructed from the description below, with unspecified cells left blank:]

        q0      q1
  1     1Rq0    #, halt
  +     1Rq0
  #     #Lq1

The leftmost column lists all the symbols. The top row lists all the machine's internal states. Each entry in the interior matrix is an instruction telling the machine what to do when it encounters a given symbol in a given state. Since the "head" is now located on the symbol "1" and is in internal state q0, we can see from the machine table that it is given the instruction "1Rq0". This means that it will replace the symbol "1" with "1", move one square to the right, and enter internal state q0 (the same state). When it reaches the symbol "+", we see from the instruction "1Rq0" in the machine table that it will replace the "+" with a "1", move one square to the right, and enter internal state q0 (the same state).
When it scans "#", it will have the instruction "#Lq1", which means that it will replace the "#" with a "#", move one square to the left, and enter internal state q1. Now, when it encounters "1", we see, looking at the set of instructions under the column "q1" in the machine table, that it has the instruction to replace "1" with "#" and halt.

What I was thinking is whether I could use the symbols to represent perceptual input, and the machine's internal states to represent mental states, or types. If so, it would mean that any particular state we are in plays an important role with regards to how we process perceptual input. I haven't yet found out to what extent this agrees with machine functionalism. I know that machine functionalism holds that we can think of the mind as a Turing machine, and that the Turing machine's internal states are identified with mental state types. Any psychological organism, according to that view, is a physical realizer of a Turing machine, and that is how it processes input and output. I differ from machine functionalism, I think, when I identify the symbols specifically with perceptual input, because machine functionalism doesn't use the concept of a Turing machine, as far as I know, to specify the organism's input and output, but only to explain how it processes them.
 

Agent Intellect

Absurd Anti-hero.
Local time
Today 4:29 PM
Joined
Jul 28, 2008
Messages
4,113
---
Location
Michigan
I wouldn't say that this is a valid isomorphism so much as a metaphor.

I wonder if it would be more interesting if the mental states represented logic commands (admittedly, I'm not intimately familiar with Turing machines, so perhaps this is how it works anyway). For instance, q3 might have the command "if the input is b0 to bn, then move left" and "if the input is bn+1 to bn+m, then move right", rather than having any particular mental state determine only a single command. This way there is a threshold for different outputs that could be determined by what would be analogous to a particular machine's "personality".

This way, increased states beyond q10 could either increase or decrease "divergent thinking."
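A minimal sketch of that suggestion (all names and numbers are hypothetical): give each internal state a threshold, index the symbols numerically, and let the move direction depend on which side of the threshold the scanned symbol falls. Shifting the threshold is then the knob that would widen or narrow "divergent thinking".

```python
# Threshold-style states: instead of one command per (state, symbol)
# pair, each state carries a cutoff that splits the inputs into
# "move left" and "move right" ranges. Purely illustrative.

from dataclasses import dataclass

@dataclass
class ThresholdState:
    name: str
    threshold: int   # the state's "personality" parameter

def command(state, symbol_index):
    return "move left" if symbol_index <= state.threshold else "move right"

q3 = ThresholdState("q3", threshold=5)
print(command(q3, 2))    # move left   (input b2 falls below the cutoff)
print(command(q3, 9))    # move right  (input b9 falls above the cutoff)
```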
 