I would say the underlying thing I am getting from this is that you don't separate the brain from the mind. As such, you think AI can actually have "thoughts". I don't think an AI has the ability to have thoughts.
We have AIs that can interpret speech, recognize objects and people’s faces in images, and make decisions and solve problems by hypothesizing and experimenting; we even have AIs that can create art and music and tell original jokes.
Of course none of this actually matters. I could introduce you to a genetically engineered simulacrum that’s superficially and behaviourally indistinguishable from a human (until you look at its artificial cells under a microscope) and you would still be utterly convinced that an artificial being cannot have thoughts. The reason is that the idea that you are special sits at the very core of your belief structure, foundational even, and to contradict it would bring everything else crashing down. So no matter what arguments I make or how conclusive my evidence is, you will always take refuge in the possibility that I might be wrong; even if I force you into a position of utter epistemological nihilism, you will go there before you give up your beliefs.
And here’s the kicker: from that position of epistemological nihilism you’ll keep arguing. Despite casting doubt on the idea that anyone can know anything at all, you will continue to hypocritically preach your beliefs (that you KNOW god exists) to others in a vain attempt to justify them to yourself. All because the very idea that you might be nothing more than flesh and bone, that there’s nothing magical or transcendental about you, that one day you’ll die and you’ll just be dead (no afterlife, no great beyond, just dead) is so utterly horrifying to you that you would believe literally anything else.
So excuse me if I don’t engage with you in what will ultimately be an endless semantic discussion of the definition of “thought”, in which I chase your ever-moving goalposts.
Even in the case that someone said they would shoot someone else based on whether or not I recanted (I truly cannot think of a realistic situation where this would happen, because people don't judge someone based on the decisions of someone else, and this seems to be baked into us), I would choose to do nothing.
It’s a thought experiment; the question isn’t whether it’s realistic, it’s about working out how you define morality, and your inaction is no less damning than telling the gunman to shoot the innocent person. The ultimatum was to disavow god or the innocent person dies; by not answering you refused to disavow god, and therefore they died. To me that seems rather callous, but I wonder whether it was your callousness or your assumption of god’s callousness that motivated your decision.
If god is good, I doubt god would approve of you sacrificing an innocent person’s life in its name, and if you disavowed god to save the innocent’s life then a good god would understand that, probably even appreciate your humility. Perhaps you didn’t consider god’s autonomy in this scenario: you assumed that since god is always good, worshipping god is always good, and therefore disavowing god is always bad. How very robotic of you. Or perhaps it was simply obstinacy at having your faith tested, an obstinacy born of pride. Or, and this is the worst case, you acted callously because you believe god is callous and wouldn’t forgive you for disavowing it even to save an innocent’s life. If that is the case, do you willingly worship this callous god (i.e. are you an awful person) or do you worship it out of fear?
We have to remember that an act of Will is a demonstration of the mind. That being said, our behaviors are a manifestation of the Will in more explicit terms, but this doesn't change the fact that someone else cannot really and truly have control over my will, that is, my thought life. I think my own thoughts. That is something no one has control of except me. We can talk about external factors having some influence over what I think, but the fact is, my thoughts are mine and no one else's. So while I may not be able to separate my thoughts from my behaviors, even in the case of restraint on my body, I still have a Will over my thoughts that no one can actually control.
Well, not without mind-altering drugs, electrodes in your brain, or just inflicting pain. Nobody can resist torture for long; everybody breaks. The problem is that a broken person will tell you whatever they think you want to hear, which makes torturing people for information pointless unless you’re after something very specific, like a computer password.
Personally, I am a property dualist.
That’s not compatible with the concept of an immortal soul. Property dualism is like how adding software to a computer doesn’t add any mass or energy but rather changes states in the computer’s memory, or how moving beads on an abacus doesn’t fundamentally change the abacus even though it changes the numerical value “stored” by the abacus. The concept of an immortal soul presumes that there’s something that survives after death, that for example getting shot in the head destroys the brain but not the soul; indeed, the soul cannot be affected by physics even though, as an observer and possibly puppeteer, it still somehow interacts with the physical world. It’s all very paradoxical.
Possibly reality as we know it is actually a simulation, and therefore it would be trivial for the creator of this simulation to add some script that pauses it whenever someone dies, makes a copy of their mind as it was in the moments prior to death, and then resumes the simulation without anything inside it noticing that anything occurred. But you’ll still be dead. There’s an exact copy of you that believes it is you, and depending upon the whims of the one running the simulation the copy might not even know that you died, but it’s still just a copy, not the original.
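If it helps, here’s a toy illustration of that copy-versus-original point in Python. It’s my own made-up example, not a claim about how such a simulation would actually work; it just shows that duplicating an object’s state, however perfectly, doesn’t transfer its identity:

```python
import copy

class Mind:
    """A stand-in for a simulated person's complete mental state."""
    def __init__(self, memories):
        self.memories = memories

original = Mind(memories=["first day of school", "final thought"])
backup = copy.deepcopy(original)  # the "script" snapshots the mind at death

# From the inside, the backup is indistinguishable from the original...
print(backup.memories == original.memories)  # True: identical state
# ...but it is a different object; copying state doesn't carry over identity.
print(backup is original)                    # False: still just a copy
```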
I have a couple questions for you.
How do you categorize freedom? Is it just based on behaviors, or something else?
Do you think AI can be truly conscious? Can it be self-aware?
How do you quantify the mind? Or rather, how do you account for thoughts?
I’ll start with your last question and work backwards.
Suppose you show a TV to someone who has never encountered modern technology, and they ask you, “Who are these little men, and how do you get them to stay in this box and perform for you?” Clearly they lack the concepts of recording, storing, transmitting and displaying audio/visual content. Trying to understand how the mind works through introspection is a lot like trying to understand the underlying concepts of modern technology by staring at a TV screen: what you’re seeing isn’t what’s actually happening, it’s merely the output of a very long, complicated process.
On an information-theory level, the brain or an AI mind works by performing a kind of statistical analysis. If we know absolutely nothing except that two inputs coincide, then we know there’s a possibility they’re related somehow. If we watch all our inputs over a long period and record the frequency of these coincidences, we can map out relationships between the inputs based on how frequently each coincidence occurs relative to the average occurrence of all coincidences. Now, these inputs could be points on a visual array (of unknown shape and size), and by working out the statistical relationships between these points we can deduce their arrangement on the array. With this rudimentary vision we can use the coincidences of our inputs as inputs themselves on another layer of the statistical model, then record the frequency of their coincidences to get another layer of relationships. Repeat this several times to create several more layers of inputs and relationships, and our AI can begin to make complex abstract associations, like a point of light moving across its visual array.
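To make that concrete, here’s a minimal sketch of the coincidence-counting idea in Python. Everything in it (the frame format, the scoring, the toy one-dimensional “retina”, the two-layer step) is an invented illustration, not a description of any real AI architecture:

```python
import itertools
import random

def coincidence_scores(frames):
    """Score each pair of inputs by how often they fire together,
    relative to the average co-firing rate across all pairs."""
    counts = {}
    for frame in frames:
        for pair in itertools.combinations(sorted(frame), 2):
            counts[pair] = counts.get(pair, 0) + 1
    if not counts:
        return {}
    mean = sum(counts.values()) / len(counts)
    # A score above 1 means the pair coincides more often than an
    # average pair, hinting that the two inputs are related.
    return {pair: n / mean for pair, n in counts.items()}

def next_layer(frames, scores, threshold=1.0):
    """Treat each strongly related pair as a unit on the next layer:
    a layer-2 unit fires whenever both of its inputs fire together."""
    units = [pair for pair, s in scores.items() if s > threshold]
    activity = [{i for i, (a, b) in enumerate(units) if a in f and b in f}
                for f in frames]
    return activity, units

# Simulate a dot drifting over a 1-D "retina" of 20 inputs whose layout
# the model is never told: the dot lights up three adjacent inputs at a
# time, and occasionally a random input flickers on as noise.
random.seed(0)
frames = []
for _ in range(3000):
    pos = random.randrange(18)
    frame = {pos, pos + 1, pos + 2}
    if random.random() < 0.2:
        frame.add(random.randrange(20))
    frames.append(frame)

scores = coincidence_scores(frames)
# The best-scoring pairs should be true neighbours (i, i+1):
# the geometry of the array, recovered from statistics alone.
best = sorted(scores, key=scores.get, reverse=True)[:5]
print("most related pairs:", best)

layer2, units = next_layer(frames, scores)
print("layer-2 units:", len(units), "active in first frame:", layer2[0])
```

Note that the model is never told which inputs are adjacent; the neighbour relationships simply fall out of the co-occurrence statistics, which is the whole point.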
For the sake of brevity and everyone’s sanity (my own included) I’m not going to give an exhaustive explanation of how I think all aspects of cognition work; suffice it to say there are a lot of them and they’re all really complicated. The point of that paragraph and the sketch above was simply to illustrate that a mind can’t see its own proverbial gears turning: the AI I was describing wouldn’t be aware of the enormous amount of statistical analysis going on to enable it to perceive a dot moving across its visual field, it just perceives a moving dot.
I am totally convinced that an AI can be conscious, self-aware, sentient, capable of subjective emotional experiences (qualia) and whatever other words you’d like to come up with for why humans are supposedly special. Again, as utterly ineffable as your qualia may seem to you, that’s just because you’re on the output end; you can’t see your own proverbial gears turning. If you could (or rather, if you understood the underlying processes) you would understand that feelings are just how your body affects your decision-making process, and your awareness of this is a feedback loop that’s supposed to perform error checking but that most people use for self-delusion.
Freedom in general is my ability to do things without resistance; I am, for example, not free to walk through a brick wall, although I am absolutely free to try. As I explained before, free will is a matter of accountability, and like freedom in general it’s a matter of degree: if I steal food because I’m starving to death, my accountability is low because despite having a choice I obviously didn’t have much of a choice.