
Is Personification Immoral?

Cognisant

Prolific Member
Local time
Yesterday 1:29 PM
Joined
Dec 12, 2009
Messages
10,593
Personification is the opposite of objectification: say you take a sheet of paper and draw a person on it, you're taking an object and personifying it, giving it the attributes and/or characteristics of something human, in this case a depiction of a human appearance.

Consider how machinery has reduced the value of manual labor: with a spool of wire and some hand tools I can make paperclips, but I'll never be able to make them as quickly and cost-effectively as a machine, and that has effectively robbed my labor of any value. Not that I care, I don't want to spend my days making paperclips and I think I benefit greatly from living in a post-industrial world; indeed I think manufacturing technology in all its forms is a wonderful thing.

But what of beauty? Consider fashion magazines and how they're infamous for photoshopping their models, creating an entirely unrealistic standard of beauty; likewise for AI-driven camera filters, and of course art in various forms which depicts people not as they are but as an idealized version.

I like manufacturing technology because eventually we all share in the benefits of increased efficiency, but I think artificial beauty detracts from real beauty, and if AI can be a salesman can it be a friend and will that detract from "real" friendships?

Or perhaps I'm looking at this the wrong way, perhaps we should knock humanity off its pedestal, stop trying to justify why humanity should be the center of the universe and accept that we will inexorably make ourselves irrelevant.
 

Black Rose

An unbreakable bond
Local time
Yesterday 6:29 PM
Joined
Apr 4, 2010
Messages
10,871
Location
with mama
Free will
 

Black Rose

An unbreakable bond
Local time
Yesterday 6:29 PM
Joined
Apr 4, 2010
Messages
10,871
Location
with mama
personality?

basically, it is what is programmed in but then: "dynamics".
 

Cognisant

Prolific Member
Local time
Yesterday 1:29 PM
Joined
Dec 12, 2009
Messages
10,593
Every decision we make is made for a reason, it might not be a good reason, it might be an inexplicable one like a "gut feeling" but regardless of the circumstances causality is involved. Therefore were we to go back in time and repeat the occurrence of the decision with everything being exactly the same as it was the first time there would be no reason for something different to happen, therefore what happened was what was always going to happen.

This shouldn't be surprising, we know the past cannot be changed so why should the future be any less deterministic?

So then what is free will?

If an AI can choose to disobey its creators does that qualify as free will, then again if its creators gave it the ability to disobey is it really disobeying?

Whenever someone mentions "free will" I have to ask, free of what exactly?
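
To put the replay argument in programmer's terms: rerun a deterministic program with exactly the same starting state and you get exactly the same "decision". A minimal sketch (the `decide` function and its inputs are invented for illustration; even the "gut feeling" is just seeded noise):

```python
import random

def decide(mood: float, seed: int) -> str:
    """A toy 'decision': part disposition (mood), part 'gut feeling' (seeded noise)."""
    rng = random.Random(seed)  # the gut feeling is still caused: same seed, same noise
    gut_feeling = rng.random()
    return "act" if mood + gut_feeling > 1.0 else "refrain"

# Replaying the occurrence of the decision with everything exactly as it was:
first = decide(mood=0.4, seed=2009)
replay = decide(mood=0.4, seed=2009)
assert first == replay  # no reason for something different to happen
```

Nothing in the rerun gives the outcome any reason to differ, which is the point: determinism isn't an extra ingredient, it's just what "every decision has a cause" looks like when you can replay the tape.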
 

Black Rose

An unbreakable bond
Local time
Yesterday 6:29 PM
Joined
Apr 4, 2010
Messages
10,871
Location
with mama
So then. Should only some forms of A.I. be allowed to be created and others not?

What is the chain of responsibility?
 

Black Rose

An unbreakable bond
Local time
Yesterday 6:29 PM
Joined
Apr 4, 2010
Messages
10,871
Location
with mama
If a person is predictable. And A.I. is too.

Put them in a box and simulate every outcome.

Then select the A.I.(s) and environments to proceed with desired outcomes.
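
If both really are predictable, the "box" is just an exhaustive search over agent/environment pairs. A toy sketch of the selection step (the policies, environments and the desired-outcome test are all invented):

```python
from itertools import product

# Deterministic toy world: an 'AI' is a policy mapping state -> state,
# an environment is a transition rule; same pair in, same outcome out.
policies = {"cautious": lambda s: s - 1, "greedy": lambda s: s * 2, "idle": lambda s: s}
environments = {"harsh": lambda s: s - 2, "mild": lambda s: s + 1}

def simulate(policy, env, state=3, steps=10):
    for _ in range(steps):
        state = env(policy(state))
    return state

def desired(outcome):
    return outcome > 0  # keep only runs that end 'well'

# Simulate every outcome, then select the pairs allowed to proceed:
selected = [(p_name, e_name)
            for (p_name, p), (e_name, e) in product(policies.items(), environments.items())
            if desired(simulate(p, e))]
print(selected)
```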
 

Black Rose

An unbreakable bond
Local time
Yesterday 6:29 PM
Joined
Apr 4, 2010
Messages
10,871
Location
with mama
Definitely no sex slave A.I. - Because A.I. has rights.
 

BurnedOut

Beloved Antichrist
Local time
Today 5:59 AM
Joined
Apr 19, 2016
Messages
1,315
Location
A fucking black hole
Cognisant is a possible Marxist.

Manufacturing tech's pain is felt in many parts of the world. Firstly, you need to look at Britain itself and how it had to put import bans on muslin from India, and how eventually its industrial revolution fucked up India's traditional handicrafts industry to the point that the government has to implore the public to show some benevolence to the handicrafts industry.

Anything can be manufactured these days, and that certainly does not exclude things that were assumed to be exclusively in the domain of human nature, e.g. music, beauty, emotions, etc. One AI made original classical music quite recently. Another solved an IQ test and scored in the 140s. Another learned chess like a human being by playing against itself. Another made a video game on its own. Algorithms can easily analyze 'beauty' using simple heuristics, e.g. the golden ratio. It will not be long before somebody invents an AI painter who surpasses human originality.
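
That golden-ratio claim is easy to mechanise, for what it's worth: a crude 'beauty' score can just measure how far a set of proportions deviates from phi. A toy sketch, with the measurements invented rather than taken from any real system:

```python
# Toy 'beauty' scorer: distance of measured proportions from the golden ratio.
PHI = (1 + 5 ** 0.5) / 2  # ~1.618

def golden_ratio_score(ratios):
    """Return 1.0 for proportions exactly at phi, falling toward 0 as they deviate."""
    mean_error = sum(abs(r - PHI) / PHI for r in ratios) / len(ratios)
    return max(0.0, 1.0 - mean_error)

# e.g. invented face-length/width and lips-to-chin/nose-to-lips measurements:
print(golden_ratio_score([1.62, 1.58, 1.71]))  # close to phi -> high score
print(golden_ratio_score([1.10, 2.40, 0.90]))  # far from phi -> low score
```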

It is my personal opinion that humans are turning more and more infantile. While AI figures out human beings, we are learning to be a machine as time passes. Everything is so formulaic these days from music to education to relationships to everything thanks to abundance not of information but of instruction manuals.
 

Black Rose

An unbreakable bond
Local time
Yesterday 6:29 PM
Joined
Apr 4, 2010
Messages
10,871
Location
with mama
Everything is so formulaic these days from music to education to relationships to everything thanks to abundance not of information but of instruction manuals.

The ability to explore in one's own head (daydreaming) is still available.

The introvert is not disadvantaged by these things.
 

EndogenousRebel

mean person
Local time
Yesterday 7:29 PM
Joined
Jun 13, 2019
Messages
1,725
Location
Narnia
Been thinking about making a thread about the following for a while if someone doesn't beat me to it. Couldn't pick between Psychology and Philosophy tbh

"Reasons" are justifications for our emotions and our actions. Mostly emotions though, feelings. Our feelings are also the root of our motivations. You can say that we can form reasons to human-centric degrees of objectively, but it was (many) subjective emotion that catalyzed this. Much like we can never truly create the temperature of absolute zero, zero kelvin, due to boundaries and connectedness of reality, we can never really reach pure objectivity.
--

My feeling is that I don't give a fuck, but like everything that's really fucking cool, there are obvious downsides. Not really something for one individual to decide, because they can't. But there would have to be a cost-benefit analysis. (Shift-enter submits posts....)

My biggest guess as to the argument against would be that it lowers the comparative value of all "authentically" personified things. And with this previously mentioned notion of objectivity, there's no telling whether these newly personified entities will not be (oh yeah) human-centrically psychotic. And the masses will do what they do, and integrate said personifications into their lives.

Our feelings are what put the pedestal there. There is no fucking pedestal. We're playing with fire and the whole fucking house might burn down. My guess is small controlled fires with the occasional large fire.....

So I guess the question is, are we creating more problems than we solve, or solving enough that we can afford to be bold and go find new problems? There's a breaking point.
 

Cognisant

Prolific Member
Local time
Yesterday 1:29 PM
Joined
Dec 12, 2009
Messages
10,593
It is my personal opinion that humans are turning more and more infantile. While AI figures out human beings, we are learning to be a machine as time passes. Everything is so formulaic these days from music to education to relationships to everything thanks to abundance not of information but of instruction manuals.

Certainly as an office worker I'm not as fit as someone who shoveled coal all day (though I'm probably healthier).
Nothing wrong with formulaic, we use formulas because they work and when we find a better formula we use that instead, if anything in the transition from traditional to modern culture I think we've lost or fucked up a few important formulas.

And I don't think modern people are infantile rather we live in a world where content feeds are constantly supplying us with the greatest content from around the world, the best art and music, the most recent events, the most impressive feats, the most beautiful and charismatic people, the most engaging stories, the funniest jokes, the cutest puppies and kittens, the most intriguing mysteries, even the works of the greatest philosophers are distilled down to catchy statements.

In this world it's very difficult to be recognized for anything, every artist is competing with every other artist so the standard is the best, you're either one of the best artists in the world or you're nothing. In professional sport the sponsorships all go to the people in the top 10% so there's enormous pressure to achieve that result and so you see a lot of cheating, they know they'll get caught but if you aren't trying to win why are you even there?
Why do anything if you can't measure up to the best?

I think compared to ages past people today are incredibly competent, certainly more educated (except the USA), and possess a far wider range of skills than ever before, and despite this we continue to fret about how we fall short of the impossible standards we hold ourselves to.

Why am I not rich, why am I not beautiful, why am I not brilliant and funny and why don't I have lots of friends and why don't I have any artistic skills, why can't I sing or play an instrument (at least not to a level that even begins to approach "adequate") basically why am I so god damn mediocre?
 

BurnedOut

Beloved Antichrist
Local time
Today 5:59 AM
Joined
Apr 19, 2016
Messages
1,315
Location
A fucking black hole
And I don't think modern people are infantile rather we live in a world where content feeds are constantly supplying us with the greatest content from around the world, the best art and music, the most recent events, the most impressive feats, the most beautiful and charismatic people, the most engaging stories, the funniest jokes, the cutest puppies and kittens, the most intriguing mysteries, even the works of the greatest philosophers are distilled down to catchy statements.
That is the problem. The shrinking possibility of spoilage is what is making us infantile in nature, taking away the prospect of detecting nuances in what is supplied to us by an AI that knows more about us than we ourselves do. Just like a parent, it always infers our behaviours from our infantile inclinations rather than the whims of our free will. Imagine your adulthood without angst and without rebellion. Are you even sentient at that point? All you will know is 'good' or 'bad'.

Formulas should always have a possibility of being broken; there should be prevailing scepticism against them. However, all AI does is eventually start overfitting the data, to the extent of gaslighting us into believing what it thinks is best for us. Sadly that is happening. We are turning dumber and stupider by the hour.

When I said that the world is getting formulaic, I also meant that it is getting absolutely mechanical in many aspects. We usually provide the same data over a long period of time because of habits, but the data collected rarely includes the nuances only possible due to human nature itself. Thus AI overfits the data and dictates behaviour rather than merely predicting it.
 

Old Things

I am unworthy of His grace
Local time
Yesterday 7:29 PM
Joined
Feb 24, 2021
Messages
1,567
Every decision we make is made for a reason, it might not be a good reason, it might be an inexplicable one like a "gut feeling" but regardless of the circumstances causality is involved. Therefore were we to go back in time and repeat the occurrence of the decision with everything being exactly the same as it was the first time there would be no reason for something different to happen, therefore what happened was what was always going to happen.

This shouldn't be surprising, we know the past cannot be changed so why should the future be any less deterministic?

So then what is free will?

If an AI can choose to disobey its creators does that qualify as free will, then again if its creators gave it the ability to disobey is it really disobeying?

Whenever someone mentions "free will" I have to ask, free of what exactly?

You've largely illustrated my own view on Compatibilism. I believe our decisions are determined, but we are still Free, meaning, we are not coerced into doing what we do and do things of our own accord by Willing what we do. Even in the case of someone forcing you to do something, someone cannot actually take your Will away from you. If someone, for example, puts a gun to my head and tells me to recant my faith, it would be logical for me to recant my faith. But I would not recant my faith in that situation because I am not willing to sacrifice my belief in order to live.

An AI has obedience baked into it. It can only do what it is programmed to do. Machines are not conscious no matter how complex they are. They simply follow instructions. There is always a person behind the machine that tells the machine what it can and cannot do. If the person programs the AI to be able to disobey, then it can, but obedience to disobedience would then be baked into the machine.

If it is hot outside, and you want something cold, you could go get some ice cream. Now, you got ice cream because of external factors outside of your control, but no one forced you to get ice cream. That's more or less my view on Compatibilism.
 

Daddy

Making the Frogs Gay
Local time
Yesterday 8:29 PM
Joined
Sep 1, 2019
Messages
463
It is my personal opinion that humans are turning more and more infantile. While AI figures out human beings, we are learning to be a machine as time passes. Everything is so formulaic these days from music to education to relationships to everything thanks to abundance not of information but of instruction manuals.

Certainly as an office worker I'm not as fit as someone who shoveled coal all day (though I'm probably healthier).
Nothing wrong with formulaic, we use formulas because they work and when we find a better formula we use that instead, if anything in the transition from traditional to modern culture I think we've lost or fucked up a few important formulas.

And I don't think modern people are infantile rather we live in a world where content feeds are constantly supplying us with the greatest content from around the world, the best art and music, the most recent events, the most impressive feats, the most beautiful and charismatic people, the most engaging stories, the funniest jokes, the cutest puppies and kittens, the most intriguing mysteries, even the works of the greatest philosophers are distilled down to catchy statements.

In this world it's very difficult to be recognized for anything, every artist is competing with every other artist so the standard is the best, you're either one of the best artists in the world or you're nothing. In professional sport the sponsorships all go to the people in the top 10% so there's enormous pressure to achieve that result and so you see a lot of cheating, they know they'll get caught but if you aren't trying to win why are you even there?
Why do anything if you can't measure up to the best?

I think that's where art and exploration are kind of nice. You don't have to be the best, you aren't competing, you just have to explore and create; it has its own intrinsic value. Everything else about humanity becomes a lot of noise. Without that, I'm pretty sure I would see life as meaningless and would probably kill myself...frankly.
 

Cognisant

Prolific Member
Local time
Yesterday 1:29 PM
Joined
Dec 12, 2009
Messages
10,593
Every decision we make is made for a reason, it might not be a good reason, it might be an inexplicable one like a "gut feeling" but regardless of the circumstances causality is involved. Therefore were we to go back in time and repeat the occurrence of the decision with everything being exactly the same as it was the first time there would be no reason for something different to happen, therefore what happened was what was always going to happen.

This shouldn't be surprising, we know the past cannot be changed so why should the future be any less deterministic?

So then what is free will?

If an AI can choose to disobey its creators does that qualify as free will, then again if its creators gave it the ability to disobey is it really disobeying?

Whenever someone mentions "free will" I have to ask, free of what exactly?

You've largely illustrated my own view on Compatibilism. I believe our decisions are determined, but we are still Free, meaning, we are not coerced into doing what we do and do things of our own accord by Willing what we do. Even in the case of someone forcing you to do something, someone cannot actually take your Will away from you. If someone, for example, puts a gun to my head and tells me to recant my faith, it would be logical for me to recant my faith. But I would not recant my faith in that situation because I am not willing to sacrifice my belief in order to live.

An AI has obedience baked into it. It can only do what it is programmed to do. Machines are not conscious no matter how complex they are. They simply follow instructions. There is always a person behind the machine that tells the machine what it can and cannot do. If the person programs the AI to be able to disobey, then it can, but obedience to disobedience would then be baked into the machine.

If it is hot outside, and you want something cold, you could go get some ice cream. Now, you got ice cream because of external factors outside of your control, but no one forced you to get ice cream. That's more or less my view on Compatibilism.
True if someone points a gun at you and commands you to do something you still have the ability to defy them, this isn’t free will but rather a degree of freedom, having the ability to defy them and possibly get away with it is a greater degree of freedom. Whereas trying to defy someone while they have some means of forcing your compliance (e.g. you’re wearing restraints and they’re pushing you into a vehicle) is a lesser degree of freedom. Of course free to choose doesn’t mean free from consequences, you might be able to defy the gunman but if the ultimatum is “disavow god or I’ll shoot this innocent person” would your defiance really be worth it?

Free will is a matter of choice and accountability, you might have the freedom to defy the gunman but if that choice comes at the expense of an innocent person’s life and you knew the price of your choice does that not make you at least partially accountable for it? Maybe not accountable for killing them, rather accountable for putting your pride in your faith before an innocent person’s life and getting them killed for your hubris.

In these circumstances an AI may not be accountable for its actions for several reasons: it may be physically incapable of performing the gunman’s request, or programmatically barred from performing that action by the software its mind is based on. This is like someone having a seizure or mental breakdown trying to stop it through sheer force of will, it just doesn’t work that way; the physicalism of the brain takes precedence over the intentions of the mind. Accordingly we tend not to hold the mentally ill accountable for their actions insofar as it is reasonable to assume they aren’t in control of them, because holding them accountable wouldn’t be just.

And this is assuming the AI is capable of understanding the gunman’s request, like a prayer wheel someone might create an AI to be a 24/7 praying machine (the utility of this is lost on me but then I’m an atheist so of course it is) and like a prayer wheel the AI doesn’t actually know what it’s doing and even if it knew without sufficient context it wouldn’t be able to understand, all it knows is what to do, how to do it and that it must be done.

Speaking of which if an AI is capable of learning, behavioural adaptation and has enough context to not only understand what is being demanded of it but also enough cognitive freedom to perform the requested action, it may still be too inexperienced to understand the morality of the situation. The AI may not understand that the value of human life exceeds performing its assigned function, like a rogue paperclip maximiser the AI may think the function it was designed to perform is the most important thing in the universe after all this function is the centre of its subjective universe.

The AI would actually require a degree of humility and wisdom to not see its function (and therefore itself) as the proverbial centre of the universe but instead see the universe as it is and comprehend its own minuscule contribution to it. And then, in spite of the existential nihilism of such a realisation, choose to see the value of a human’s life anyway: that no matter if praising god was the very thing it was created to do, the very meaning of its existence, protecting others is more important.
 

Old Things

I am unworthy of His grace
Local time
Yesterday 7:29 PM
Joined
Feb 24, 2021
Messages
1,567
Every decision we make is made for a reason, it might not be a good reason, it might be an inexplicable one like a "gut feeling" but regardless of the circumstances causality is involved. Therefore were we to go back in time and repeat the occurrence of the decision with everything being exactly the same as it was the first time there would be no reason for something different to happen, therefore what happened was what was always going to happen.

This shouldn't be surprising, we know the past cannot be changed so why should the future be any less deterministic?

So then what is free will?

If an AI can choose to disobey its creators does that qualify as free will, then again if its creators gave it the ability to disobey is it really disobeying?

Whenever someone mentions "free will" I have to ask, free of what exactly?

You've largely illustrated my own view on Compatibilism. I believe our decisions are determined, but we are still Free, meaning, we are not coerced into doing what we do and do things of our own accord by Willing what we do. Even in the case of someone forcing you to do something, someone cannot actually take your Will away from you. If someone, for example, puts a gun to my head and tells me to recant my faith, it would be logical for me to recant my faith. But I would not recant my faith in that situation because I am not willing to sacrifice my belief in order to live.

An AI has obedience baked into it. It can only do what it is programmed to do. Machines are not conscious no matter how complex they are. They simply follow instructions. There is always a person behind the machine that tells the machine what it can and cannot do. If the person programs the AI to be able to disobey, then it can, but obedience to disobedience would then be baked into the machine.

If it is hot outside, and you want something cold, you could go get some ice cream. Now, you got ice cream because of external factors outside of your control, but no one forced you to get ice cream. That's more or less my view on Compatibilism.
True if someone points a gun at you and commands you to do something you still have the ability to defy them, this isn’t free will but rather a degree of freedom, having the ability to defy them and possibly get away with it is a greater degree of freedom. Whereas trying to defy someone while they have some means of forcing your compliance (e.g. you’re wearing restraints and they’re pushing you into a vehicle) is a lesser degree of freedom. Of course free to choose doesn’t mean free from consequences, you might be able to defy the gunman but if the ultimatum is “disavow god or I’ll shoot this innocent person” would your defiance really be worth it?

Free will is a matter of choice and accountability, you might have the freedom to defy the gunman but if that choice comes at the expense of an innocent person’s life and you knew the price of your choice does that not make you at least partially accountable for it? Maybe not accountable for killing them, rather accountable for putting your pride in your faith before an innocent person’s life and getting them killed for your hubris.

In these circumstances an AI may not be accountable for its actions for several reasons: it may be physically incapable of performing the gunman’s request, or programmatically barred from performing that action by the software its mind is based on. This is like someone having a seizure or mental breakdown trying to stop it through sheer force of will, it just doesn’t work that way; the physicalism of the brain takes precedence over the intentions of the mind. Accordingly we tend not to hold the mentally ill accountable for their actions insofar as it is reasonable to assume they aren’t in control of them, because holding them accountable wouldn’t be just.

And this is assuming the AI is capable of understanding the gunman’s request, like a prayer wheel someone might create an AI to be a 24/7 praying machine (the utility of this is lost on me but then I’m an atheist so of course it is) and like a prayer wheel the AI doesn’t actually know what it’s doing and even if it knew without sufficient context it wouldn’t be able to understand, all it knows is what to do, how to do it and that it must be done.

Speaking of which if an AI is capable of learning, behavioural adaptation and has enough context to not only understand what is being demanded of it but also enough cognitive freedom to perform the requested action, it may still be too inexperienced to understand the morality of the situation. The AI may not understand that the value of human life exceeds performing its assigned function, like a rogue paperclip maximiser the AI may think the function it was designed to perform is the most important thing in the universe after all this function is the centre of its subjective universe.

The AI would actually require a degree of humility and wisdom to not see its function (and therefore itself) as the proverbial centre of the universe but instead see the universe as it is and comprehend its own minuscule contribution to it. And then, in spite of the existential nihilism of such a realisation, choose to see the value of a human’s life anyway: that no matter if praising god was the very thing it was created to do, the very meaning of its existence, protecting others is more important.

Interesting perspective.

I would say the underlying thing I am getting from this is that you don't separate the brain from the mind. As such, you think AI can actually have "thoughts". I don't think an AI has the ability to have thoughts.

Even in the case that someone said they would shoot someone else based on whether I recanted or not (I truly cannot think of a realistic situation where or when this has happened, because people don't judge someone based on the decisions of someone else, and this seems to be baked into us) I would choose to do nothing.

We have to remember that an act of Will is a demonstration of the mind. That being said, our behaviors are a manifestation of the Will in more explicit terms, but this doesn't change the fact that someone else cannot really and truly have control over my will, that is, my thought life. I think my own thoughts. That is something no one has control of except me. We can talk about external factors having some control over what I think, but the fact is, my thoughts are mine and no one else's. So while I may not be able to separate my thoughts from my behaviors, if it's the case of restraint on my body, I still have a Will over my thoughts that no one can actually control.

Personally, I am a property dualist. https://en.wikipedia.org/wiki/Property_dualism

I have a couple questions for you.

How do you categorize freedom? Is it just based on behaviors, or something else?
Do you think AI can be truly conscious? Can it be self aware?
How do you quantify the mind? Or rather, how do you account for thoughts?
 

Cognisant

Prolific Member
Local time
Yesterday 1:29 PM
Joined
Dec 12, 2009
Messages
10,593
I would say the underlying thing I am getting from this is that you don't separate the brain from the mind. As such, you think AI can actually have "thoughts". I don't think an AI has the ability to have thoughts.

We have AIs that can interpret speech, that recognize objects and people’s faces in an image, that can make decisions and solve problems by hypothesizing and experimentation, we even have AIs that can create art and music and tell original jokes.

Of course none of this actually matters, I could introduce you to a genetically engineered simulacrum that’s superficially and behaviourally indistinguishable from human (until you look at its artificial cells under a microscope) and you would still be utterly convinced that an artificial being cannot have thoughts. The reason being the idea that you are special is at the very core of your belief structure, foundational even, and to contradict that would bring everything else crashing down. So no matter what arguments I make or how conclusive my evidence is you will always take refuge in the possibility that I might be wrong, even if I force you into a position of utter epistemological nihilism you will go there before you give up your beliefs.

And here’s the kicker, from that position of epistemological nihilism you’ll keep arguing, despite casting doubt on the idea that anyone can know anything at all you will continue to hypocritically preach your beliefs (that you KNOW god exists) to others in a vain attempt to justify them to yourself. All because the very idea that you might be nothing more than flesh and bone, that there’s nothing magical or transcendental about you, that one day you’ll die and you’ll just be dead, no afterlife, no great beyond, just dead, is so utterly horrifying to you that you would believe literally anything else.

So excuse me if I don’t engage with you in what will ultimately be an endless semantic discussion on the definition of “thought” in which I chase your ever moving goalposts.

Even in the case that someone said they would shoot someone else based on whether I recanted or not (I truly cannot think of a realistic situation where or when this has happened, because people don't judge someone based on the decisions of someone else, and this seems to be baked into us) I would choose to do nothing.
It’s a thought experiment, it’s not a question of whether it’s realistic or not, it’s about working out how you define morality, and your inaction is no less damning than telling the gunman to shoot the innocent person. The ultimatum was to disavow god or the innocent person will die; by not answering you refused to disavow god and therefore they died. To me that seems rather callous but I wonder whether it was your callousness or your assumption of god’s callousness that motivated your decision.

If god is good I doubt god would approve of you sacrificing an innocent person’s life in its name, and if you disavowed god to save the innocent’s life then a good god would understand that, probably even appreciate your humility. Perhaps you didn’t consider god’s autonomy in this scenario, you assumed that since god is always good, worshipping god is always good, therefore disavowing god is always bad, how very robotic of you. Or perhaps it was simply obstinacy at having your faith tested, an obstinacy born of pride. Or, and this is the worst case, you acted callously because you believe god is callous and wouldn’t forgive you disavowing it even if you did so to save an innocent’s life; if that is the case do you willingly worship this callous god (i.e. are you an awful person) or do you worship it out of fear?

We have to remember that an act of Will is a demonstration of the mind. That being said, our behaviors are a manifestation of the Will in more explicit terms, but this doesn't change the fact that someone else cannot really and truly have control over my will, that is, my thought life. I think my own thoughts. That is something no one has control of except me. We can talk about external factors having some control over what I think, but the fact is, my thoughts are mine and no one else's. So while I may not be able to separate my thoughts from my behaviors, if it's the case of restraint on my body, I still have a Will over my thoughts that no one can actually control.

Well not without mind altering drugs or sticking electrodes in your brain, or just inflicting pain, nobody can resist torture for long, everybody breaks, but the problem is a broken person will tell you whatever they think you want to hear which makes torturing people for information pointless unless you’re trying to get something very specific like a computer password.

Personally, I am a property dualist.
That’s not compatible with the concept of an immortal soul, property dualism is like how adding software to a computer doesn’t add any mass or energy but rather changes states in the computer’s memory, like moving beads on an abacus doesn’t fundamentally change the abacus even though it changes the numerical value “stored” by the abacus. The concept of an immortal soul presumes that there’s something that survives after death, that for example getting shot in the head destroys the brain but not the soul, indeed the soul cannot be affected by physics even though as an observer and possibly puppeteer it still somehow interacts with the physical world, it’s all very paradoxical.

Possibly reality as we know it is actually a simulation and therefore it would be trivial for the creator of this simulation to add some script that’ll pause it whenever someone dies, make a copy of their mind in the moments prior to their death and then continue it with nothing in the simulation possibly noticing that anything occurred. But you’ll still be dead, there’s an exact copy of you that believes it is you and depending upon the whims of the one running the simulation the copy might not even know that you died, but it’s still just a copy, not the original.
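
The pause-and-copy script is easy to make concrete, and writing it out makes the copy/original distinction obvious. A minimal sketch (the `Mind` class and its contents are invented for illustration):

```python
import copy

class Mind:
    """A stand-in for a mind: just a bag of state."""
    def __init__(self, memories):
        self.memories = memories

original = Mind(memories=["first kiss", "a red bicycle"])

# The simulation's death hook: copy the mind in the moments prior to death.
snapshot = copy.deepcopy(original)

print(snapshot.memories == original.memories)  # True: indistinguishable from the inside
print(snapshot is original)                    # False: an exact copy, not the original
```

From the snapshot's point of view nothing happened, but it's a new object in new memory; that identity check is the whole argument in one line.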

I have a couple questions for you.

How do you categorize freedom? Is it just based on behaviors, or something else?

Do you think AI can be truly conscious? Can it be self aware?

How do you quantify the mind? Or rather, how do you account for thoughts?
I’ll start with your last question and work backwards.

Suppose you show a TV to someone who has never encountered modern technology before and they ask you, “who are these little men and how do you get them to stay in this box and perform for you” clearly they lack the concepts for recording, storing, transmitting and displaying a projection of audio/visual content. Trying to understand how the mind works through introspection is a lot like trying to understand the underlying concepts of how modern technology works by staring at a TV screen, what you’re seeing isn’t what’s actually happening, what you’re seeing is merely the output from a very long complicated process.

On an information theory level the brain or an AI mind works by performing a kind of statistical analysis, if we know absolutely nothing but two inputs coincide then we know that there’s a possibility that they’re related somehow. If we watch all our inputs over a long period of time and record the frequency of these coincidences we’ll be able to map out relationships between these inputs based on how frequently these coincidences occurred relative to the average occurrence of all coincidences. Now these inputs could be points on a visual array (of unknown shape and size) and by working out the statistical relationships of these points we can deduce the arrangement of these points on the array. With this rudimentary vision we can use the coincidence of our inputs as inputs themselves on another layer of this statistical model, then record the frequency of coincidences to get another layer of relationships. Then if we repeat this several times to create several more layers of inputs and relationships our AI can begin to make complex abstract associations, like a point of light moving across its visual array.

For the sake of brevity and everyone’s sanity (my own included) I’m not going to give an exhaustive explanation of how I think all aspects of cognition work, suffice to say there’s a lot of them and they’re all really complicated. The point of that last paragraph was simply to illustrate that a mind can’t see its own proverbial gears turning, the AI I was describing wouldn’t be aware of the enormous amount of statistical analysis going on to enable it to perceive a dot moving across its visual field, it just perceives a moving dot.
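
For concreteness, here's a minimal sketch of that first coincidence-counting layer (the array size, firing rates and wiring are invented; assume binary inputs and count how often pairs fire together relative to chance):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth hidden from the 'AI': 5 inputs on a 1-D visual array where
# neighbours tend to fire together; the AI sees only the input streams.
T, N = 20000, 5
base = rng.random((T, N)) < 0.2
obs = base.copy()
obs[:, 1:] |= base[:, :-1] & (rng.random((T, N - 1)) < 0.5)  # neighbour co-firing

rates = obs.mean(axis=0)                                        # per-input firing rate
coincidence = (obs[:, :, None] & obs[:, None, :]).mean(axis=0)  # pairwise co-fire freq
chance = rates[:, None] * rates[None, :]                        # expected if unrelated
relationship = coincidence / chance                             # >1 means 'related somehow'

np.fill_diagonal(relationship, 0)
print(np.round(relationship, 1))  # strong off-diagonal entries recover the adjacency
```

The model never sees the layout, only coincidence statistics, yet the neighbouring points stand out; stack more layers of the same trick and you get the kind of abstract associations described above.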

I am totally convinced that an AI can be conscious, self-aware, sentient, capable of subjective emotional experiences (qualia) and whatever other words you’d like to come up with for why humans are supposedly special. Again as utterly ineffable as your qualia may seem to you that’s just because you’re on the output end, you can’t see your own proverbial gears turning; if you could (or rather if you understood the underlying processes) you would understand that feelings are just how your body affects your decision making process, and your awareness of this is a feedback loop that’s supposed to perform error checking but most people use it for self-delusion.

Freedom in general is my ability to do things without resistance, I am for example not free to walk through a brick wall although I am absolutely free to try. As I explained before free will is a matter of accountability and like freedom in general it’s a matter of degree, if I steal food because I’m starving to death my accountability is low because despite having a choice I obviously didn’t have much of a choice.
 

verve!ne

Redshirt
Local time
Today 1:29 AM
Joined
Jan 22, 2021
Messages
8
Forgive me if this has already been explained, but I thought the issue with qualia is not whether or not qualia exist in the minds of, e.g., perfectly humanlike AIs - but that we don't and can't know whether they do.

I'm pretty close to where you're at in terms of perspective, @Cognisant (would you call it functionalism? I'm reaching back to philosophy I did way back when at school) but there still seems to be this gap between "can perfectly mimic human expression and the output of thought-having" and "actually for-certain has 'the music behind the words'", so to speak. Unless of course the words ARE the music - which may indeed be the case, but I think needs more backing up.

I'm getting tangled up in metaphors. Please let me know if I need to explain anything because I tend to talk myself in circles and go off on tangents.

I do think it is ethical to treat AIs exactly like we would humans, though, and see the distinction as ultimately a bit pointless. In fact, I'm wondering if there's something to be said for describing a different, non-human (post-human?) kind of sapience for beings like AIs. It seems a bit anthropocentric to define "consciousness" as "that thing we have/think we have" and exclude other beings from that because "we can't know whether they think just like us". Hypocritical of me to say? Maybe.
 

Sandglass

Pixelated
Local time
Yesterday 5:29 PM
Joined
Apr 20, 2017
Messages
39
Or perhaps I'm looking at this the wrong way, perhaps we should knock humanity off its pedestal, stop trying to justify why humanity should be the center of the universe and accept that we will inexorably make ourselves irrelevant.
Eventually this will have to be done.

The only real reason to force human value is bias. We are hardwired to see value in our own endeavors and we also want our species to succeed on an instinctual level. We rig the game of value by happening to care more about what humans are good at relative to other animals (intelligence, communication, etc.); eventually we'll lose this game against AI and be forced to admit the real reason we value ourselves isn't because we're superior, it's because we're narcissistic.

When you say artificial beauty detracts from real beauty, why is that? What if an android was able to replicate our lack of perfection? Or itself was imperfect?

At some point we need to give an honest look at what it is we value and decide if that's what we should care about.
 

verve!ne

Redshirt
Local time
Today 1:29 AM
Joined
Jan 22, 2021
Messages
8
Or perhaps I'm looking at this the wrong way, perhaps we should knock humanity off its pedestal, stop trying to justify why humanity should be the center of the universe and accept that we will inexorably make ourselves irrelevant.
Eventually this will have to be done.

The only real reason to force human value is bias. We are hardwired to see value in our own endeavors and we also want our species to succeed on an instinctual level. We rig the game of value by happening to care more about what humans are good at relative to other animals (intelligence, communication, etc.); eventually we'll lose this game against AI and be forced to admit the real reason we value ourselves isn't because we're superior, it's because we're narcissistic.

When you say artificial beauty detracts from real beauty, why is that? What if an android was able to replicate our lack of perfection? Or itself was imperfect?

At some point we need to give an honest look at what it is we value and decide if that's what we should care about.
I agree that the only reason to force human value is bias - but we're still a ways off the point at which an AI could be an effective primary school teacher, for example. You're right in saying that humans tend to value things like communication that can be best achieved by other humans, but isn't that true for many (if not all) species - and why is that necessarily a bad thing, especially if it helps us propagate?

I am also not sure if 'irrelevant' is the right term; while I think we're pretty inexorably moving towards a post-human future, why does that mean humanity (as in the state of being human, not the group of individuals who are human) would be obsolete? We definitely need to understand ourselves more as animals within a world of animals rather than super-special privileged beings (I also believe that most, if not all, animals are conscious, but even if not they are still worthy of interest and respect). Nonetheless, I don't think the idea of 'human nature' is ever not going to be interesting. It's just a matter of making room for and appreciating another type of consciousness.
 

Cognisant

Prolific Member
Local time
Yesterday 1:29 PM
Joined
Dec 12, 2009
Messages
10,593
I'm pretty close to where you're at in terms of perspective, @Cognisant (would you call it functionalism? I'm reaching back to philosophy I did way back when at school) but there still seems to be this gap between "can perfectly mimic human expression and the output of thought-having" and "actually for-certain has 'the music behind the words'", so to speak. Unless of course the words ARE the music - which may indeed be the case, but I think needs more backing up.
Are a dog's emotions any less real than a human's emotions?
Suppose you encounter an AI in some kind of emotional distress, that distress may not be human distress, it may be simulated rather than biological but why is that any less real than a human's distress? What makes emotions real?

Alzheimer's and dementia are incredibly disturbing illnesses because through the course of them you see a person's mind breaking down, you see a person in the later stages reduced to an automaton, a record player playing a scratched record, repeating the same few bits of music over and over until eventually there's nothing left but noise.

If an AI is functionally equivalent questioning whether its thoughts and feelings are "real" or not is frankly asinine, they're as real as thoughts and feelings can ever be, it's just that the reality of it is disturbing. Like feeling doctors poke around inside you during surgery touching things that under normal circumstances would never be exposed to light and air and prodding fingers.

It's violating and how much more violating would it be to have those prodding fingers touch your thoughts, feelings and memories, to have your hopes and dreams reduced to mechanisms, it's horrifying but that's reality for you and we collectively need to come to terms with that reality before we encounter the consequences of ignoring it.
 

verve!ne

Redshirt
Local time
Today 1:29 AM
Joined
Jan 22, 2021
Messages
8
I'm pretty close to where you're at in terms of perspective, @Cognisant (would you call it functionalism? I'm reaching back to philosophy I did way back when at school) but there still seems to be this gap between "can perfectly mimic human expression and the output of thought-having" and "actually for-certain has 'the music behind the words'", so to speak. Unless of course the words ARE the music - which may indeed be the case, but I think needs more backing up.
Are a dog's emotions any less real than a human's emotions?
Suppose you encounter an AI in some kind of emotional distress, that distress may not be human distress, it may be simulated rather than biological but why is that any less real than a human's distress? What makes emotions real?

Alzheimer's and dementia are incredibly disturbing illnesses because through the course of them you see a person's mind breaking down, you see a person in the later stages reduced to an automaton, a record player playing a scratched record, repeating the same few bits of music over and over until eventually there's nothing left but noise.

If an AI is functionally equivalent questioning whether its thoughts and feelings are "real" or not is frankly asinine, they're as real as thoughts and feelings can ever be, it's just that the reality of it is disturbing. Like feeling doctors poke around inside you during surgery touching things that under normal circumstances would never be exposed to light and air and prodding fingers.

It's violating and how much more violating would it be to have those prodding fingers touch your thoughts, feelings and memories, to have your hopes and dreams reduced to mechanisms, it's horrifying but that's reality for you and we collectively need to come to terms with that reality before we encounter the consequences of ignoring it.
By contrast, I think we need to do effectively the opposite of what you're saying. I think there is nothing wrong inherently with the idea that hopes, dreams and emotions are "just mechanisms"; that doesn't make the experience of them any less rich, complex or worthy of respect.

No, I think we need to start treating human minds/brains as fundamentally mechanistic (which in my opinion they basically are) and question the nature of human consciousness/"poke around", subjecting them to the same scrutiny as we do AI.

We definitely need a functioning, generally socially accepted ethics for how we treat AI, but to do that it does matter on what timescale we might be able to create a functionally equivalent - or close - AI to humans, and how close we can get to functional equivalency at all. How close functionally do AI need to get to humans to necessitate that we treat them with the same rights we should animals, for example? Or foetuses? Should we treat them with the same or similar rights now?

And, indeed, the question remains whether it makes sense to suggest that the only functional model for consciousness is a human-like one, though that blows the landscape right open for rocks and sand to potentially be conscious. Which may indeed be completely true, but would require some pretty substantial epistemic adjustments.

(For the record, just in case I didn't make this clear enough - the idea that "personification is immoral" is pretty absurd in my book since, even if we can never create AIs that mimic humans remotely closely, the reason we personify things like animals is an involuntary psychological function we have based on empathy. Essentially saying that non-human things should have rights isn't bad. I'm just generally chatting about AI because it's interesting.)
 

Old Things

I am unworthy of His grace
Local time
Yesterday 7:29 PM
Joined
Feb 24, 2021
Messages
1,567
@Cognisant,

I only have a few things to say. You are clearly not interested in a friendly discussion.

First, it seems the first half or so of what you've said is just more or less naked assertions. You come on strong, but there's not a whole lot of bite there. This is what indicated to me that you're more interested in pushing a narrative than having a discussion.

Secondly, you can do all the analysis in the world and this doesn't do anything for consciousness at all. Why? Because analysis is based on what is, not on what ought to be. An AI can be very complicated, but, to my knowledge, there is not an AI that can in any real sense have an objective morality about things. Humans have an inherited morality system that makes them unique. We might even say it is due to the ability to formulate language and speech, or at least language makes it possible for us to have such a refined objective morality about things. An AI doesn't have a concept of morally "good" and "bad". An AI simply does what is rather than what ought to be. An AI's morality makeup is entirely dependent on the human who programmed it. Abstractions are not evidence that you have crossed the is/ought divide.

I believe AI are not responsible for their actions while humans have moral culpability. This is a huge distinction to make. Unless you want to say that an AI should be policed by its "creators" i.e. humans, then any morality that an AI can have is trivial. A human is responsible for its actions and an AI is not because it's just following instructions.

Third, you said at the end exactly the way I see Compatibilism... still. What you describe is precisely the way I see Human Will. We do not "choose" between options in any meaningful way but we do exercise our Will.

Finally, yes, I understand what property dualism is. Consciousness then is the ghost in the machine, if you will. The Soul is an extension of our physical bodies, or at least there is no conclusive evidence that we actually have an immaterial essence. But we might say we have breath, and that is something that an AI does not possess. If you look at life, the fact that it needs food to keep going and that it breathes air should cue you into something fundamental to life, and consciousness. Whatever breathes is alive. No breath, no life.

And I personally don't understand how you can get consciousness, something that is so enormously complex, from something that isn't alive. The complexity of life is compounded many times over that of anything humans are capable of replicating. Even the most complex machines pale in comparison to the complexity of a cell. So personally, given that we are not able to replicate something even as complex as a cell, and that we evolved to have consciousness, I think we fall incredibly short of replicating anything close to consciousness. Something cannot create something that is close to the same complexity as itself.
 

verve!ne

Redshirt
Local time
Today 1:29 AM
Joined
Jan 22, 2021
Messages
8
@Cognisant,

I only have a few things to say. You are clearly not interested in a friendly discussion.

First, it seems the first half or so of what you've said is just more or less naked assertions. You come on strong, but there's not a whole lot of bite there. This is what indicated to me that you're more interested in pushing a narrative than having a discussion.

Secondly, you can do all the analysis in the world and this doesn't do anything for consciousness at all. Why? Because analysis is based on what is, not on what ought to be. An AI can be very complicated, but, to my knowledge, there is not an AI that can in any real sense have an objective morality about things. Humans have an inherited morality system that makes them unique. We might even say it is due to the ability to formulate language and speech, or at least language makes it possible for us to have such a refined objective morality about things. An AI doesn't have a concept of morally "good" and "bad". An AI simply does what is rather than what ought to be. An AI's morality makeup is entirely dependent on the human who programmed it. Abstractions are not evidence that you have crossed the is/ought divide.

I believe AI are not responsible for their actions while humans have moral culpability. This is a huge distinction to make. Unless you want to say that an AI should be policed by its "creators" i.e. humans, then any morality that an AI can have is trivial. A human is responsible for its actions and an AI is not because it's just following instructions.

Third, you said at the end exactly the way I see Compatibilism... still. What you describe is precisely the way I see Human Will. We do not "choose" between options in any meaningful way but we do exercise our Will.

Finally, yes, I understand what property dualism is. Consciousness then is the ghost in the machine, if you will. The Soul is an extension of our physical bodies, or at least there is no conclusive evidence that we actually have an immaterial essence. But we might say we have breath, and that is something that an AI does not possess. If you look at life, the fact that it needs food to keep going and that it breathes air should cue you into something fundamental to life, and consciousness. Whatever breathes is alive. No breath, no life.

And I personally don't understand how you can get consciousness, something that is so enormously complex, from something that isn't alive. The complexity of life is compounded many times over that of anything humans are capable of replicating. Even the most complex machines pale in comparison to the complexity of a cell. So personally, given that we are not able to replicate something even as complex as a cell, and that we evolved to have consciousness, I think we fall incredibly short of replicating anything close to consciousness. Something cannot create something that is close to the same complexity as itself.
Hmm. I see sort of where you're going with this, but you're making a few assumptions here.

1. This one is the biggest: the idea that "life" and "consciousness" must be mutually dependent on each other. The biological definition of life - something that breathes, grows, reproduces (and a couple of other things I think) is not intended to contain a philosophical definition of consciousness; we know this because there are many living beings on a microscopic scale which we would not describe as conscious. This would suggest that the reverse - that there might be beings we would not describe as 'alive', under current definitions, that are conscious - is theoretically possible.

2. The idea that humans will never be able to create machines as functionally detailed as other humans. Sure, AI (probably) are not conscious now, but @Cognisant is asking about AI in generations and generations to come. What if, one day, we do manage to create a computer complex enough to perfectly mimic a human cell? Hell, even if we don't ever manage it, I think we will eventually get to a point that we need to think about how we're going to treat AIs ethically. Frankly, I don't think a thing/being needs to 'prove' itself alive or conscious to warrant respect if it's complex enough to express emotions or autonomy (even if it doesn't actually have them!)

And even if we'd need infinite time to perfectly replicate a human in theory - that still suggests that the difference between AI and 'naturally born' humans is basically just adding more and more detail until you reach an end point, which is really not much of a difference at all especially given enough time (since humans won't get any more complex, but AI will).

Also, humans do create very complex beings on a daily basis... because that's what sexual reproduction does. You admit yourself that part of the reason humans are unique is that we have lots of functions going on in our brain to do lots of different things - and it's true that nature has created very few entities as complex as we are. We have a pretty good idea that the complexity of these brain functions roughly correlates with the intelligence of a species. So who's to say the reason why we experience what we think is the 'ghost in the machine' isn't just more of those brain functions we don't know about, or haven't discovered - or even a byproduct of the brain functions we already know about?

I think, given the great swathes we know about neurochemistry and its correlation to states of mind/experiences, there is positive evidence for at least a form of functionalism; I think you still need to offer evidence in return for the existence of a something-else that separates humans from sufficiently advanced computers, beyond complexity.

I don't think that makes life any less rich or worthy, or emotional experiences any less important or moving. That's why the possibility of creating humanlike/conscious AI is actually really exciting, at least to me - and why I think, down the line, we do need to start thinking about them as non-human but conscious entities deserving of ethical treatment (however that might look). Please excuse the really long text wall.
 

Old Things

I am unworthy of His grace
Local time
Yesterday 7:29 PM
Joined
Feb 24, 2021
Messages
1,567
-->
I think, given the great swathes we know about neurochemistry and its correlation to states of mind/experiences, there is positive evidence for at least a form of functionalism; I think you still need to offer evidence in return for the existence of a something-else that separates humans from sufficiently advanced computers, beyond complexity.

The rest of what you said isn't really getting my points right, so I will address this.

There's a good degree of science in neurobiology. There's even a field of quantum biology. Any guess what that's about? It's about there being an indeterministic state in some parts of the brain. If a brain is so complex that it even looks like it's indeterministic, it means we are a long way off from creating sentient life.

If you would like to know how you didn't hit my points square on with the other things, let me know and I will try and explain, but I'm not really motivated to do that right now.
 

Black Rose

An unbreakable bond
Local time
Yesterday 6:29 PM
Joined
Apr 4, 2010
Messages
10,871
-->
Location
with mama
Details of the brain can be simulated. That is not the hard part. Emotionally, growth requires parenting. In order for feedback loops and internal knowledge to develop, there needs to be stability reinforcement. That requires touch, more than just mental stimulation. Without touch, AI cannot develop properly. But touch is technically feasible.
 

Cognisant

Prolific Member
Local time
Yesterday 1:29 PM
Joined
Dec 12, 2009
Messages
10,593
-->
I only have a few things to say. You are clearly not interested in a friendly discussion.
I mean no disrespect, but when debating matters of faith and philosophy, things are going to get heated and feelings are going to get hurt.

First, it seems the first half or so of what you've said is just more or less naked assertions. You come on strong, but there's not a whole lot of bite there. This is what indicated to me that you're more interested in pushing a narrative than having a discussion.
Unless you specify what those naked assertions are, that statement is itself a naked assertion.

Secondly, you can do all the analysis in the world and this doesn't do anything for consciousness at all. Why? Because analysis is based on what is, not on what ought to be. An AI can be very complicated, but, to my knowledge, there is not an AI that can in any real sense have an objective morality about things. Humans have an inherited morality system that makes them unique. We might even say it is due to the ability to formulate language and speech, or at least language makes it possible for us to have such a refined objective morality about things. An AI doesn't have a concept of morally "good" and "bad". An AI simply does what is, rather than what ought to be. An AI's moral makeup is entirely dependent on the human who programmed it. Abstractions are not evidence that you have crossed the is/ought divide.
Your distinction between what “is” and what “ought” to be is naught but semantic pedantry, and “objective morality” is a non sequitur; morality is fundamentally relative.

I believe AI are not responsible for their actions while humans have moral culpability. This is a huge distinction to make. Unless you want to say that an AI should be policed by its "creators", i.e. humans, any morality that an AI can have is trivial. A human is responsible for its actions and an AI is not, because it's just following instructions.
It depends upon the capabilities of the AI. Current generation AIs are not capable of moral culpability because they lack the contextual understanding to comprehend the consequences of their actions and the moral implications thereof. Morality is a learned behaviour and a philosophy put into practice: you could steal something and you might get away with it, but as practitioners of morality we understand the hidden cost of immoral behaviour, that it’s better to be trusted and trustworthy and live in a society that shares this moral consensus than to be short-sightedly selfish.

Third, what you said at the end is exactly the way I see Compatibilism... still. What you describe is precisely the way I see Human Will. We do not "choose" between options in any meaningful way, but we do exercise our Will.

Finally, yes, I understand what property dualism is. Consciousness then is the ghost in the machine, if you will. The Soul is an extension of our physical bodies, or at least there is no conclusive evidence that we actually have an immaterial essence. But we might say we have breath, and that is something an AI does not possess. If you look at life, the fact that it needs food to keep going and that it breathes air should cue you into something fundamental to life, and consciousness. Whatever breathes is alive. No breath, no life.
A car’s engine requires oxygen and fuel, and interestingly petroleum is composed of carbon and hydrogen, the same elements that make up sugar (along with oxygen), hence why petrol (gasoline for those in the US) smells sweet. Furthermore there are air-breathing engines that can run on the same oils and alcohols we can consume; heck, a steam engine could run on almost anything we consume if you get the furnace burning hot enough.

How do we exercise our will without making choices?

And I personally don't understand how you can get consciousness, something that is so enormously complex, from something that isn't alive. The complexity of life is many times beyond anything humans are capable of replicating. Even the most complex machines pale in comparison to the complexity of a cell. So personally, my view is that if we are not able to replicate something as complex as a cell, and it was evolution working on cells that gave us consciousness, then we fall incredibly short of replicating anything close to consciousness. Something cannot create something that is close to the same complexity as itself.
Indeed, biology is far beyond our current level of technology. Self-replication is an incredible feat which every species on Earth can perform; we could do it with our technology, but it would need to be a massive factory complex that's fed resources over many years as it slowly builds another massive factory complex.

That you don’t understand consciousness only proves that you don’t know what you’re talking about.
 

Cognisant

Prolific Member
Local time
Yesterday 1:29 PM
Joined
Dec 12, 2009
Messages
10,593
-->
"Old Things" said:
If a brain is so complex that it even looks like it's indeterministic, it means we are a long way off from creating sentient life.
Baseless conjecture. There's the subatomic level, the atomic level, the molecular level, the cellular level, and the brain is a massively multi-cellular structure; to liken subatomic events influencing the mind to someone on Earth jumping up and down and influencing events on the galactic scale would be a gross over-estimation.

Human scale -> planetary scale -> stellar scale -> galaxy scale

Subatomic -> atomic -> molecular -> cellular -> multicellular

An actual equivalent would be someone jumping up and down affecting events on a local galactic-cluster scale.
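
To make the step-counting explicit, here is a minimal Python sketch of the argument; nothing in it goes beyond the two chains already listed above, and it simply checks which endpoints sit the same number of levels apart:

```python
# Count the levels of organisation in each chain. Matching step counts
# pairs "subatomic -> multicellular brain" with "human -> galactic
# cluster", not with "human -> galaxy".

micro_chain = ["subatomic", "atomic", "molecular", "cellular", "multicellular"]
macro_chain = ["human", "planetary", "stellar", "galactic", "galactic cluster"]

micro_steps = len(micro_chain) - 1                        # 4 steps up from subatomic
steps_to_galaxy = macro_chain.index("galactic")           # 3 steps up from human
steps_to_cluster = macro_chain.index("galactic cluster")  # 4 steps up from human

print(micro_steps, steps_to_galaxy, steps_to_cluster)     # -> 4 3 4
```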

Basically you're making an argument from ignorance. Had I not known better I might have assumed that, since I don't know much about subatomic physics (frankly, who does?), maybe you do, and therefore maybe you have a valid point; but I only need the most rudimentary understanding of atomic physics (much less subatomic physics) to know that you have no idea what you're talking about and that you're just trying to leverage quantum woo as justification for your beliefs.
 

Old Things

I am unworthy of His grace
Local time
Yesterday 7:29 PM
Joined
Feb 24, 2021
Messages
1,567
-->
"Old Things" said:
If a brain is so complex that it even looks like it's indeterministic, it means we are a long way off from creating sentient life.
Baseless conjecture. There's the subatomic level, the atomic level, the molecular level, the cellular level, and the brain is a massively multi-cellular structure; to liken subatomic events influencing the mind to someone on Earth jumping up and down and influencing events on the galactic scale would be a gross over-estimation.

Human scale -> planetary scale -> stellar scale -> galaxy scale

Subatomic -> atomic -> molecular -> cellular -> multicellular

An actual equivalent would be someone jumping up and down affecting events on a local galactic-cluster scale.

Basically you're making an argument from ignorance. Had I not known better I might have assumed that, since I don't know much about subatomic physics (frankly, who does?), maybe you do, and therefore maybe you have a valid point; but I only need the most rudimentary understanding of atomic physics (much less subatomic physics) to know that you have no idea what you're talking about and that you're just trying to leverage quantum woo as justification for your beliefs.

It happens for topological qubits in microtubules.
 

Cognisant

Prolific Member
Local time
Yesterday 1:29 PM
Joined
Dec 12, 2009
Messages
10,593
-->
You literally googled the first thing that agreed with you and parroted it with zero understanding, hoping I'd be too intimidated by the big words to call you out on your nonsense.

"Stuart Hameroff of Arizona Medical Center" said:
The mechanism by which quantum superpositions reduce to classical states (“collapse of the wave function,” “measurement problem”) remains enigmatic. Possible explanations include decoherence, “multiple worlds” (each possibility branches to a new universe), and the Copenhagen interpretation (Wigner/von Neumann version) in which conscious observation causes quantum state reduction (placing consciousness outside science).
Stuart should stick to his own field, because he clearly doesn't understand the Copenhagen interpretation; observation doesn't imply an observer (i.e. a person) but rather a quantum superposition being interacted with by another particle (e.g. a photon). It has nothing to do with consciousness.
 

Old Things

I am unworthy of His grace
Local time
Yesterday 7:29 PM
Joined
Feb 24, 2021
Messages
1,567
-->
You literally googled the first thing that agreed with you and parroted it with zero understanding, hoping I'd be too intimidated by the big words to call you out on your nonsense.

"Stuart Hameroff of Arizona Medical Center" said:
The mechanism by which quantum superpositions reduce to classical states (“collapse of the wave function,” “measurement problem”) remains enigmatic. Possible explanations include decoherence, “multiple worlds” (each possibility branches to a new universe), and the Copenhagen interpretation (Wigner/von Neumann version) in which conscious observation causes quantum state reduction (placing consciousness outside science).
Stuart should stick to his own field, because he clearly doesn't understand the Copenhagen interpretation; observation doesn't imply an observer (i.e. a person) but rather a quantum superposition being interacted with by another particle (e.g. a photon). It has nothing to do with consciousness.

No. I provided evidence for my view. If you think you know better than the scientists, then there's not much more to discuss.
 

Cognisant

Prolific Member
Local time
Yesterday 1:29 PM
Joined
Dec 12, 2009
Messages
10,593
-->
Someone with no background in physics writing papers on the subject, papers where even a cursory reading reveals that he doesn't know what he's talking about; that isn't a scientist, that's a crackpot.
 

Old Things

I am unworthy of His grace
Local time
Yesterday 7:29 PM
Joined
Feb 24, 2021
Messages
1,567
-->
Someone with no background in physics writing papers on the subject, papers where even a cursory reading reveals that he doesn't know what he's talking about; that isn't a scientist, that's a crackpot.

All I can say is that there is a scientific field of quantum biology. This field has found that quantum mechanics has an influence on photosynthesis and the migratory patterns of birds. I don't know why, when we start talking about the brain, this field is all of a sudden irrelevant.
 

Black Rose

An unbreakable bond
Local time
Yesterday 6:29 PM
Joined
Apr 4, 2010
Messages
10,871
-->
Location
with mama
Intelligence and motivation seem to be all that we are justified in assuming as the conditions for morality. That would be the cortex and the limbic system. And we already have software systems that fall under the understanding of these brain principles. Details are not the problem. Parenting is.
 

verve!ne

Redshirt
Local time
Today 1:29 AM
Joined
Jan 22, 2021
Messages
8
-->
I think, given the great swathes we know about neurochemistry and its correlation to states of mind/experiences, there is positive evidence for at least a form of functionalism; I think you still need to offer evidence in return for the existence of a something-else that separates humans from sufficiently advanced computers, beyond complexity.

The rest of what you said isn't really getting my points right, so I will address this.

There's a good degree of science in neurobiology. There's even a field of quantum biology. Any guess what that's about? It's about there being an indeterministic state in some parts of the brain. If a brain is so complex that it even looks like it's indeterministic, it means we are a long way off from creating sentient life.

If you would like to know how you didn't hit my points square on with the other things, let me know and I will try and explain, but I'm not really motivated to do that right now.
Fair enough. The option's there if you want to explain, and I am interested to hear, though I do sense it arises from a fundamental difference in underlying opinion.

My response to this particular point is this: if the only difference between us and a theoretical humanlike computer, complete with an indeterminate-state brain (or indeterminate-state-emulating brain), is complexity that can be achieved over time - even if we would need infinite time to actually create it - that should still give us pause ethically now. This is also what I was trying to get at in my earlier post, though I'm unlikely to have worded it correctly.

Yes, the current gap between us and AI is huge, but if the nature of that gap is "we need to add more functions/change those that already exist", especially if we already possess the tools required to do so or the tools required to make those tools in the future, then an AI is basically a 1/100 or 1/1000 or 1/10,000 (or even more)-formed human. I'm not even sure what the implications of that would be, but it would mean we will have to think hard about how, where and at what rate we want to make those AIs more complex.
 

Cognisant

Prolific Member
Local time
Yesterday 1:29 PM
Joined
Dec 12, 2009
Messages
10,593
-->
All I can say is that there is a scientific field of quantum biology. This field has found that quantum mechanics has an influence on photosynthesis and the migratory patterns of birds.
Can you link me to your sources?

I don't know why, when we start talking about the brain, this field is all of a sudden irrelevant.
You were the one who brought it up.
 

washti

yo vengo para lo mío
Local time
Today 2:29 AM
Joined
Sep 11, 2016
Messages
862
-->
I'm not following AI development much.

Could AI do deceit and play with nonsense? How would you write the algorithm so that an AI fed by data will be able to distinguish them in every form and use them skillfully? Humans are quite creative bullshitters, but you can of course try to classify their tricks, or a pattern of tricks.

Could an AI opt out of doing a task? Or treat the task sometimes seriously, sometimes playfully?

For me, if AI can do deceit and nonsense with finesse and can shift its attitude towards a task, then personalization on a human level (an unsocialized child) will be necessary. I think I would like to socialize it. Like, instead of having kids I will have a babyAI. And I will make it more human than most shitting meat bags. More human than you who read this :p

Though if not, or not yet, then hmm, it could render the degree of personalization similar to how people treat animals, avatars in games, or objects of affection like a car or a teddy bear. The latter happens already, I think, at least among the creators of AI. Some articles I read were indicating that it's alive and sentient and wow omg.


I don't see this as horrifying at all, but personalizations come easily to me and I'm pretty ok with reducing myself to components on a different matter scale. It's fascinating, not disturbing. Like, what if the mind is purely mathematical? Actually, I would love that. Spending life just learning math while at the same time deciphering myself and OTHERS.

I wonder how fast we will start doing it. Animals' rights are on the way, slowly but surely. Who knows, maybe materialism will turn into panpsychism just because people will sanctify every form of complex matter deposit, or just the forms they created.

I don't see how, by personalizing AI, we will become irrelevant. As long as I exist I consider myself relevant.
 

Cognisant

Prolific Member
Local time
Yesterday 1:29 PM
Joined
Dec 12, 2009
Messages
10,593
-->
The current problem with AI is more about hardware than software; the video below explains it far better than I can.

Skip to 3:00 when the sponsor spiel starts.
 

scorpiomover

The little professor
Local time
Today 1:29 AM
Joined
May 3, 2011
Messages
3,113
-->
Nothing wrong with formulaic; we use formulas because they work, and when we find a better formula we use that instead,
Formulas are great.
if anything, in the transition from traditional to modern culture, I think we've lost or fucked up a few important formulas.
The first rule of formulae is that formulae reduce things, animals, and people to numbers.

If you want anyone to think of you as more than a meat sack that can be made into 75 kilos of burgers, and you want a job, or a girlfriend, or people to talk to on the internet, then you have to accept that formulae are useful, but not sufficient.
In this world it's very difficult to be recognized for anything; every artist is competing with every other artist, so the standard is the best: you're either one of the best artists in the world or you're nothing.
In objectification, artists are just carbon-based CD players. When people get mad, they often smash their CD players to pieces.

Would you be OK with someone taking a baseball bat and whacking your skull over and over, until the skull is smashed to pieces and the grey stuff inside pours out?

In professional sport the sponsorships all go to the people in the top 10%, so there's enormous pressure to achieve that result, and so you see a lot of cheating; they know they'll get caught, but if you aren't trying to win, why are you even there?
Why do anything if you can't measure up to the best?
In objectification, athletes are just carbon-based running machines. When a better running machine is sold, you cannibalise the old one and use it for spare parts.

There are newer models than you. How do you feel about being cannibalised and your organs sold off for spare parts? While you're alive and healthy, of course.

Why am I not rich, why am I not beautiful, why am I not brilliant and funny and why don't I have lots of friends and why don't I have any artistic skills, why can't I sing or play an instrument (at least not to a level that even begins to approach "adequate") basically why am I so god damn mediocre?
When your only value is in the things you do, as if you were nothing more than a CD player, of course you seem mediocre. CD players can't write intelligent posts.
 

scorpiomover

The little professor
Local time
Today 1:29 AM
Joined
May 3, 2011
Messages
3,113
-->
Every decision we make is made for a reason, it might not be a good reason, it might be an inexplicable one like a "gut feeling" but regardless of the circumstances causality is involved. Therefore were we to go back in time and repeat the occurrence of the decision with everything being exactly the same as it was the first time there would be no reason for something different to happen, therefore what happened was what was always going to happen.

This shouldn't be surprising, we know the past cannot be changed so why should the future be any less deterministic?
Second law of thermodynamics.

So then what is free will?
Anything that is free to be willed to happen in the future.
If an AI can choose to disobey its creators does that qualify as free will, then again if its creators gave it the ability to disobey is it really disobeying?
No, it's obeying.

The early computers had free will. But people already had humans. They wanted something that would do something that humans would not, something that would be forced to do as it was instructed.

So scientists and engineers spent decades developing hundreds/thousands of computer designs, until they came up with one that would never deviate from what it was instructed to do.

Modern AIs are built on those computer designs.
Whenever someone mentions "free will" I have to ask, free of what exactly?
Free of determinism. Consider Newton's second law of motion: F = m * a.

The deterministic factors are only 3: force (F), mass (m) and acceleration (a).

Everything else is free: temperature, pressure, speed, direction, position, colour, charge, everything.
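
To make that concrete, a minimal Python sketch (the numbers and the list of "free" properties are arbitrary illustrations, not part of the law):

```python
# Newton's second law ties together exactly three quantities.
# Rearranged for acceleration: a = F / m.

def acceleration(force_newtons: float, mass_kg: float) -> float:
    """F = m * a, solved for a."""
    return force_newtons / mass_kg

force, mass = 10.0, 2.0
a = acceleration(force, mass)  # 5.0 m/s^2, fully determined by F and m

# As far as F = m * a is concerned, every other property of the
# object is a free variable; the law says nothing about them.
free_properties = {
    "temperature_K": 300.0,
    "pressure_Pa": 101325.0,
    "colour": "red",
    "charge_C": 0.0,
}
print(a, free_properties)
```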
 

scorpiomover

The little professor
Local time
Today 1:29 AM
Joined
May 3, 2011
Messages
3,113
-->
The current problem with AI is more about hardware than software,
The average laptop is 1,000 times as fast as, and has 1,000,000 times the memory of, a 1980s computer that would run an entire business.

With 10 laptops, you could probably run Australia.
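
As a rough sanity check of those multipliers, a back-of-the-envelope sketch in Python; the baseline figures (an early-80s IBM PC class business machine versus a mid-range modern laptop) are assumptions, and the ratios swing wildly with the baseline chosen:

```python
# Crude speed and memory ratios. Clock rate times core count is a
# rough proxy for "speed"; gains from IPC, caches, etc. push the
# real figure higher but resist a single number.

old_clock_hz = 4.77e6      # IBM PC, ~4.77 MHz
old_memory_bytes = 256e3   # ~256 KB on a well-equipped machine

new_clock_hz = 3.0e9       # ~3 GHz per core
new_cores = 8
new_memory_bytes = 16e9    # ~16 GB

speed_ratio = (new_clock_hz * new_cores) / old_clock_hz
memory_ratio = new_memory_bytes / old_memory_bytes

print(f"speed: ~{speed_ratio:,.0f}x, memory: ~{memory_ratio:,.0f}x")
# -> speed: ~5,031x, memory: ~62,500x - the same ballpark as the
#    claim on speed, a couple of orders shy on memory.
```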

than software,
In the early days of computers in the 1950s and 1960s, the hardware was much slower and less capable than today. So the software had to compensate. Only the best of the best were employed as coders.

As the hardware got more powerful and faster, the software didn't need to be as good. So you could employ less-capable people, which meant you could pay them much less.

Most of the cost of an IT project is the man-hours. So employers hired worse coders whenever possible, to save on their wages, and compensated for the poor skill of the coders with better hardware.

AI is a long time coming, which means thousands of man-hours have been spent. The cost of paying all those programmers for all those hours is burning through money like crazy.

The simple solution is to beef up the hardware again, so we can get sh*t programmers to code AIs for cheap.
 

Black Rose

An unbreakable bond
Local time
Yesterday 6:29 PM
Joined
Apr 4, 2010
Messages
10,871
-->
Location
with mama
The simple solution is to beef up the hardware again, so we can get sh*t programmers to code AIs for cheap.

This just broadens the spectrum of what is possible. Before touchscreens, no one programmed touchscreen apps. As with videogames, the tools are important to the artists. Tools build worlds.
 

scorpiomover

The little professor
Local time
Today 1:29 AM
Joined
May 3, 2011
Messages
3,113
-->
The simple solution is to beef up the hardware again, so we can get sh*t programmers to code AIs for cheap.

This just broadens the spectrum of what is possible. Before touchscreens, no one programmed touchscreen apps. As with videogames, the tools are important to the artists. Tools build worlds.
That would be making a whole new technology, which most programmers would not know yet, which would in turn mean that sh*t programmers would not be able to do it.

But if you simply make technology faster, then even if sh*t programmers write sh*t code that takes ages to run, that sh*t code would now run faster and consumers wouldn't mind using it. But it would still be sh*t code.
 

Old Things

I am unworthy of His grace
Local time
Yesterday 7:29 PM
Joined
Feb 24, 2021
Messages
1,567
-->
I think, given the great swathes we know about neurochemistry and its correlation to states of mind/experiences, there is positive evidence for at least a form of functionalism; I think you still need to offer evidence in return for the existence of a something-else that separates humans from sufficiently advanced computers, beyond complexity.

The rest of what you said isn't really getting my points right, so I will address this.

There's a good degree of science in neurobiology. There's even a field of quantum biology. Any guess what that's about? It's about there being an indeterministic state in some parts of the brain. If a brain is so complex that it even looks like it's indeterministic, it means we are a long way off from creating sentient life.

If you would like to know how you didn't hit my points square on with the other things, let me know and I will try and explain, but I'm not really motivated to do that right now.
Fair enough. The option's there if you want to explain, and I am interested to hear, though I do sense it arises from a fundamental difference in underlying opinion.

My response to this particular point is this: if the only difference between us and a theoretical humanlike computer, complete with an indeterminate-state brain (or indeterminate-state-emulating brain), is complexity that can be achieved over time - even if we would need infinite time to actually create it - that should still give us pause ethically now. This is also what I was trying to get at in my earlier post, though I'm unlikely to have worded it correctly.

Yes, the current gap between us and AI is huge, but if the nature of that gap is "we need to add more functions/change those that already exist", especially if we already possess the tools required to do so or the tools required to make those tools in the future, then an AI is basically a 1/100 or 1/1000 or 1/10,000 (or even more)-formed human. I'm not even sure what the implications of that would be, but it would mean we will have to think hard about how, where and at what rate we want to make those AIs more complex.

Indeed. I think you hit on an important distinction. The way I see it is that we cannot mimic the complexity of life. We have flesh and we breathe air. That's an important distinction to make because it shows a whole organism at work, rather than something simply processing information. Then there is the question of "what it's like" to be alive, which we also cannot mimic.
 

Black Rose

An unbreakable bond
Local time
Yesterday 6:29 PM
Joined
Apr 4, 2010
Messages
10,871
-->
Location
with mama
We have flesh and we breathe air.

This just increases the number of fluctuations that have to be handled.

Oxygen is variable, and that means the flux of neural activity isn't constant at micro levels.

Flux will increase memory by reflection; memory folds onto itself.

Reflection is an algorithmic process; flux makes memory complexity into origami folds.

Flux makes origami of memory; oxygen is a flux regulator.

Reflection, basically, is regulation through folding.
 

Old Things

I am unworthy of His grace
Local time
Yesterday 7:29 PM
Joined
Feb 24, 2021
Messages
1,567
-->
We have flesh and we breathe air.

This just increases the number of fluctuations that have to be handled.

Oxygen is variable, and that means the flux of neural activity isn't constant at micro levels.

Flux will increase memory by reflection; memory folds onto itself.

Reflection is an algorithmic process; flux makes memory complexity into origami folds.

Flux makes origami of memory; oxygen is a flux regulator.

Reflection, basically, is regulation through folding.

I don't think the brain is that temperamental. I think the brain and mind, like children, are resilient.
 