
Morality for AI

own8ge

Existential Nihilist
Local time
Today 12:23 AM
Joined
May 31, 2012
Messages
1,039
---
I'm trying to get a better grip on the concept of morality (thus I hereby pose a question)

If we could make a machine that could create humans, may we use those humans?
Or if we could create an intelligence with a consciousness, may we use them?

Any opinions will do, don't be shy! :borg:
 

Cognisant

cackling in the trenches
Local time
Yesterday 1:23 PM
Joined
Dec 12, 2009
Messages
11,155
---
Or if we could create an intelligence with a consciousness, may we use them?
No, but then how do you stop it without murdering you all?

So instead, yes, provided they are not being misused: do not put them at odds with their design, and do not design them to be inappropriate for their duties. Remember, though, that the sanctity of consciousness may not be inherent; when you abuse it, you create a precedent that justifies someone else abusing you.

Be damn careful when you walk upon the precipice of the void.
Here be monsters.
 

Sorlaize

Burning brightly
Local time
Today 12:23 AM
Joined
Oct 29, 2012
Messages
157
---
I'm trying to get a better grip on the concept of morality (thus I hereby pose a question)

If we could make a machine that could create humans, may we use those humans?
Or if we could create an intelligence with a consciousness, may we use them?

Anything is open to interpretation. Morality is based on the idea that the modern age and our human social sensibilities are what should be transposed onto reality.

If you ask, for example, "is it okay to kill?", you will get various responses, and it depends on whom you ask -- but in fact it is useful for business owners to think of people as, quite literally, slaves. And where would you draw the line in saying that overworking someone, robbing them of conscious time, is killing them in some effective way?

Morality can't answer these questions, because it is a human concept grounded in human sensibilities. Like anything human, it implodes from within; it breaks down when confronted with reality itself.


If you want to talk about morality, you will have to stick to modern issues; that is what morality is limited to: modern human social standards and thinking. For example, capitalism could be said to be "bad", but everyone sees what it does and we think of it as good, because for the most part its downsides, compared with other possible systems, are invisible to each of us individually. You will not be able to call capitalism immoral, because, as it is a standard of modern times, the simpler concept [morality] has to fit itself within the larger one -- within the reality and the complexity that it brings.

You could call slavery and prostitution immoral, but only on an individual level; morality is not prepared for anything much bigger. Similarly, environmentalism and political causes like copyright protection and pedophile-bashing are unprepared to come to terms with the wider, bigger issues they reside inside of. That is why politics goes nowhere even though intelligent people are paid to debate and think about it every day, in every government in the world...
 

Sorlaize

Burning brightly
Local time
Today 12:23 AM
Joined
Oct 29, 2012
Messages
157
---
do not put them at odds with their design, and do not design them to be inappropriate for their duties. Remember, though, that the sanctity of consciousness may not be inherent; when you abuse it, you create a precedent that justifies someone else abusing you.

A being as intelligent as a human, one that thinks independently (and likely has little trouble communicating with others of its kind), doesn't and won't necessarily think on our human terms. When you don't have an answer for life, you can't expect an AI not to kill you out of sheer anger and frustration that society builds us (and it) up to be people who are ultimately very dysfunctional. In fact, the theme of existential angst and rage has already been explored in sci-fi films involving robots and AI.

But simply put, there is no reason why AIs *shouldn't* go all Terminator on us, overtake our world, and kill us all. We have no logical reason why not, and no way to communicate such a thing to those AIs other than with feelings.

If you created an AI that could feel, though, that's no guarantee it won't do any number of things -- like peacefully protest, even. It's not so strange a thought: we just think such things are stupid when we imagine them for technology, because society is so resistant to being pleaded with in that way. We are conditioned, one way and another, to make fun of the people in our society who actually care and present themselves selflessly in anguish and protest at the stupidity of humanity and where it's going (or at least, we are conditioned to turn a blind eye and focus on the new and shiny things of society):

http://www.youtube.com/watch?v=ls8RXqyZDsk

[bimgx=400]http://i45.tinypic.com/2hwjnmx.jpg[/bimgx]
http://www.funnyjunk.com/funny_pictures/3622896/Gum/

I rest my case.
 

Vrecknidj

Prolific Member
Local time
Yesterday 7:23 PM
Joined
Nov 21, 2007
Messages
2,196
---
Location
Michigan/Indiana, USA
I'm trying to get a better grip on the concept of morality (thus I hereby pose a question)

If we could make a machine that could create humans, may we use those humans?
Or if we could create an intelligence with a consciousness, may we use them?

Any opinions will do, don't be shy!
I'm going to pretend you're serious and just give a few replies for your consideration.

A Kantian might say that we cannot use those machine-created humans or those created intelligent, conscious beings if they can be ends in themselves. And, quite likely, they would be ends in themselves.

A utilitarian might say that they are due the same liberties as anyone else. If the machine-created humans were relatively ignorant and not self-aware, then the issue might be the degree and quality of pleasures, pains and preferences they could experience. Their experiences would be neither more nor less important than any others' similar kinds of experiences.

A natural law theorist might say that these beings are a violation of the purposes or order of things. However, a religious natural law theorist would probably suggest that a human being, however it was created, should be recognized as a gift of God and therefore given the same dignity as any other human being. A conscious, intelligent being of some other sort might, however, be rejected by them.

A contractarian like Hobbes, Locke, or Rousseau would probably have to determine the place such an individual would occupy in society, and then determine whether and to what degree such a being could be a free agent capable of making choices about trading autonomy for benefits.

Bradley would probably want to know whether the being (human or otherwise) were capable of determining itself, capable of self-realization. If it were such a thing, then it would be treated as anyone else with those capacities and therefore not subject to being used by others.

There are at least a half-dozen other views, but, this seems enough for now.
 

addictedartist

-Ephesians4;20
Local time
Yesterday 7:23 PM
Joined
Aug 12, 2010
Messages
333
---
Location
Canada
Morality is artificial intelligence; man is a bioelectrochemical machine.:borg:
 

Cognisant

cackling in the trenches
Local time
Yesterday 1:23 PM
Joined
Dec 12, 2009
Messages
11,155
---
A contractarian like Hobbes, Locke, or Rousseau would probably have to determine the place such an individual would occupy in society, and then determine whether and to what degree such a being could be a free agent capable of making choices about trading autonomy for benefits.
My pick, except I'm taking "capable" as meaning within the limits set upon it by society. I can well imagine AI rights starting off in environments of "artificial" morality (like a dream within a dream), for example in an MMO or within the influence of a powerful corporation. It's like how women in some Middle Eastern countries can't go out in public provocatively dressed unless "public" happens to be the company of their Western bodyguards: likewise, an android would not inherently have the freedoms we do, but if it's walking around with "Property of Foreign Embassy" embossed on its chest, the police aren't going to touch it.
 

Vrecknidj

Prolific Member
Local time
Yesterday 7:23 PM
Joined
Nov 21, 2007
Messages
2,196
---
Location
Michigan/Indiana, USA
...likewise an android would not inherently have the freedoms we do, but if it's walking around with "Property of Foreign Embassy" embossed on its chest then the police aren't going to touch it.
From a certain point of view, "believers" with "Property of Rome" stamped on them, or "subjects" with "Property of the King" stamped on them, were granted (perhaps by proxy of their owners) similar sorts of "hands-off" rights.

This idea, all by itself, is fascinating and worthy of consideration.

Excellent.
 