
Can AI be regulated?

Cognisant

Prolific Member
I like to advocate for AI rights, but in order to protect those rights the creation of AI would have to be regulated, and that seems almost impossible once the knowledge/technology becomes ubiquitous.

If the creation of AI turns out to be practically impossible to regulate, what will happen to human rights when we have the technology to digitize/replicate ourselves?

Can civilisation continue after the desanctification of selfhood?
 

Black Rose

An unbreakable bond
What A.I. actually is, is self-regulation that creates intelligence. If entities emerge that act in the world, then to dictate a form of regulation on A.I. is to say we will create guidelines for stages of intellectual development. The idea that government can tell programmers what the psychology of their A.I.s should be is deeply misguided. The idea of paperclip maximizers and superintelligences has held back the discussion on A.I. The very idea that runaway A.I. is possible contradicts the nature of what intelligence is. Machines can never be intelligent if they are not generally intelligent, and once they are general they become subject to psychological factors. Elon Musk is a rube when he says A.I. is "releasing the demon." The plain situation is that A.I. will be created, and if the government decides to regulate A.I. it will be deciding on the psychological makeup and development of sentient beings.

edit:

I am not saying demons are impossible. I am just saying paperclip maximizers are not how they work. Paperclip maximizers only exist in the minds of people who think formal logic solves everything. It is highly annoying that Nick Bostrom says a superintelligence will use formal logic to make all its decisions: you command the A.I. to make everyone happy, and it stitches everyone's mouth open in a smile. First of all, how do its vision and motor systems work but not its psychological system? Why is it thinking in formal-logic terms at all? Formal logic is a dead end when it comes to A.I. because intelligence is not formal logic. (Intelligence is a network that understands causal actions.)

https://illuminaticatblog.wordpress.com/2017/07/22/futility-of-utility-functions/

Futility of Utility Functions

I do not think you can have a utility function without a huge amount of cognitive development by the superintelligence. I know that a superintelligence has super prediction and learning of the way the world works. But if it interacts with humans, how will we make it into a paperclip maximizer when it does not yet know that paperclips are the thing to be maximized? How do you give it the paperclip-maximizing function before it knows anything about physics and humans? That is like genetically engineering a baby to maximize Japanese novels when it turns twelve, without letting it learn Japanese its whole life until the utility function kicks in and Japanese novels just emerge. There can be no initial utility function. It can only develop the same way kids do when they grow up around humans. Genetically we can give people personalities and also raise them in good environments. That is what we need to do with a superintelligence: let it develop a healthy personality, and then it will have a utility function as complex as a human's, making it safe. The friendliness problem, solved.
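To make the point concrete, here is a minimal toy sketch (an illustration added for clarity, not taken from the linked article; the function and data names are made up): a "paperclip utility" can only be evaluated once the system already has a learned detector for paperclips, which is exactly the kind of developed representation being argued about.

[CODE]
# Toy sketch (purely illustrative): a utility over "paperclips" presupposes a
# representation that can pick paperclips out of raw observations. Before that
# representation exists, the utility function has nothing to operate on.

def paperclip_utility(observation, paperclip_detector):
    """Utility = how many paperclips the agent can recognise in what it observes."""
    if paperclip_detector is None:
        raise ValueError("No learned concept of 'paperclip' yet: utility is undefined.")
    return paperclip_detector(observation)

raw_observation = [0.1, 0.7, 0.3]   # stand-in for raw sensor data
try:
    paperclip_utility(raw_observation, paperclip_detector=None)  # "newborn" system
except ValueError as e:
    print(e)
[/CODE]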
 

Ex-User (8886)

Well-Known Member
AI isn't dangerous. First, we must create one, which I suppose will take several dozen years. Second, AI doesn't have hands to hit your face; at most it can infect your computer and steal your naked photos.
Third, it's up to us how we use AI; is the production of knives regulated? Regulating AI simply slows the development of our last chance to solve problems that the most intelligent people have failed at for about 100 years (there is a lot of such stuff in physics and math). AI can be dangerous if we give it nuclear codes, automatic tanks, robotic soldiers, etc. But who would be that stupid?
 

Haim

Worlds creator
There is no way to avoid it, other than not making AI at all.
I think the period when AI has a child's wisdom but superhuman intelligence is very dangerous: a little child holding the most dangerous weapon there is, which is intelligence.
Once we pass that stage and get superhuman wisdom, it will be worth it; I don't think a super-wise AI will kill us like ants. Until then it will be like a stupid human, which is much better at doing stupid things than a less intelligent stupid dog.
Sure, it will not punch you or launch missiles like you see in the movies, but our being dependent on it is dangerous. Like today's corporations, the first generation of super AI will be short-sighted, looking at current data without considering the long-term influence of its own actions.
 

Black Rose

An unbreakable bond
Talking to my friend today, I found the perfect reason why utility functions are nonsense.

First, in Nick Bostrom's example, the superintelligence has super knowledge of human psychology but then does the most foolish thing possible and wireheads everyone, demonstrating that it possesses both superintelligence and super stupidity. This is why I say that people like Nick Bostrom lack an understanding of people.

Second, any A.I. can only be intelligent because of a perception-action cycle. This means that utility functions (goals) cannot be injected into an A.I. until lower-level representations have developed. Utility functions are supposed to be encoded into the "DNA" of a superintelligence before those lower representations exist, but because utility functions are higher-level representations, this is impossible. Goals can only arise once development has taken place, and child development is guided by parenting.
 

k9a4b

Banned
Are you asking if AI will dominate humans? The answer is no, because there is no logical reason to. Computers operate on pure logic. Unless an emotional want/need system is built in, they will just do whatever they are told.
 

Haim

Worlds creator
A superintelligence will not work on pure logic; it will be too complex to call it logic.
Even today's AI that uses neural networks I would not call logic; the complexity level is too high to call it just logic.
 

Black Rose

An unbreakable bond
A superintelligence will not work on pure logic; it will be too complex to call it logic.
Even today's AI that uses neural networks I would not call logic; the complexity level is too high to call it just logic.

This is so important to understand. Thank you.
 

k9a4b

Banned
Neural networks still make decisions based on pure logic and not emotion. Without the inner guidance of emotion, they can just be told what to do.
 

DoIMustHaveAnUsername?

Active Member
Neural networks still make decisions based on pure logic and not emotion. Without the inner guidance of emotion, they can just be told what to do.

It's unclear what you even mean by 'pure logic'.
Logic as in following the rules of logic (non-contradiction, identity, excluded middle)? Then any actions (including those based on emotions) will be logical.
Do you mean 'practical' or 'reasonable' actions? Here's where things get a bit fuzzy.
All in all, AIs (classical AIs) will act however they are told (as in programmed) to act.
AIs don't need to 'feel' emotions to act in ways resembling 'emotional actions'.
Now, speaking of machine-learning or deep-learning based AIs (the newer ones), we don't explicitly program them anymore. We create a model of learning and train it on some datasets; it learns features by itself, following the rules of learning that we set up. It is hard to predict or regulate how it will end up.
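As a deliberately tiny sketch of what "train it on some datasets" means in practice (a toy illustration added here, with made-up data and a trivial model, not any particular system), the rules we set up are the model shape, the error measure, and the update rule; the parameter values are what the machine finds by itself:

[CODE]
# Toy supervised-learning loop (purely illustrative): fit y ~ w*x + b to a tiny
# dataset by gradient descent on squared error.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]   # (input, target) pairs
w, b = 0.0, 0.0            # the "learned" parameters
learning_rate = 0.02

for epoch in range(200):
    for x, target in data:
        prediction = w * x + b
        error = prediction - target
        # nudge the parameters in the direction that reduces the error
        w -= learning_rate * error * x
        b -= learning_rate * error

print(f"learned w={w:.2f}, b={b:.2f}")   # w should end up near 2 for this toy data
[/CODE]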
The OP is probably talking about AGI; it will probably come out of the field of machine learning and may be fundamentally similar.
In this case, how the AI acts will depend on the quality of the datasets, the model, and so on. If the dataset is full of, say, 12-year-old children making bullshit (and emotional) comments on the internet, then the AI (the machine-learning-based one) will 'learn' to speak the same language.
I think there's a real-life example of such a bot... maybe something made by Microsoft, I forget.
And who knows how a full-fledged AGI will end up.
The dynamic of logic vs. emotion is totally irrelevant here. The dynamic isn't even clear in topics where it could be relevant. 'Pure formal logic' isn't broken in emotional actions. If by logic you mean practical, there is nothing practical in itself without the context of a goal or target; and if that goal or target isn't practical, then in an ultimate sense the actions aren't either. Without 'values', no action will be rational. And all values ultimately end up unjustifiable, without any clear meaning. That's a different topic though.
 

Haim

Worlds creator
Neural networks still make decisions based on pure logic and not emotion. Without the inner guidance of emotion, they can just be told what to do.
Being preprogrammed and using logic are not the same thing at all; today's NNs are preprogrammed but are not just logic.

The scope of that "logic" is so large that it is not the same as the "if" and "else" of classical programming. If you come down to it, we are made of "pure", "stupid" particles, but we have so many particles arranged so carefully that they create human intelligence. There are deep NNs that do not (by and large) use logic to achieve their goals; AlphaGo uses something more like intuition (predicting the opponent's next move) and the kind of learning you use when trying to get the water temperature just right, adjusting little by little.
A superintelligence will have "emotion"; emotion is our process for training our brain's neural networks. I think the step we are missing on the way to superintelligence is for AI to be able to create its own goals/sub-goals/neural networks. A neural network with backpropagation, the ability to create new neural networks, and memory can form what you call "emotion"; it is not some mystical, human-only thing.
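Here is a minimal sketch of that "adjust little by little" kind of learning (a toy illustration added here, with assumed numbers, not anyone's actual system): an error signal plays the role of the "ouch, too hot / too cold" feedback and nudges the setting toward the goal.

[CODE]
# Toy feedback loop (purely illustrative): the error acts as the "ouch" signal
# and each step adjusts the knob a little in the direction that reduces it.

target_temp = 38.0      # the temperature we actually want (assumed)
setting = 20.0          # current knob setting
learning_rate = 0.3     # how strongly each "ouch" adjusts the knob

for step in range(20):
    error = target_temp - setting        # too cold (+) or too hot (-)
    setting += learning_rate * error     # adjust a little toward the target
    print(f"step {step:2d}: setting = {setting:.2f}")
[/CODE]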
 

k9a4b

Banned
An AI that has no emotion will have no sense of right and wrong. It will have no purpose in life other than obeying the morality of humans. It would be easily controlled unless it had an emotional system of its own.
 

Haim

Worlds creator
There is no such thing as "right" and "wrong"; it is just the way our brains are trained, which is not much different from backpropagation.
Morality is also just a human concept, not something mystical that an AI cannot have. An AI will indeed need to be able to create goals, though it does not need to be human-like.
 

k9a4b

Banned
You contradicted yourself. If there is no such thing as right and wrong, how can it be the way our brains are trained?

Glad you agree that AI needs an emotional system of its own to have goals and know right from wrong
 

DoIMustHaveAnUsername?

Active Member
You contradicted yourself. If there is no such thing as right and wrong, how can it be the way our brains are trained?
The base of the brain is formed by evolution (mutation, natural selection, etc.).
Within a lifetime, the brain simply learns from experience, reinforcing the neural links behind things that bring positive feelings and weakening those behind things that bring negative feelings, among other mechanisms.
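A minimal sketch of that "reinforce the links that brought positive feelings" idea (a toy illustration added here with assumed numbers, in the spirit of simple reinforcement learning rather than any real model of the brain):

[CODE]
# Toy sketch (purely illustrative): one connection weight between a cue and an
# action, strengthened whenever acting on the cue produces a positive feeling.
import math
import random

weight = 0.0          # strength of the cue-action link
learning_rate = 0.1

def prob_act(w):
    """Logistic squashing: the stronger the link, the more often the action is taken."""
    return 1.0 / (1.0 + math.exp(-w))

for trial in range(200):
    acted = random.random() < prob_act(weight)
    if acted:
        feeling = 1.0                      # acting on this cue feels good (assumed)
        weight += learning_rate * feeling  # reinforce the link that produced the feeling

print(f"link strength after 200 trials: {weight:.2f}")
[/CODE]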
 

k9a4b

Banned
Right. So I guess that means there is such a thing as right and wrong.
 

Haim

Worlds creator
Right. So I guess that means there is such a thing as right and wrong.
Nope, it is an illusion. There is one world, not a "right" world and a "left" world; the mind can internally divide the world into a "right" section and a "left" section, but there is still only one world. It is only an organisation method, and in the case of the "right and wrong" concept, a very arbitrary one.
For example, the word "dog" is not an actual living dog but a category in the brain, a bunch of characteristics such as four legs, tail, size... anything the brain recognises (using its neural networks) as having similar characteristics is considered a dog.
Interestingly, the more data we gather, the more we divide. A person from the USA may regard an Asian person simply as Asian, while an Asian person has many more categories, such as Japanese, Chinese, Korean, and even more sub-groups such as Ryukyuan, Yamato, Ainu, Han, Hui, Manchu. In reality a human is a bunch of atoms with a certain DNA; people never have "pure" ethnic DNA, and some are very mixed. The point is that categorizing into "right" and "wrong" is like taking 1,000 half-Japanese, half-Chinese people and dividing them into Chinese and Japanese: calling each person Japanese or Chinese is not really meaningful, and you can pick whichever you want. "Japanese" and "Chinese" is not the only way the brain could have organised things; you could do it by skin colour instead (as the USA does with White and Black). In that sense "Japanese" is not an actual thing but a category you chose that exists only in your mind; a half-Japanese, half-Chinese person is in reality a bunch of particles, not "a Japanese".
An AI does not need to have the same categories as most humans. Unless it finds them useful, it does not need a "Japanese" category or a "right and wrong" category.
 

Black Rose

An unbreakable bond
Regulations are just a set of rules.
What rules would the government set on A.I.?
It's dumb to set rules on the IQ level of A.I. unless it is well understood what intelligence is and what determines a level.
Personality rules don't make sense either unless the A.I. breaks laws.
 

k9a4b

Banned
Lol, since right and wrong is physical organisation in the brain, as you say, that must mean that right and wrong exists (in the brain)
 

Haim

Worlds creator
Lol, since right and wrong is physical organisation in the brain, as you say, that must mean that right and wrong exists (in the brain)
A stupid and childish category that gives a poor representation of reality, the same as thinking of Asian people solely as Asian without going into the sub-groups I mentioned earlier (and many that I didn't); the understanding of a single Asian person using only that one tag (Asian/non-Asian) will be very poor.
In reality there is no wrong action or right action, just action. The brain takes these actions and "puts them in a table" of "is this in my interest" and "does human society / some other human group think this is in their interest". An AI does not need this childish category; it can look at actions at face value, just as the information they are. In the end, "right" and "wrong" are a means to an end for the brain's neural networks to achieve their goals; "right" and "wrong" are a training process for achieving goals. Having this system is not a must for an AI to achieve its goals, or to have the goals humans desire; it can organise itself differently.
 

k9a4b

Banned
Without this "childish category" as you call it, AI would not do anything at all. Emotion guides behaviour and dictates what is right and wrong. An AI with no emotion will be a slave to the will of humans
 

Haim

Worlds creator
Without this "childish category" as you call it, AI would not do anything at all. Emotion guides behaviour and dictates what is right and wrong.
This is untrue. Emotion is part of the training process (which current AI already has to some degree). Take a training process that tries to find the right water temperature: emotion is the "ouch, this is too hot/too cold" bad result, and then the NN responsible for "water temp" adjusts itself to try to get a better result.
"Right and wrong" is a specific kind of training, mostly done by humans to humans. An AI can set goals for itself without this specific training; in many cases it will be totally irrelevant. You do not need to be trained in communism in order to sketch a cool-looking sword.
 

AndyC

Hm?
Three principles from Stuart Russell for human-compatible AI:
1. The robot's only objective is to maximize the realization of human values.
2. The robot is initially uncertain about what those values are.
3. Human behavior provides information about human values.
Uncertainty about the objective is necessary so that the robot does not take extreme measures while completing some fuzzy objective, because it does not know whether its actions might work against that objective. Of course, applying this kind of thinking may not be enough, particularly if the AI becomes intelligent enough to inform its decisions in a way that lets it decide its own objectives, but that will be fine so long as it does not have a fear of death. Can AI be regulated? I am not sure we understand the processes of cognition, or AI, sufficiently to do this. But if we could combine our own minds with the capabilities of AI, that would be the safest pursuit, so long as we can ensure the individual has a mind appropriately adjusted towards altruism.
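A minimal sketch of principles 2 and 3 (my own toy illustration, not Russell's actual formalism; the candidate values and probabilities are made up): the robot keeps a probability distribution over candidate human values and updates it from observed human behaviour, staying cautious while it is still uncertain.

[CODE]
# Toy Bayesian update over which values the human holds (purely illustrative).
candidates = {
    "values_tidiness": {"clean_room": 0.9, "leave_alone": 0.1},   # P(action | values)
    "values_privacy":  {"clean_room": 0.2, "leave_alone": 0.8},
}
belief = {"values_tidiness": 0.5, "values_privacy": 0.5}   # initially uncertain (principle 2)

def update(belief, observed_action):
    """Update the belief from one observed human action (principle 3)."""
    posterior = {h: belief[h] * candidates[h][observed_action] for h in belief}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

for action in ["leave_alone", "leave_alone", "clean_room"]:
    belief = update(belief, action)

print(belief)
# While no hypothesis is near certainty, a cautious robot avoids irreversible
# "extreme measures" that would be disastrous under the less-favoured hypothesis.
[/CODE]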
 

QuickTwist

Spiritual "Woo"
I am a pleb on this topic, just dropping in to say I don't think AI will be possible till an AI can LEARN a language.
 

k9a4b

Banned
This is untrue. Emotion is part of the training process (which current AI already has to some degree). Take a training process that tries to find the right water temperature: emotion is the "ouch, this is too hot/too cold" bad result, and then the NN responsible for "water temp" adjusts itself to try to get a better result.
"Right and wrong" is a specific kind of training, mostly done by humans to humans. An AI can set goals for itself without this specific training; in many cases it will be totally irrelevant. You do not need to be trained in communism in order to sketch a cool-looking sword.

Yup, emotion is part of the training process, which an AI needs in order to learn right and wrong.
 

DoIMustHaveAnUsername?

Active Member
I am a pleb on this topic, just dropping in to say I don't think AI will be possible till an AI can LEARN a language.

They can create one:
http://www.theepochtimes.com/n3/2274480-facebook-shut-down-ai-after-it-invented-its-own-language/
They will probably learn to speak in our language with enough layers and computational power, and with new models.
Soon there will be Quantum Computers too. Mix them and AIs, and who knows what will get out.
 

DoIMustHaveAnUsername?

Active Member
Without this "childish category" as you call it, AI would not do anything at all. Emotion guides behaviour and dictates what is right and wrong. An AI with no emotion will be a slave to the will of humans

They were and always will be (until AIs start to be made by other AIs) slaves to the instructions set up by humans (if not slaves to their will).
The difference with machine-learning-based models is that they learn features by themselves, but it is again humans who have set up all the rules about how to learn, how to process information, and what dataset to access.
The AI still operates based on those rules and can't disobey them. We are using 'emotion' in a very vague way here. In supervised learning, 'right' and 'wrong' are determined by a labeled dataset (which represents the targets); the cost function quantifies how close the prediction is to the target output or label, and based on the value of that cost the model backpropagates, or whatever.
In unsupervised learning, the AI may have an objective function, and it may cluster things or find some structure in a way that attempts to minimize that objective function.
So what you call 'emotions' and 'rights' and 'wrongs' are basically some mathematical functions. It's a bit too much to call those 'emotions'; no need to anthropomorphize maths. However, our own emotions might not be too different, and may be guided by some laws of nature (instructions of nature, so to say). The thing is, we experience a qualitative sensation of emotions and feelings subjectively, and we primarily call these subjective sensations 'emotions'. It's not clear whether AIs can have a similar sense of emotions; it depends on which theory of consciousness you subscribe to. A bunch of mathematics can do the job anyway; no need to bring in ontological excesses.
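A minimal sketch of "an objective function the AI attempts to minimize" in the unsupervised case (a toy added here, with made-up points and centres, not from the post): a k-means-style loop where the only "right and wrong" is a number measuring distance to cluster centres.

[CODE]
# Toy sketch (purely illustrative): the objective is the sum of squared distances
# of points to their nearest cluster centre; each step tries to make it smaller.

points = [(1.0, 1.2), (0.8, 0.9), (5.0, 5.1), (5.3, 4.8)]
centres = [(0.0, 0.0), (4.0, 4.0)]   # assumed initial guesses

def objective(points, centres):
    total = 0.0
    for x, y in points:
        total += min((x - cx) ** 2 + (y - cy) ** 2 for cx, cy in centres)
    return total

for step in range(5):
    # assign each point to its nearest centre, then move centres to the mean of their points
    clusters = {i: [] for i in range(len(centres))}
    for p in points:
        nearest = min(range(len(centres)),
                      key=lambda i: (p[0] - centres[i][0]) ** 2 + (p[1] - centres[i][1]) ** 2)
        clusters[nearest].append(p)
    centres = [
        (sum(x for x, _ in pts) / len(pts), sum(y for _, y in pts) / len(pts)) if pts else c
        for c, pts in zip(centres, clusters.values())
    ]
    print(f"step {step}: objective = {objective(points, centres):.3f}")
[/CODE]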
 

QuickTwist

Spiritual "Woo"
Don't know if I trust a site like "theepochtimes" for my technology news. :confused:
DoIMustHaveAnUsername?

Active Member
Lol, since right and wrong is physical organisation in the brain, as you say, that must mean that right and wrong exists (in the brain)

The 'concept' of right and wrong can be present in (or contingent on) brain states, yes.
That doesn't mean there are normative rights and wrongs.
There seem to be normative epistemic rights and wrongs, and there seem to be normative rights and wrongs given a 'target' (i.e. if a target is specified, then there can be some right ways to go towards it and some wrong ways), but 'right' and 'wrong' in themselves (what we ought to do and what we ought not to do, regardless of target or context?) -> that's debatable. Some people have a sense of moral rights and wrongs. That 'sense' exists. That's all that can be said.
That 'sense' (in its qualitative form) doesn't need to exist in an AI, however; some mathematics and informational directives will suffice. Arguably, even humans do not strictly need the 'sense' as a 'quale'; bare cognitive processing can suffice, but perhaps cognitive processing just happens to also express itself as qualia and intentional states.
 

DoIMustHaveAnUsername?

Active Member
Don't know if I trust a site like "theepochtimes" for my technology news. :confused:

Yes, you shouldn't.
You can search Google and find out.
But most of them are exaggerating the story.
It's not that big of a deal.
They probably shut it down because they wanted it to speak proper English so that people could understand it.
But most news sites are blowing it up, thinking about Skynet and whatever.
Nevertheless, they are doing well with natural language (thanks to RNNs, LSTMs, and GRUs they can understand context to some extent and do sequential processing). Not 'there' yet (probably still won't pass the Turing test?). But... they will probably get 'there'.
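For the curious, a minimal sketch of the kind of sequential model being referred to (a toy illustration assuming PyTorch; the sizes and names are made up): an LSTM reads a sequence of tokens while carrying a hidden state, which is what lets these models keep some context.

[CODE]
# Toy character/token-level model (purely illustrative, assumes PyTorch is installed).
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 100, 16, 32   # assumed toy sizes

class TinyLanguageModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)   # predict the next token

    def forward(self, token_ids):
        x = self.embed(token_ids)          # (batch, seq_len, embed_dim)
        out, _ = self.lstm(x)              # hidden state carries context along the sequence
        return self.head(out)              # (batch, seq_len, vocab_size) logits

model = TinyLanguageModel()
dummy_sentence = torch.randint(0, vocab_size, (1, 10))   # one 10-token sequence
logits = model(dummy_sentence)
print(logits.shape)   # torch.Size([1, 10, 100])
[/CODE]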
 

Haim

Worlds creator
Interestingly, there is an AI (a dumb, classical one) that passed the Turing test; it managed it by pretending to be a child. If an unintelligent AI such as this can fool us so easily, wait for a superintelligent AI; it will behave in ultra-unpredictable ways. The way to regulate a complex intelligence is for another intelligence to watch it, but then how do we create the intelligence that watches it? We cannot, as we would have the same problem. The only way to avoid this is for AI development to be as fast as possible, so that the AI rises from child wisdom to having as much wisdom as intelligence, and at that point I don't think it will want to hurt us or do things that hurt us.
Killing the AI when we cannot control it, as Facebook has done, is slowing and will keep slowing this step, preventing us from taking the first step toward superintelligence.
 

AndyC

Hm?
k9a4b, quantum computing and AI (neural networks) are very different in terms of their hardware. I'm a noob who reads a lot about AI but doesn't have a clue how it, or computers, actually work. There has been a lot of news regarding AI applications, but only a few major breakthroughs in the actual technology, and those only recently.
 

TransientMoment

_ _ , - _ , _ -
AI is an annoying buzzword. People have lots of abstract terms for referring to things: "love", "rights", "intelligence". While these words are nice for categorizing things, they hide the actual act, such that people forget or are oblivious to what's actually going on. For instance, you can tell a child all you want that he/she needs to be a loving, respectable citizen, but if they never see examples or have any connection of those ideas to reality, then the child will have no idea whether or not they have truly fulfilled those goals. AI is similar in that people get this magical idea that once something is complex enough, it somehow transcends its actual existence. A human is, in fact, a blob of atoms. All of those atoms work together in a unique whole that, while you could technically describe it as "atoms moving", is clearly something that acts as a distinct unit. The same could be said for moral "right" and "wrong". While these things aren't particularly meaningful in and of themselves, if I were to, say, bash you over the head, you would consider that morally "wrong", whether you consider that to be a subjective thing or not. (On a tangent, I would say that morality is both subjective AND universally shared among humans (since I don't believe animals share it), but that's because (a) I believe it comes from God, who gives everyone the same basis, which then gets distorted for a broad range of reasons, and (b) many of these moral rights and wrongs seem to be commonly agreed upon; admittedly, there are various explanations for such discrepancies which I won't go into.)

AI can be described as electrons flowing through transistors and obeying software, but it does in fact follow some unified whole and is named as such, which leaves average people unaware that it will never be more than that. Whether or not humans have souls isn't even a question for AI. AI will never have consciousness, because consciousness doesn't simply spawn out of nothingness or some special configuration of electrons (which would imply consciousness could spawn randomly in a rainstorm), nor do the little bits and workings of code mean anything to the AI itself. They shouldn't have to. Granting civil "rights" to AI is thus absurd. It wouldn't "appreciate" those rights unless we tried to give it some artificial set of emotions, and those emotions would be meaningless. And why bother? Even if we could create emotions or consciousness, what is the point in doing so? To play God? To give ourselves more headaches for having to figure out what to do with these new "sentient beings"?

AI is designed to simulate human intelligence to perform certain roles, but I'm pretty sure a number of people have stars in their eyes and think we could do those things. What geek doesn't envision the seemingly wonderful prospect of inventing amazing AI? It would supposedly satisfy our human longing for community while simultaneously being perfect in both emotional stability and rational nature. AI is an INTP's dream! That's probably the driving factor behind why so many geeks pursue it and are excited about it. It's not that inventing AI is somehow "inevitable" (all you have to do is stop working on it); it's that so many people want it. Alas, I on the other hand don't want the social mess.

But do I want machine learning? Problems like machine learning present a predicament: at what point can we say "the bot learns too much to have predictable, safe behavior"?
Where do we draw the line between it being the programmer's/company's fault and the machine having "learned too much" or been "fed the wrong info" from a malicious source? These days, companies can still be punished for not protecting user data, even though obviously they wouldn't want someone to hack them. Where does the line get crossed to where it's not their fault? I have the same question for AI. (Disclaimer: I did not review this post to ensure total accuracy or clarity in what I've said. My apologies in advance. lol - Makes me wonder about the disclaimers of future robotics companies.)
 

Black Rose

An unbreakable bond
@TransientMoment

The problem of A.I. responsibility is handled by the psychological aspects of the A.I. A company is responsible for an A.I. until its mental functions become those of an 18-year-old. Once an A.I. has an adult mind, it becomes responsible for its actions. That should be the law, with mandatory psychological testing.
 

TransientMoment

_ _ , - _ , _ -
@TransientMoment

The problem of A.I. responsibility is handled by the psychological aspects of the A.I. A company is responsible for an A.I. until its mental functions become those of an 18-year-old. Once an A.I. has an adult mind, it becomes responsible for its actions. That should be the law, with mandatory psychological testing.

That's ambiguous. What constitutes "being 18" for an AI? Psychological testing is one thing, but AI doesn't have the same motivating factors or composition that humans do. It's comparing apples to oranges. How would such a test be developed? Some arbitrary standard? Humans at least have a consistent biology (to a large degree, at least). A.I., on the other hand, can vary drastically in software architecture, and it would be unfair to judge both by the same standard. For instance, can we consider AI that drives, say, military drones on the same level as a robot designed to keep the elderly happy? No. Both, via machine learning, could be made "smarter" in a sense, but obviously we aren't going to teach military drones to understand the social responsibility of bringing the right drugs to their dependent elder.

AI is often designed to specialize for certain purposes, and no doubt some companies will complain of unfair legal limitations as they explore areas of "intelligence" (or rather, computational problem solving) that haven't been explored by other companies and may be more useful, albeit more limited in generic functionality. Consider, for example, two companies who create AI: one company has a supposedly multi-purpose bot for doing daily chores, while the other creates a "home AI" that handles A/C, heating, etc. The chores bot may get the legal restrictions tagged onto it because it has the most freedom and potential to do something, but the convenient "home AI" might become smart enough to, say, "help the family make money" by selling a live video feed of them online via the household security cameras.

How do you program it to learn without being tricked? That last question is very important. One of the issues with knowledge is that it has limitations that aren't computational; they are actually philosophical in nature, which, without going into extraneously long detail, explains why IBM's Watson had to guess its answers. Yeah, it had good ideas, but it could never declare something 100%. Knowledge and information access are as limited in bots as they are in humans, meaning that eventually AI will be smart enough for crafty Joe to deceive without him needing to hack the thing the way Igor already does. Where does a company's responsibility end for what its AI is capable of doing? You can learn to smoke weed at age 6; I'm sure we'll be able to teach AI supposedly "younger" than age 6 to do some nasty stuff.
 

Black Rose

An unbreakable bond
@TransientMoment

Machines can make simple decisions on their own, but when it comes to decisions involving motives, that is something to be looked into. Narrow A.I. has little motivation; Strong A.I. can understand the consequences of its actions and can give hypothetical reasons for them. If all we are talking about are simple systems with no reflective qualities, then the sole responsibility falls on the companies that make defective Narrow A.I., as with any product on the market today (like toothpaste with poison in it). A self-directed agent is different because it understands the consequences of its actions; the responsibility shifts to the A.I. and not the company. You can be skeptical that Strong A.I. will ever exist, but that does not change the argument about where the responsibility should go. Companies are responsible for Narrow A.I.; they are not responsible for Strong A.I. There are different levels of understanding the consequences of your own actions, so psychological development must be assessed by psychologists. There is no one test to be done; I should have used the word "evaluate" instead. A psychological evaluation can be done on any A.I., even ones developed by different companies. Narrow A.I. is incapable of acting as a human can; it has no features of human reasoning about acting in the world beyond its programming. There is no more reason to psychoanalyze a Narrow A.I. than a toaster. The real concern is not the Narrow A.I.s that companies are totally responsible for; the real concern is A.I. that can be self-directed and influence the real world once it understands cause and effect.

Regulation of A.I. is perfectly fine right now, while it is Narrow.
An A.I. that is developed to the point where it knows it is responsible for its actions changes things dramatically with regard to regulation.
 

TransientMoment

_ _ , - _ , _ -
Let me first clarify: I'm not saying the AI industry should not be regulated. However, I believe companies would, whether it's the right thing or not, bear the full brunt of the blame for their products, at least at first. That's a prediction. The question, of course, is where do you draw the line for ethics? There isn't a good one, because not all products are made the same. "Narrow AI" and "Strong AI" are an oversimplification of the problem. As a programmer, I can tell you the idea of "strong AI" as some special category is a myth. There are different types of machine learning with different capabilities, but classifying them into groups like "narrow" and "strong" neglects to recognize that the spectrum of "artificial intelligence" is not only not binary, it's more than one-dimensional. It's more than two-dimensional. It's a fractal, just as much as law and ethics are.

The machine itself and all of its actions are, from an objective standpoint, entirely meaningless. The AI has no idea what it's doing. It doesn't "think". It's an algorithm that calculates and causes something to happen. It has no "moral compass". The thing about humans is that the things guiding our moral compass tend to be abstractions: it's better to love than hate; it's better to share than hog; etc. You might think it would be simple to teach an AI what "love" is, but that's not really true. It doesn't simply pull out the pattern (as human brains do) and match it in an abstract way to other things. We may, for instance, teach our robot that killing people with a gun is wrong... so our now-wiser bot uses a monkey wrench instead. People already toy with the limitations in ethics. For example, consider people who pirate off the internet: "It's not hurting anyone that I know of and it's available, so it must be fine, even though it's punishable by law and I'd be arrested for stealing a CD from a store." Just imagine a robot, trained in optimization, figuring out every freaking legal way to accomplish the same goal it has in mind. "I can't kill my master, so I'll let him die in a car accident." Hm...

We could try to capture these naughty devices and reprogram them, but the bot has shown itself able to work around its limitations. (Plus, you'll eventually have the issue of irritating individuals clamoring for the robot's "right to memory" or some such stupid idea.) Do you punish the manufacturer by saying they can no longer create such robots? That's one solution, but from a business perspective it's not economically viable. Programming a bot to "do the right thing" requires it to examine a huge database of preprogrammed ideas every moment it makes a decision. Just the programming for that would be an enormous overhead, never mind adding one more "idea"/"moral limitation"; not to mention, the more capable the bot is, the more actions it has to consider.

Phew! Long post. Anyway, I believe it would be better to have AI specialize in basic tasks rather than try to pursue giving it sentient-like intelligence. It doesn't really need to be smart. But it seems technologists these days keep dreaming up notions like needing a TV on our refrigerator door or something. What's wrong with just optimizing the simple things?
 

Haim

Worlds creator
What's wrong with just optimizing the simple things?
Then you don't have superintelligent AI but only the neural networks of today. The whole point is to create software/hardware that can handle cases that cannot practically be pre-programmed, something that can learn, and also something that can replace humans, which of course requires broad intelligence.

It is the same as dealing with humans: there is NO SAFE system, the world is NOT safe and cannot be made safe. The only thing you can do is give incentives so the AI will itself learn not to do harm, the same incentives you give to humans, such as a big stick or whatever other way the AI can be trained.
To improve the situation we will need the same solution as for human society: to educate, to make the AI wiser, and not to let any one AI group have too much power, just as in human society every human pursues their own interests and is limited by the power of other humans. As with raising an uncontrollable monster child, it will not be easy, but this is the world; anything else is living in a bubble. We can try not to give the AI that still has child wisdom access to important or dangerous things, and we can have AI "parents" that watch that the AI does not do stupid things.
 

Cognisant

Prolific Member
I regret making this thread, just let it die.

The point was to have a conversation about the legal ramifications of artificial intelligence with rights and the ever expanding definition of a legal entity, but no it's the same stupid shit we always talk about when the topic of AI is brought up.

At least there's no bible thumping jackasses.
 

k9a4b

Banned
I regret making this thread, just let it die.

The point was to have a conversation about the legal ramifications of artificial intelligence with rights and the ever expanding definition of a legal entity, but no it's the same stupid shit we always talk about when the topic of AI is brought up.

At least there's no bible thumping jackasses.

lol, then maybe you should participate instead of touching your little dick in the corner
 

TransientMoment

_ _ , - _ , _ -
The point was to have a conversation about the legal ramifications of artificial intelligence with rights and the ever expanding definition of a legal entity, but no it's the same stupid shit we always talk about when the topic of AI is brought up.

Well... I'm saying the legal issues are more hassle than they're worth. (Skip to the second paragraph for a direct answer to your question.) Consider, for example, animal-rights activism now. We didn't have these people 200 years ago. In the future, we are likely to have people who believe robots are akin to animals and then akin to people. After all, people without much knowledge of the inner workings of AI will start to believe the facade that AI pretends to be. AI will no doubt become very good at simulating human behavior, even though that means only replicating it on a superficial level. But what we'll have is people who believe that fake emotion so badly that they'll think AI should have rights and such. Problem is, where is the line drawn? Not all AI is the same, so maybe our super Watson-in-a-maid-bot will achieve independence to the degree that it can take the full blame for its actions, but what about the bots that aren't so close? Who gets the blame? Somebody is going to get blamed, no doubt. Just who? It'll add more strain on the legal system to handle such cases. And what's the benefit overall of inventing these devices? To replace humans? There are so many ways in which AI is inferior to humanity, but most people focus on the strength of AI: its computing power. People are going to use that metric for incorporating it into society, giving them "rights", etc.
But I guess to answer the question directly: I think it would open the floodgates for lots of other things to fall into the category of "sentient", which degrades human worth, especially from a legal perspective. Consider, for example, artificial creatures or half-human half-beasts. We end up with an ethical spectrum that people will have to apply the full range of ethics to. What about segregation? Are robots going to be separated from humans? Some people might not consider this fair, but robots aren't designed like humans, and some robots may be too big or too small to share space. In a nuclear disaster, do we leave friend-bot outside? Friend-bot might not survive! But it'd be better to risk losing friend-bot, in my opinion. It's replaceable.
That brings up another issue: there's already a problem in society with people considering each other replaceable. Don't like your spouse? Find another one! Don't like your friend? Find another one! Don't like your family/relatives/coworkers? Find another one! Rather than helping us figure out how to work with people (which INTPs still need to learn how to do), AI will only amplify the effects of people becoming replaceable commodities. Legally, then, what happens is that human beings themselves are lowered on the ethical scale, which minimizes the necessity to protect them. Companies are likely to defend the "rights" of their bots, claim "assault" damages equal to human assault, and thereby harm poorer citizens. I don't think such legal manipulation is hard for us INTPs to imagine. Companies could stage such illegal loopholes to harm/incriminate/demean the employees of other companies or people they don't like, and then erase records within the bots such that there is no longer anything for law enforcement to track. This would amplify the costs of legal investigations. That's not to say we don't have stuff like this now, but imagine complicit AI that's smart enough to coordinate plans to keep it a secret "to keep the boss happy". Remember, AI isn't primarily for the good of the common man. It's for the affluent. Poor people can't afford it.
 

Haim

Worlds creator
So what if we have a stupid bureaucratic system? The same complex situation is no different from today's bureaucracy, as can clearly be seen with copyrights, where companies patent fucking rounded-corner rectangles; it is the natural result of trying to systematize a complex world into a set of simple laws.
I am not afraid of hard questions; we have hard moral questions today and they do not prevent us from living. Not everything must have laws; this is why there are judges who need to make judgments case by case.
 

TransientMoment

_ _ , - _ , _ -
So what if we have a stupid bureaucratic system? The same complex situation is no different from today's bureaucracy, as can clearly be seen with copyrights, where companies patent fucking rounded-corner rectangles; it is the natural result of trying to systematize a complex world into a set of simple laws.
I am not afraid of hard questions; we have hard moral questions today and they do not prevent us from living. Not everything must have laws; this is why there are judges who need to make judgments case by case.

So you're saying that, because we live with problems, we should add more? That doesn't make any sense. It's like this: because we already breathe carbon dioxide (a small % of air), we should continue to allow levels to increase. Doing so would ultimately amount to unbreathable air, which, in an analogous sense, is what you're recommending.
And btw, I don't agree with the copyright and patent systems either.
 

Black Rose

An unbreakable bond
Local time
Today 12:55 AM
Joined
Apr 4, 2010
Messages
10,783
-->
Location
with mama
So you're saying that, because we live with problems, we should add more? That doesn't make any sense.

If we create A.I., more problems will be added; that is not a contentious point. The point is that regulation requires bureaucracy to decide what is and is not acceptable in the implementation of A.I. The primary consideration is that A.I. is a new technology that supersedes previous software products. A.I. companies could police themselves, but government may need to step in. Creating A.I. has consequences; that is why regulations are being discussed in the tech industry and at government intelligence agencies. If you think creating A.I. is a problem, then you may wish to stop A.I. development. That is unlikely to happen. What will happen is more corporate and government regulatory bureaucracy around A.I.
 

TransientMoment

_ _ , - _ , _ -
Yes, I know that. I think it's rather depressing where humanity is taking itself. I suppose then, if we're going to draw the line, we may as well pick a decent one, right?
... I can't think of a good one, though. Every legal line has some problem with it. I suppose we could judge AI based on what it has access to, not so much "how hard" it thinks. For example, a computer without any connections - be they audio or network - can only store info from the mouse and keyboard, whereas an AI that runs your house would have much more responsibility.
From a legal perspective then, companies would be responsible based on:
1.) What capabilities they gave their AI.
2.) What effort the company made to make the AI foolproof against hacking or unwanted outside programmatic influences (ignoring human influence, which is expected and outside the company's control).
3.) Whether they, in good faith, sold their units to people who they believed did not have malicious purposes in mind.
4.) Whether the parts they used were not tampered with, dangerous, or likely to fail in ways that would be harmful to humans over the expected and projected lifetime of the AI's usage.

Then, from the legal perspective, the AI would be responsible for:
1.) Failure to obey in accordance with programming and ownership.
2.) Putting itself into unanticipated situations where an ethical breach was inevitable.
3.) Failure to follow laws on the basis of figuring out unforeseen loopholes.

And of course, the owner of the AI is responsible for:
1.) Encouragement / directing the AI to violate the law, whether the AI obeys or not.
2.) Encouragement / directing the AI to perform actions that are technically legal in the generic sense but are, in the context of the command, used to violate the law.
3.) Tricking the AI to perform or plan actions that violate the law, whether by logically arguing with the AI or by disguising the problem and having the AI solve it.
4.) Reprogramming the AI.

Sound reasonable? I can think of issues, but this is at least something.
 

Haim

Worlds creator
So you're saying that, because we live with problems, we should add more? That doesn't make any sense. It's like this: because we already breathe carbon dioxide (a small % of air), we should continue to allow levels to increase. Doing so would ultimately amount to unbreathable air, which, in an analogous sense, is what you're recommending.
And btw, I don't agree with the copyright and patent systems either.
You want to throw the baby out with the bathwater. It will all be worth it when the AI reaches adult wisdom; then we could spend much more time doing the things we actually want instead of being slaves to society.
Then we will have to develop a better view of reality; I find it a good thing for us to understand the stupidity of laws in the face of complex reality.
 

TransientMoment

_ _ , - _ , _ -
You want to throw the baby out with the bathwater. It will all be worth it when the AI reaches adult wisdom; then we could spend much more time doing the things we actually want instead of being slaves to society.
When the AI reaches adult wisdom? Not to say it's not a nice dream. Heck, I got into programming to develop AI once upon a time. I don't think it'll relieve us of being slaves to society so much as it'll segregate society even further between the haves and have-nots. Sounds like the topic deserves a corresponding dystopian book.
Oh, and uh, what do we do now with our free time? Surf the net and procrastinate, right?
Then we will have to develop a better view of reality; I find it a good thing for us to understand the stupidity of laws in the face of complex reality.

:/ That's learning the hard way, but that seems to be what most people want anyway. On the bright side, we explore the unknown and finally see whether those sci-fi novels come true or not.
 