
Artificial Intelligence and Law

Da Blob

Banned
Local time
Yesterday 9:02 PM
Joined
Dec 19, 2008
Messages
5,926
---
Location
Oklahoma
There are any number of fascinating but obscure journals in this world. Unlike the propaganda spewed by the MSM, these journals are a true source of news, IMO.

Here's a link to one of them that at least a few forum members could find intriguing...

http://dl.acm.org/citation.cfm?id=1375958&picked=prox
 

Cognisant

cackling in the trenches
Local time
Yesterday 3:02 PM
Joined
Dec 12, 2009
Messages
11,393
---
I think that journal is mainly about UAV drones: their increasingly autonomous use of lethal weaponry, and who exactly is legally liable when people are killed. If a UAV misfires on an allied unit, is it the fault of the engineer whose duty it is to maintain the weaponry and sensors? The person who coded the "artificial intelligence"? The person who deployed the UAV in that area? Or is it the fault of the unit on the ground for not having a reliable IFF system?

It's easy to end up in a situation where, for example, an onboard sensor has an electrical fault and the damaged sensor causes the software to mistakenly select an ally as a target. That is arguably the field commander's fault for having that unit in the UAV's field of fire (you wouldn't have people walking in front of an automated turret, would you, IFF or not?). So in the end nobody is held accountable: just like a soldier in the field, the UAV was acting autonomously, so no one person who contributed to the accident can be held directly accountable, but lacking a will of its own, the UAV can't be held accountable either.
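The failure chain described above can be sketched as a toy program. This is not any real UAV software; the transponder code, the fault model, and the function names are all hypothetical, chosen only to show how a hardware fault can flow through correctly written targeting code and produce a misclassification that no single component "caused":

```python
FRIENDLY_CODE = 0x2A  # hypothetical IFF transponder code for allied units


def read_transponder(raw_signal, sensor_ok=True):
    """Simulated IFF sensor: an electrical fault garbles the reading."""
    if not sensor_ok:
        return raw_signal ^ 0xFF  # fault flips the bits of the signal
    return raw_signal


def classify(signal):
    """Targeting software: trusts the sensor, exactly as specified."""
    return "friendly" if signal == FRIENDLY_CODE else "hostile"


# Healthy sensor: the ally is correctly recognised.
assert classify(read_transponder(FRIENDLY_CODE)) == "friendly"

# Faulty sensor: same software, same deployment decision, yet the
# ally is now marked hostile. The engineer's code, the programmer's
# classifier, and the commander's placement each behaved "correctly"
# in isolation; the accident emerges only from their combination.
assert classify(read_transponder(FRIENDLY_CODE, sensor_ok=False)) == "hostile"
```

The point of the sketch is that the misclassification cannot be pinned to one line of code, which mirrors the accountability gap in the post above.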

That legal ambiguity also makes autonomous systems the perfect tool for getting away with murder. If you can contrive a situation that will cause an autonomous system to malfunction, you can set up "accidents" involving so many variables that proving the murderer actually intended to commit murder becomes almost impossible.
 

Da Blob

Banned
Local time
Yesterday 9:02 PM
Joined
Dec 19, 2008
Messages
5,926
---
Location
Oklahoma
I don't know; there seemed to be a wide range of topics (?)

I scrolled through the other issues just to see what was there. I kind of wish I had kept up with A.I. jargon, because there really seemed to be some 'world-changing' issues being discussed. The kind of material that affects everyone but that very few will ever be aware of...
 

Polaris

Prolific Member
Local time
Yesterday 3:02 PM
Joined
Oct 13, 2009
Messages
2,259
---
That legal ambiguity also makes autonomous systems the perfect tool for getting away with murder. If you can contrive a situation that will cause an autonomous system to malfunction, you can set up "accidents" involving so many variables that proving the murderer actually intended to commit murder becomes almost impossible.

When it concerns warfare and weaponry, the law would be such that responsibility for any mishap (contrived or accidental; either way is irrelevant) would ultimately have to lie with the unit that authorised the use of such an autonomous system. The risk of collateral damage isn't a two-way situation: the damaged (humans) are in no position to sign a document of consent; they simply happened to be there. Perhaps operating staff would be required to sign some sort of consent contract.

However, regarding autonomous agents such as those used in e-commerce, the regulations may require a different approach; otherwise software developers or purchasers/companies may find themselves in potentially serious legal strife.
 