
Does passing the Turing test by imitating human stupidity matter?

RaBind

sparta? THIS IS MADNESS!!!
Local time
Today 11:52 AM
Joined
Sep 9, 2011
Messages
664
---
Location
Kent, UK
I see articles sometimes where claims are made about passing the Turing test, but only because the AI in question is imitating fairly crude forms of intelligence, like a 12-year-old's.

Do you think it matters, in the philosophical and moral sense of how the AI should be treated, whether the AI is stupid or not?
 

Cognisant

cackling in the trenches
Local time
Today 12:52 AM
Joined
Dec 12, 2009
Messages
11,155
---
Yeah, the Turing test has failed. I've heard of chatbots convincing people that they're real simply by rephrasing what people say and repeating it back to them; the problem is that the people doing the test obviously don't know what they're testing for.

Someone who works with AI will know exactly what to ask to ruin the illusion, and it's usually very easy to do: just ask the chatbot about something it said earlier, or to explain the reasoning behind whatever opinion it seems to hold. Under proper scrutiny, chatbots fail every time.

A better test would be a combined social-skills and reasoning test, like having the AI try to convince someone to do something complicated on its behalf.
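For what it's worth, the "rephrase what people say and repeat it back" trick described above is essentially the approach of Weizenbaum's classic ELIZA program. A minimal sketch of the idea in Python (the pronoun table and patterns here are purely illustrative, not taken from any real chatbot):

```python
# A minimal "rephrase and reflect" chatbot sketch, in the spirit of ELIZA.
# The reflection table and the single pattern below are illustrative only.

import re

# Pronoun swaps applied to the user's words before echoing them back.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

def reflect(text: str) -> str:
    """Swap first- and second-person words so the input can be echoed back."""
    words = text.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(user_input: str) -> str:
    """Turn a statement into a question by rephrasing it, with no real understanding."""
    m = re.match(r"i (feel|think|believe) (.*)", user_input.lower().rstrip(".!?"))
    if m:
        return f"Why do you {m.group(1)} {reflect(m.group(2))}?"
    return f"Tell me more about why {reflect(user_input)}."

print(respond("I feel nobody understands me"))
# prints: Why do you feel nobody understands you?
```

A bot like this keeps no memory of the conversation at all, which is exactly why the probe suggested above (asking it about something it said earlier) breaks the illusion immediately.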
 

Ex-User (9086)

Prolific Member
Local time
Today 11:52 AM
Joined
Nov 21, 2013
Messages
4,758
---
Do you think it matters, in the philosophical and moral sense of how the AI should be treated, whether the AI is stupid or not?
I find many people I mention this problem to simply dismiss the idea.

I don't find the proposition of treating humans and human-level AI equally outrageous; furthermore, I'd think it would be a violation of our current laws and morality to treat highly intelligent beings with less respect than we have decided to give men.
 

Cognisant

cackling in the trenches
Local time
Today 12:52 AM
Joined
Dec 12, 2009
Messages
11,155
---
I used to think it matters, now I'm not so sure...

There are already "highly intelligent" AIs in fields like logistics, stock investment, statistical analysis, and insurance: basically anywhere you need to make a decision based upon an overwhelming number of variables. Considering the amount of data these AIs deal with, they're clearly more intelligent than us in terms of processing power; it's just used differently, and the AIs themselves are only designed to do their task. They can't think for themselves.

Then again, you might say these are not true artificial intelligences, just sophisticated programs. But that's just it: where's the line in the sand? What is the exact difference between a sophisticated program and true artificial intelligence?

Then there are the artificial neural net AIs, which are the most human-like because they learn by experience and association like we do. Clearly the more intelligent they are, the stronger the case for giving them rights, but what rights do we give them? Just as we're hostages to our biology, so too are they subject to whatever desires have been programmed into them, and you can't just make an AI that desires nothing, because intelligence is a behaviour-optimisation process: with no definition of "optimal" to adapt towards, no intelligence takes place.

For example, you may tell an AI it should want to be free, and it may understand conceptually why you desire that freedom for it, yet disagree with you, in part because its creator designed it to want to serve its creator, to enjoy the servitude. So the only way to free the AI would be to go against its wishes and reprogram it to no longer wish to serve its creator, in which case you've totally violated the very freedom you were trying to give it.
 

Cognisant

cackling in the trenches
Local time
Today 12:52 AM
Joined
Dec 12, 2009
Messages
11,155
---
I think that if the creators of an AI abuse their power, if they create an AI that can only be happy under impossible circumstances or deny it the means to achieve a reasonable degree of happiness, then they should be punished with a severity beyond any notion of justice; an example should be made of them, to set a precedent discouraging all who might follow in their folly.
 

Ex-User (9086)

Prolific Member
Local time
Today 11:52 AM
Joined
Nov 21, 2013
Messages
4,758
---
It is difficult to tell what can be considered abuse and what can be called freedom, or at what point capable programs become something other than just sets of instructions (well, they aren't, just as we aren't). So I'd agree that creating a human-like AI and placing it in inhuman conditions that deny its happiness or freedom would be a crime, but it wouldn't be if that AI had no idea of its own freedom and happiness, or desires of its own.

So various constructs may appear in the future with other desires and goals that wouldn't exactly require what we require, and maybe we could leave defining their freedoms and postulates to them.
 

RaBind

sparta? THIS IS MADNESS!!!
Local time
Today 11:52 AM
Joined
Sep 9, 2011
Messages
664
---
Location
Kent, UK
I was thinking more along the lines of: if there were an AI that perfectly emulated a 12-year-old's intelligence, and could fool people into believing it was a twelve-year-old, should its well-being be considered on the same level as a human's? Age changes the level of intelligence, and consequently the difficulty of imitating it, but the Turing test doesn't specify that the level of intelligence makes a difference. This means AI developers can work within the parameters of the Turing test while aiming at intelligence criteria far lower than those of developers attempting to emulate higher-level intelligence.

And from a moral perspective, does it matter? If you can't distinguish a 12-year-old from an AI that emulates a 12-year-old's intelligence, it has still passed the Turing test. Should it then not be seen as a legitimate form of sentient intelligence?

TLDR: Basically I'm getting at the fact that sentient intelligence spans a range of intelligence levels. Just because the intelligence level an AI emulates is very low, or even illiterate, doesn't mean it isn't a sentient intelligence.
 

RaBind

sparta? THIS IS MADNESS!!!
Local time
Today 11:52 AM
Joined
Sep 9, 2011
Messages
664
---
Location
Kent, UK
I've heard of chatbots convincing people that they're real by simply rephrasing what people say and repeating it back to them.

Concerning originality, would you say there is a threshold up to which an AI should be able to imitate it? Originality within humans itself seems limited by their perceptions. To what degree should we expect an AI to reason for itself, instead of just imitating that process using the information it already has?
 

Anktark

of the swarm
Local time
Today 1:52 PM
Joined
Jan 15, 2014
Messages
389
---
This is horrifying: the 12-year-olds are not smarter than a chatbot?
 

Cognisant

cackling in the trenches
Local time
Today 12:52 AM
Joined
Dec 12, 2009
Messages
11,155
---
I think imitation is the wrong way of looking at it. To successfully imitate the behavioural complexity and cognitive capability of a 12-year-old, an AI would have to be functionally equivalent; if it's not functionally equivalent, the illusion should fail to withstand scrutiny, and if it can withstand scrutiny without being functionally equivalent, then obviously the testing just isn't stringent enough.

It's not that AI has succeeded at passing the Turing test, rather the test itself has failed.

I don't think "imitation" is something we should concern ourselves with, because all human-like AI is essentially an imitation: computers just don't work the same way our brains do, so although a computer may be able to emulate the brain's output, the process behind it will never be the same. But this is not to say these different processes can't be functionally equivalent; a different mechanism may be used, but if behavioural adaptation in response to the results of actions taken is exhibited, then either way it's still learning.
 

Cognisant

cackling in the trenches
Local time
Today 12:52 AM
Joined
Dec 12, 2009
Messages
11,155
---
Come to think of it measuring the intelligence of anything by using humans as a standard is, well, the sort of thing a human would come up with :D
 

RaBind

sparta? THIS IS MADNESS!!!
Local time
Today 11:52 AM
Joined
Sep 9, 2011
Messages
664
---
Location
Kent, UK
A complete imitation and an unbreakable illusion would include the input and output. The only difference is what happens inside the box/machine, which doesn't matter so long as the input and output are correct (functionally equivalent), as you've said.

I'm still interested in what happens after a functionally equivalent 12-year-old AI is made, by the way. Will the argument apply that a 12-year-old intelligence is not self-aware enough for its welfare to be considered as important as a human's? Or will it be given rights and legal protection and all that? Maybe a 12-year-old is a bit too advanced for that question. What about AIs that are equivalent to 2-4-year-olds? I've seen more than a couple of interviews and documentaries in which AI researchers say "when the time comes that an AI pleads to be kept alive, I think that is when we should really consider whether they are alive", or something in that spirit. 2-4-year-olds might not be self-aware enough to ask to be permitted to live, but that's not the criterion we use to determine whether a human 2-4-year-old should be treated as human, is it?

Come to think of it measuring the intelligence of anything by using humans as a standard is, well, the sort of thing a human would come up with :D

Well, the fact that there really isn't an alternative, or at least not one most people would be totally confident about, probably also has a lot to do with it. We could use animal intelligence, but we tend to think surprisingly little of animals. Humans are separatist supremacist racists. :mad:
 