AI is an annoying buzzword. People have lots of abstract terms for referring to things: "love", "rights", "intelligence". While these words are nice for categorizing things, they hide the actual act, such that people forget or are oblivious to what's actually going on. For instance, you can tell a child all you want that he/she needs to be a loving, respectable citizen, but if they never see examples or have any connection of those ideas to reality, then the child will have no idea whether or not they have truly fulfilled those goals.

AI is similar in that people get this magical idea that once something is complex enough, it somehow transcends its actual existence. A human is, in fact, a blob of atoms. All of those atoms work together in a unique whole that, while you could technically describe it as "atoms moving", clearly acts as a distinct unit. The same could be said for moral "right" and "wrong". While these things aren't particularly meaningful in and of themselves, if I were to, say, bash you over the head, you would consider that morally "wrong", whether you consider morality to be subjective or not. (On a tangent, I would say that morality is both subjective AND universally shared among humans (since I don't believe animals share it), because (a) I believe it comes from God, who gives everyone the same basis, which then gets distorted for a broad range of reasons, and (b) many of these moral rights and wrongs seem to be commonly agreed upon; admittedly, there are various explanations for the discrepancies, which I won't go into.)

AI can be described as electrons flowing through transistors and obeying software, but it does in fact form some unified whole and gets named as such, which leaves average people unaware that it will never be more than that. Whether or not humans have souls isn't even a question for AI. AI will never have consciousness, because consciousness doesn't simply spawn out of nothingness or out of some special configuration of electrons (which would imply consciousness could spawn randomly in a rainstorm), nor do the little bits and workings of code mean anything to the AI itself. They shouldn't have to.

Granting civil "rights" to AI is thus absurd. It wouldn't "appreciate" those rights unless we tried to give it some artificial set of emotions, and those emotions would be meaningless. And why bother? Even if we could create emotions or consciousness, what is the point in doing so? To play God? To give ourselves more headaches figuring out what to do with these new "sentient beings"?

AI is designed to simulate human intelligence for certain roles, but I'm pretty sure a number of people have stars in their eyes and think we could actually create those emotions and that consciousness. What geek doesn't envision the seemingly wonderful prospect of inventing amazing AI? It would supposedly satisfy our human longing for community while simultaneously being perfect in both emotional stability and rational nature. AI is an INTP's dream! That's probably the driving factor behind why so many geeks pursue it and get excited about it. It's not that inventing AI is somehow "inevitable" (all you have to do is stop working on it); it's that so many people want it. Alas, I, on the other hand, don't want the social mess.

But do I want machine learning? Problems like machine learning present a predicament: at what point can we say "the 'bot learns too much to have predictable, safe behavior"?
Where do we draw the line between it being the programmer's/company's fault and the machine having "learned too much" or been "fed the wrong info" from a malicious source? These days, companies can still be punished for not protecting user data, even though, obviously, they wouldn't want someone to hack them. Where does the line get crossed to where it's not their fault? I have the same question for AI. (Disclaimer: I did not review this post to ensure total accuracy or clarity in what I've said. My apologies in advance. lol. Makes me wonder about the disclaimers of future robotics companies.)