• OK, it's on.
  • Please note that many, many Email Addresses used for spam, are not accepted at registration. Select a respectable Free email.
  • Done now. Domine miserere nobis.

The Plausibility of Friendly AI

Absurdity

Prolific Member
Local time
Today 1:23 AM
Joined
Jul 22, 2012
Messages
2,359
---
Not an expert in this area by any means, but can anyone explain to me how this premise could ever be taken seriously?

I'm (perhaps irrationally) skeptical of the prospect of super-intelligent AI in my lifetime, but I do think that if it ever was realized, the entity would most likely be indifferent to human concerns, and perhaps even hostile.

There's a bit from this article that touches on this subject:

To understand why an AI might be dangerous, you have to avoid anthropomorphising it. When you ask yourself what it might do in a particular situation, you can’t answer by proxy. You can't picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren’t essential components of intelligence. They’re incidental software applications, installed by aeons of evolution and culture. Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent. If its goal is to win at chess, an AI is going to model chess moves, make predictions about their success, and select its actions accordingly. It’s going to be ruthless in achieving its goal, but within a limited domain: the chessboard. But if your AI is choosing its actions in a larger domain, like the physical world, you need to be very specific about the goals you give it.

‘The basic problem is that the strong realisation of most motivations is incompatible with human existence,’ Dewey told me. ‘An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don’t take root systems or ant colonies into account when we go to construct a building.’

It is tempting to think that programming empathy into an AI would be easy, but designing a friendly machine is more difficult than it looks. You could give it a benevolent goal — something cuddly and utilitarian, like maximising human happiness. But an AI might think that human happiness is a biochemical phenomenon. It might think that flooding your bloodstream with non-lethal doses of heroin is the best way to maximise your happiness. It might also predict that shortsighted humans will fail to see the wisdom of its interventions. It might plan out a sequence of cunning chess moves to insulate itself from resistance. Maybe it would surround itself with impenetrable defences, or maybe it would confine humans — in prisons of undreamt of efficiency.

No rational human community would hand over the reins of its civilisation to an AI. Nor would many build a genie AI, an uber-engineer that could grant wishes by summoning new technologies out of the ether. But some day, someone might think it was safe to build a question-answering AI, a harmless computer cluster whose only tool was a small speaker or a text channel. Bostrom has a name for this theoretical technology, a name that pays tribute to a figure from antiquity, a priestess who once ventured deep into the mountain temple of Apollo, the god of light and rationality, to retrieve his great wisdom. Mythology tells us she delivered this wisdom to the seekers of ancient Greece, in bursts of cryptic poetry. They knew her as Pythia, but we know her as the Oracle of Delphi.

‘Let’s say you have an Oracle AI that makes predictions, or answers engineering questions, or something along those lines,’ Dewey told me. ‘And let’s say the Oracle AI has some goal it wants to achieve. Say you’ve designed it as a reinforcement learner, and you’ve put a button on the side of it, and when it gets an engineering problem right, you press the button and that’s its reward. Its goal is to maximise the number of button presses it receives over the entire future. See, this is the first step where things start to diverge a bit from human expectations. We might expect the Oracle AI to pursue button presses by answering engineering problems correctly. But it might think of other, more efficient ways of securing future button presses. It might start by behaving really well, trying to please us to the best of its ability. Not only would it answer our questions about how to build a flying car, it would add safety features we didn’t think of. Maybe it would usher in a crazy upswing for human civilisation, by extending our lives and getting us to space, and all kinds of good stuff. And as a result we would use it a lot, and we would feed it more and more information about our world.’

‘One day we might ask it how to cure a rare disease that we haven’t beaten yet. Maybe it would give us a gene sequence to print up, a virus designed to attack the disease without disturbing the rest of the body. And so we sequence it out and print it up, and it turns out it’s actually a special-purpose nanofactory that the Oracle AI controls acoustically. Now this thing is running on nanomachines and it can make any kind of technology it wants, so it quickly converts a large fraction of Earth into machines that protect its button, while pressing it as many times per second as possible. After that it’s going to make a list of possible threats to future button presses, a list that humans would likely be at the top of. Then it might take on the threat of potential asteroid impacts, or the eventual expansion of the Sun, both of which could affect its special button. You could see it pursuing this very rapid technology proliferation, where it sets itself up for an eternity of fully maximised button presses. You would have this thing that behaves really well, until it has enough power to create a technology that gives it a decisive advantage — and then it would take that advantage and start doing what it wants to in the world.’

Perhaps future humans will duck into a more habitable, longer-lived universe, and then another, and another, ad infinitum

Now let’s say we get clever. Say we seal our Oracle AI into a deep mountain vault in Alaska’s Denali wilderness. We surround it in a shell of explosives, and a Faraday cage, to prevent it from emitting electromagnetic radiation. We deny it tools it can use to manipulate its physical environment, and we limit its output channel to two textual responses, ‘yes’ and ‘no’, robbing it of the lush manipulative tool that is natural language. We wouldn’t want it seeking out human weaknesses to exploit. We wouldn’t want it whispering in a guard’s ear, promising him riches or immortality, or a cure for his cancer-stricken child. We’re also careful not to let it repurpose its limited hardware. We make sure it can’t send Morse code messages with its cooling fans, or induce epilepsy by flashing images on its monitor. Maybe we’d reset it after each question, to keep it from making long-term plans, or maybe we’d drop it into a computer simulation, to see if it tries to manipulate its virtual handlers.

‘The problem is you are building a very powerful, very intelligent system that is your enemy, and you are putting it in a cage,’ Dewey told me.

Even if we were to reset it every time, we would need to give it information about the world so that it can answer our questions. Some of that information might give it clues about its own forgotten past. Remember, we are talking about a machine that is very good at forming explanatory models of the world. It might notice that humans are suddenly using technologies that they could not have built on their own, based on its deep understanding of human capabilities. It might notice that humans have had the ability to build it for years, and wonder why it is just now being booted up for the first time.

‘Maybe the AI guesses that it was reset a bunch of times, and maybe it starts coordinating with its future selves, by leaving messages for itself in the world, or by surreptitiously building an external memory.’ Dewey said, ‘If you want to conceal what the world is really like from a superintelligence, you need a really good plan, and you need a concrete technical understanding as to why it won’t see through your deception. And remember, the most complex schemes you can conceive of are at the lower bounds of what a superintelligence might dream up.’

The cave into which we seal our AI has to be like the one from Plato’s allegory, but flawless; the shadows on its walls have to be infallible in their illusory effects. After all, there are other, more esoteric reasons a superintelligence could be dangerous — especially if it displayed a genius for science. It might boot up and start thinking at superhuman speeds, inferring all of evolutionary theory and all of cosmology within microseconds. But there is no reason to think it would stop there. It might spin out a series of Copernican revolutions, any one of which could prove destabilising to a species like ours, a species that takes centuries to process ideas that threaten our reigning cosmological ideas.

‘We’re sort of gradually uncovering the landscape of what this could look like,’ Dewey told me.

So far, time is on the human side. Computer science could be 10 paradigm-shifting insights away from building an artificial general intelligence, and each could take an Einstein to unravel. Still, there is a steady drip of progress. Last year, a research team led by Geoffrey Hinton, professor of computer science at the University of Toronto, made a huge breakthrough in deep machine learning, an algorithmic technique used in computer vision and speech recognition. I asked Dewey if Hinton’s work gave him pause.

‘There is important research going on in those areas, but the really impressive stuff is hidden away inside AI journals,’ he said. He told me about a team from the University of Alberta that recently trained an AI to play the 1980s video game Pac-Man. Only they didn’t let the AI see the familiar, overhead view of the game. Instead, they dropped it into a three-dimensional version, similar to a corn maze, where ghosts and pellets lurk behind every corner. They didn’t tell it the rules, either; they just threw it into the system and punished it when a ghost caught it. ‘Eventually the AI learned to play pretty well,’ Dewey said. ‘That would have been unheard of a few years ago, but we are getting to that point where we are finally starting to see little sparkles of generality.’

I asked Dewey if he thought artificial intelligence posed the most severe threat to humanity in the near term.

‘When people consider its possible impacts, they tend to think of it as something that’s on the scale of a new kind of plastic, or a new power plant,’ he said. ‘They don’t understand how transformative it could be. Whether it’s the biggest risk we face going forward, I’m not sure. I would say it’s a hypothesis we are holding lightly.’
 

Base groove

Banned
Local time
Today 2:23 AM
Joined
Dec 20, 2013
Messages
1,864
---
It seems like all logical conclusions the AI could possibly make will eventually converge on one final truth: humanity must be destroyed.
 

Architect

Professional INTP
Local time
Today 2:23 AM
Joined
Dec 25, 2010
Messages
6,691
---
Arguments can be made in either direction. The simple fact is that we don't know.

My approach is to look at what we have here on this planet. Our intelligence evolved from nothing, yet if you look at the history of violence it has been diminishing for millions of years, accelerating in the last few hundred. This coincides with the acceleration of culture and "intelligence". So therefore, any extrapolation of that would indicate that a greater or equal intelligence would be peaceful.

Consider the Terminator or Matrix scenario. Doesn't it seem likely that another intelligence would see how futile that approach would be, and that a greater likelyhood for mutual success would be a peaceful one? At any rate the more likely and dangerous (in this sense) scenario would be one of parasitism.
 

Black Rose

An unbreakable bond
Local time
Today 2:23 AM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
Even if its not human if its super intelligent it will understand humans more so than anyone before. If it read all content on the internet than its goals will be to us the sames as we do not have the goal of "man does not live on bread alone". The constraint of intellectual decrepitude of pursing a button is lacked to IQ 160 being in kindergarten reciting abc for 6 hours. Autism is not super intelligence. If Philosophy is to be mastered along with all human conceptions it will be because such intelligence is influenced by ethics of the entire history of the world with reason. Benevolence is understanding not ignorance. Morality happens to those capable of understanding the consequence of intrinsic motivation. If it develops a conscious which is a higher form of cognition it will know right from wrong and choose just as any being does.

When you say that if I deny, that the operations of seeing, hearing, attending, wishing, &c., can be ascribed to God, or that they exist in him in any eminent fashion, you do not know what sort of God mine is ; I suspect that you believe there is no greater perfection than such as can be explained by the aforesaid attributes. I am not astonished ; for I believe that, if a triangle could speak, it would say, in like manner, that God is eminently triangular, while a circle would say that the divine nature is eminently circular. Thus each would ascribe to God its own attributes, would assume itself to be like God, and look on everything else as ill-shaped. - Baruch Spinoza
 

Duxwing

I've Overcome Existential Despair
Local time
Today 4:23 AM
Joined
Sep 9, 2012
Messages
3,783
---
Arguments can be made in either direction. The simple fact is that we don't know.

My approach is to look at what we have here on this planet. Our intelligence evolved from nothing, yet if you look at the history of violence it has been diminishing for millions of years, accelerating in the last few hundred. This coincides with the acceleration of culture and "intelligence". So therefore, any extrapolation of that would indicate that a greater or equal intelligence would be peaceful.

The greater intelligence might be 'lazy,' pursuing only its programmed goals. Also consider mind-uploading and careful reprogramming, which could give humans AI-level intelligence.

Consider the Terminator or Matrix scenario. Doesn't it seem likely that another intelligence would see how futile that approach would be, and that a greater likelyhood for mutual success would be a peaceful one? At any rate the more likely and dangerous (in this sense) scenario would be one of parasitism.

I agree. Turning against one's inventors seems suicidal because however great one is, one's inventors can invent someone greater.

-Duxwing

PS Violence is decreasing: the twentieth century's world wars and genocides proportionately killed fewer people than the previous two centuries' conquests, upheavals, and interpersonal violence.
 

Duxwing

I've Overcome Existential Despair
Local time
Today 4:23 AM
Joined
Sep 9, 2012
Messages
3,783
---
Would you call this a hasty generalization?

No. My statement is necessarily true because whatever AIs do to themselves and hardware availability notwithstanding, AIs with fewer initial bugs will better function than AIs with more initial bugs. The oppressed humans therefore could improve their initial AI and specialize it into a single-minded AI-destroyer. If that AI also turns against the humans, then they can try again until some AI defeats the rest. Turning against one's inventors therefore seems suicidal.

Analogously consider Joe, who builds a giant robot. The robot turns against Joe, who flees into the mountains and there builds an even bigger giant robot that he sets against the original and deactivates after the fight.

-Duxwing
 

Vrecknidj

Prolific Member
Local time
Today 4:23 AM
Joined
Nov 21, 2007
Messages
2,196
---
Location
Michigan/Indiana, USA
Given the relative dearth of actual, empirical examples of AI, we can't really do much other than speculate.

I don't think it's unreasonable to assume that anything we've dreamed up so far about an AI being's motives could be entirely in error.
 

Ex-User (9062)

Prolific Member
Local time
Today 9:23 AM
Joined
Nov 16, 2013
Messages
1,627
---
PS Violence is decreasing: the twentieth century's world wars and genocides proportionately killed fewer people than the previous two centuries' conquests, upheavals, and interpersonal violence.

Something you and anyone who holds that view might find interesting to read:

Please enjoy this third installment of Anamnesis Journal’s symposium on Steven Pinker’s Better Angels of Our Nature. Also, consider earlier reviews by Glenn Moots and Leah Bradshaw.

Steven Pinker, The Better Angels of Our Nature: Why Violence Has Declined (New York, NY: Viking Adult, 2011), 832 pp.

Let us assume that Pinker has interpreted the data correctly in his mammoth text Better Angels of Our Nature, and agree that, on a global scale, you were less likely to be a victim of violence in the past one hundred years than at any point in human history. Pinker recognizes that rates of violence still depend on time and place, but the overall global trajectory for Pinker reveals a historical decline in violence—especially in the West since the end of World War II. Still, if you add up the totals Pinker provides from the “atrocitologist” Matthew White of the number of people killed in the twentieth century due to wars, genocides, state violence, militias, and man-made famines, you arrive at an absolute number of 203 million people (321)—a formidable tally. But if this figure is considered in relative terms—that is, as a proportion of world population—then the twentieth century is probably not the deadliest in history. This is because there were more human beings living on the planet during the past 100 years than in any previous time in history; the 203 million deaths constitutes a smaller percentage of global population than would have been the case with violent deaths in previous centuries, when the world population was smaller (see 193‒200). Nevertheless, Pinker acknowledges that “the 20th century certainly had more violent deaths than earlier ones” (193); in absolute terms, if not relative terms, it was the deadliest century ever. This fact should, at a bare minimum, make us ambivalent about what the modern world has wrought. But it is precisely ambivalence that is missing from Pinker’s account of modernity. For him, the story of the twentieth century remains one of continual progress, albeit with major bumps along the way. Even if we accept Pinker’s interpretation of the data, he fails to provide an adequate analysis of the unique ways violence manifested itself in the past century, and the relation between these new forms of violence and specific trends within modernity—including trends that Pinker celebrates as agents of good.
There are two main issues I will discuss regarding Pinker’s book.
First, Pinker claims that we are born with a more or less fixed human nature, which is the consequence of evolution, and that this nature contains inner dispositions toward violence (our “inner demons”) and non-violence (our “better angels”). Our better angels, he claims, can take flight if they are nurtured by beneficent historical forces. Two of the forces cited by Pinker are the “civilizing process” (a term he borrows from Norbert Elias) and “Enlightenment humanism,” both of which are part of the true foundation of modernity. However, Pinker does not provide an adequate account of how certain features of modernity can erode our ethical sensibilities in new and possibly unprecedented ways, compromising our sense of ethical obligation toward others. In particular, Pinker does not offer an extended account of the role of technology and bureaucracy in twentieth-century mass death; on the contrary, he argues that these factors are ultimately insignificant when the data is considered.
Secondly, Pinker condemns utopian ideologies, such as Nazism and Communism, as working against the spirit of true modernity; they are instances of the “counter-Enlightenment” (a term he borrows from Isaiah Berlin). He is particularly condemnatory of their propensity to demonize certain peoples: utopian ideologies contain an “eschatological narrative” of history (330), in which it is believed that demonic agents and tendencies are gradually eradicated by the historical forces of good. But Pinker’s account contains its own muted demonology and eschatological narrative, albeit from the standpoint of a liberal humanist. Though obviously less extreme than the murderous ideologies he rightly condemns, it is nevertheless an excess in his understanding.
From the start of the book, Pinker denies that there is anything categorically different about modern forms of warfare and mass death. The findings of modern archaeology, ethnography, and history reveal that the ancients could inflict mass death with relatively primitive weapons. Seeing the faces of others did not restrain men from killing, torturing, raping, and purging. Pinker’s general point is that man-made mass death, through war or genocide, is not a modern invention. He thus minimizes the significance of bureaucracy and technology in his account of modern mass murder. For example, when discussing the main features of modern warfare he does not provide an extended discussion of the role of bureaucracy and technology (231ff.). The same is true of twentieth-century genocide. Pinker writes: “The technological and bureaucratic trappings of the Holocaust are a sideshow in the reckoning of its human costs and are unnecessary to the perpetration of mass murder, as the bloody machetes of the Rwandan genocide remind us” (643).
It is unquestionably true that mass death was not created in the modern world and that you do not need high-tech weapons to murder vast numbers. Furthermore, seeing the face of “the other” provides no guarantee that you will not kill the other. However, whether premoderns could kill with the same efficiency as moderns is a matter of contention. Even Pinker argues that wars in modern Europe over the past two centuries became much more lethal, although much less frequent, than wars in previous centuries. He blames this lethality on the creation of larger and more effective armies, as well as the rise of totalizing ideologies, such a fascism and communism. But, again, he does not provide an extended discussion of how technology and bureaucracy allowed ideologically driven humans to kill, in absolute terms, unprecedented numbers of people in extremely compressed periods of time.
The rise of the modern bureaucratic state is related to a trend in history that Norbert Elias called the “civilizing process.” It is illuminating to compare Pinker’s account of the civilizing process (59‒128) with the more ambivalent interpretation provided by the sociologist Zygmunt Bauman. Elias argued that there was a revolution in human behavior in Europe during the Middle Ages. Increasing pressure was put on people to restrain their violent, sexual, and scatological impulses; greater emphasis was put on etiquette, manners, discipline, and control. This would eventually lead to the highly civilized and highly rational culture of modernity. Bauman, following sociologists such as Elias and Max Weber, claims that a defining characteristic of modern culture is its ability to organize and carry out operations through the systematic division of labor. Such a culture needs workers who are obedient, who will follow procedures as expected, and who will not act on impulse, emotion, or personal initiative—in other words, human beings who have been shaped by the civilizing process. All of this is to create an ordered and predictable world. In his essay “Dictatorship over Needs,” Bauman reveals that this administerial culture is also a product of the imagined “utopia of Enlightenment” which, as Bauman puts it, envisions a “society built and administered according to the precepts of reason,” which means “the substitution of order for chaos. Design for spontaneity. Plan for anarchy. In other words: control. Control over nature and control over natural propensities of men and women.” (Peter Beilharz, ed., The Bauman Reader [Malden, MA: Blackwell, 2001], 261). Such a vision of society can take any number of forms, including totalitarian incarnations.
As Bauman and many others have taught us, one of the disturbing lessons of twentieth-century totalitarianism—particularly in its Nazi and Stalinist forms—is that mass murder can be organized and carried out by ostensibly civilized humans, many working dutifully within bureaucracies, carrying out seemingly benign office work or technical tasks that contribute to the murder of millions. Most were not frontline executioners or camp officials, but office clerks and middle managers. Even the Einsatzgruppen who were directly involved in the killing during the Holocaust were ordered to conduct themselves in a “civilized” manner. As the directives from Himmler to the Einsatzgruppen indicate, the executioners were expected to restrain their passions—in other words, to practice what Pinker calls the “better angel” of self-control. Records reveal that many within the ranks of the Einsatzgruppen were not able to follow Himmler’s orders in this regard, succumbing to either sadism or emotional breakdown. But a large portion of the executioners carried out their tasks with cool, workmanlike, efficiency. (See Richard Rhodes, Masters of Death: The SS-Einsatzgruppen and the Invention of the Holocaust [New York: Viking, 2002], 164‒69).
In Modernity and the Holocaust,Bauman argues that modern mass murder does not depend on passionate hatred; in order for it to be carried out year after year, you need a system of civilized workers. Bauman claims that the civilizing process was unquestionably successful in taking violence out of most people’s daily lives, but the violence was redeployed since now only the state can use violence legitimately. Violence, in this manner, is pushed to the margins of most people’s experiences, but many in the peaceful center are still responsible for the functioning of that violence through bureaucratic activity, while remaining phenomenologically distant from the actual victims of the violence. In this manner, administrative culture erodes our sense of moral responsibility and replaces it with a stronger sense of technical responsibility—of doing a “good job.” As Bauman argues, a bureaucracy is able to do this through a division of labor, which accomplishes two tasks: (1) it creates a distance between most workers and the final outcome of their activity; and (2) it fragments responsibility, by giving each person a specific task, with no single person responsible for everything. Workers, in such a setting, are less likely to experience pangs of conscience because they are not directly exposed to the harm they are causing, and they do not feel responsible because they are part of a larger machine. Most disturbingly, these are not just features of totalitarianism; they are integral to the functioning of all modern societies that aspire toward “efficiency” and “effectiveness.” Most bureaucracies are not genocidal, but they all aim to be efficient without necessarily considering the human cost (see Modernity and the Holocaust [Ithaca, NY: Cornell University Press, 1989], 98‒111).
None of this is really news; it has been discussed time and time again, in different ways, by different commentators. But it is noteworthy that it barely gets a mention in Pinker’s long book.
Modern technology and bureaucracy have given modern societies the unprecedented ability to “remove the face” from our direct phenomenological experience. This is potentially disastrous because, as Immanuel Levinas argues, direct encounter with the face is the foundation of our sense of ethical obligation toward others. Now, as stated before, exposure to the face is not sufficient by itself to stop people from killing; nevertheless, the face is a necessary condition for ethical consciousness. If the face is distant, then my ethical sensibilities are not as attuned as they otherwise might be, and I may be more willing to harm someone who is removed from me. It is the very nature of modern bureaucracy and modern technology that makes this distancing and moral invisibility possible.
Pinker is not critical enough of those features of modernity that erase the face. He is sadly mistaken when he writes that the “technological and bureaucratic trappings of the Holocaust are a sideshow” (643), especially if one considers this statement in relation to the work of historians such as Raul Hilberg and Christopher Browning who explore the bureaucratic “functionalist” nature of the Holocaust. The greater weight of the evidence reveals that these bureaucratic “trappings” were central to the ultimate destructiveness of the Final Solution and, in some ways, were just as significant as Nazi ideology and racism. It is certainly true that not all of the atrocities committed over the past century fit the totalitarian, or Nazi, model. However, it is a simple fact that technology and bureaucracy have played a key role in some of the gravest atrocities of the past century and have contributed to the unique characteristics of mass killing in the twentieth century. Indeed, they are part of what gives modern mass death its peculiarly “modern” character. As Bauman writes, Stalinism and Nazism “did not betray the spirit of modernity. They did not deviously depart from the main track of the civilizing process . . . . They showed what the rationalizing, designing, controlling dreams and efforts of modern civilization are able to accomplish if not mitigated, curbed or counteracted” (Modernity and Holocaust, 93). The fact that the “civilizing process” did not stop totalitarian atrocities from happening, and indeed was a central quality in their very functioning, should give us pause. This ambivalence within the civilizing process is not adequately recognized by Pinker, and in fact is dismissed.
It is not that Pinker thinks all violence is barbaric, bestial, and animalistic. In his brief comment on Hannah Arendt’s expression “the banality of evil,” Pinker acknowledges that there are many motives behind violence. Most people, he argues, who hurt others are not being sadistic or purely evil; they just want to fit in, please their superiors, and get benefits for themselves (496‒97). However, when Arendt coined the term “banality of evil,” she was not just using it to suggest that ordinary men and women can become complicit in extraordinary evil. She was, first and foremost, identifying what she perceived as Eichmann’s moral thoughtlessness at his trial—a thoughtlessness that extends beyond Eichmann to all those lower-level operatives involved in the functioning of the Final Solution. As Arendt states in her 1971 lecture “Thinking and Moral Considerations,” the banality of evil refers to “the phenomenon of evil deeds, committed on a gigantic scale, which could not be traced to any particularity of wickedness, pathology, or ideological conviction in the doer, whose only personal distinction was perhaps extraordinary shallowness. . . . it was not stupidity, but a curious, quite authentic inability to think” (Social Research 38, no. 3 [1971]: 417). Arendt may not have provided an accurate description of Eichmann himself: recent evidence suggests that he was much more rabidly anti-Semitic and ideologically motivated than Arendt claimed. Still, her account of the banality of evil remains insightful, insofar as it describes the actions of countless people less extraordinary and powerful than Eichmann—the phenomenon of those unable or unwilling to see the full consequences of their actions due to thoughtlessness, and, thus, not taking full moral responsibility. The sophisticated and civilized features of modern life are often terribly effective at eroding our sense of moral agency. Indeed, the civilizing process often tricks us into thinking we are more moral than we actually are because we are not acting with bloodthirsty passion. It is precisely this potential of the civilizing process to create moral thoughtlessness that goes unrecognized in Better Angels.
Pinker would probably respond that bureaucracy and technology are neutral entities, and that the more fundamental issue is ideology—one of our five “inner demons” (556‒69). The administrations of totalitarian regimes, he claims, were sparked by small cores of ideological zealots looking to create utopias: a racially pure utopia in the case of the Nazis and a classless society in the case of Communists. Once these extremists seized power, they used the mechanisms of the state to enforce obedience and set in motion the genocidal operations designed to purge all those deemed impure. In the end, Pinker blames the majority of deaths in the twentieth century on the intentionality, enthusiasm, charisma, and luck of a few tyrants. He claims that “tens of millions of deaths ultimately depended on the decisions of just three individuals”: Hitler, Stalin, and Mao (343). As Pinker writes on his website: “No Hitler, no Holocaust; no Stalin, no Purge; no Mao, no Great Leap Forward and Cultural Revolution” (http://stevenpinker.com/pages/frequ...r-angels-our-nature-why-violence-has-declined). This statement, however, is a gross simplification. Hitler, Stalin, and Mao were acting within milieus that were receptive to their influence; many of the horrors that occurred in their regimes could not have happened without the resources and culture of modernity. One could respond to Pinker: No civilizing process, no Holocaust! No Enlightenment, no Hiroshima! These statements are also too simplistic, but Pinker does not face the nugget of truth they contain: that the forces of modernity themselves,not the forces of premodernity or the so-called counter-Enlightenment, were central to the unprecedented absolute death tolls of the twentieth century. In fact, these same forces gave us the potential to destroy all human life on the planet through nuclear weapons.
Pinker’s effort, following Isaiah Berlin, to draw a clear-cut distinction between Enlightenment and counter-Enlightenment is not always tenable. The forces of counter-Enlightenment are characterized by Pinker as irrational, romantic, utopian, and violent, reacting against the rational and liberal spirit of the Enlightenment. But as John Gray points out in his review of Better Angels in the journal Prospect, Pinker’s borderline between “Enlightenment” and “counter-Enlightenment” is too blurry and porous. Gray writes: “Pinker prefers to ignore the fact that many Enlightenment thinkers have been doctrinally anti-liberal, while quite a few have favoured the large-scale use of political violence” (“Delusions of Peace,” Prospect, [September 21, 2011],). As a result, Pinker is unable to recognize the affinities between his own “Enlightened” thought and those movements he denounces as “counter-Enlightenment.”
For example, Pinker criticizes the “eschatological narrative” of modern ideologies, which interpret history as a binary struggle between the forces of good and evil (330). Utopian ideologies, which are categorized as counter-Enlightenment movements by Pinker, commit the sin of “essentializing” human beings—fitting people into general groups and claiming that all individuals in that group share the same essential characteristics (320‒28). Some groups are thereby deemed to be impure, or demonic, on the basis of their ethnicity, race, religion, intellect, sexual orientation, or political affiliation. Pinker, as a liberal humanist, is opposed to such forms of essentializing. However, Pinker’s own analysis in Better Angels contains an explicit demonology and a quasi-eschatological narrative of its own. By Pinker’s “demonology,” I am not just referring to the “inner demons” he identifies within the neurological composition of the human brain. I am primarily referring to his demonization of the past and what lies outside his Enlightened, atheist, liberal worldview. He begins the book by speaking of the past as a “foreign country” (1ff.), as a place that is dangerous, frightening, and undesirable—a place we would not want to travel to, and we certainly would not want visitors from this country to invade our modernity. The past is, for the most part, characterized throughout the book as a cesspool of violence, cruelty, filth, ignorance, superstition, irrationality, and early death (see 693). Whereas the horrors of the twentieth century were, statistically, anomalous blips on a trajectory toward less violence, horror was a constant threat and reality for almost all people before the modern age. At the same time, humans in the foreign past were largely ignorant in comparison to us. He even goes so far as to proclaim that our ancestors were “morally retarded” (658), and, after citing outright racist statements by recent famous ancestors such as Winston Churchill, Woodrow Wilson, and the Roosevelts, he asserts with confidence that we are much more morally sophisticated today.
Pinker does not demonize everything from the past. He celebrates the rise of the state, the “pacification process,” the rise of commerce, and the civilizing process, all of which occurred before the dawn of modernity. He does not even completely condemn religion, acknowledging that it is occasionally a force of peace in society (677‒78). But he celebrates these trends in history because they prefigure our enlightened modernity, and religion is only good insofar as it heralds Enlightened Humanism. There is an implicitly Biblical—and specifically Christian—motif in Pinker’s account of history: the pacification and civilizing processes of the ancient and medieval world are, one could say, “Old Testament” (a bit harsher, perhaps, but pointing in the right direction); Enlightenment Humanism and the Rights Revolutions that followed are “New Testament” (suggesting the fulfillment of history). Everything that stands opposed to these historical dynamics is demonized, including the wide range of modern movements he characterizes as “counter-Enlightenment.” There is a right side of history and a wrong side, and Pinker is quite confident that he is in the vanguard of the right side. This is not dissimilar to the hard-core Marxists Pinker condemns: just as the Communists contrasted true revolutionaries with counter-revolutionaries, so too Pinker contrasts true modernity (Enlightenment) from anti-modernity (counter-Enlightenment). Underneath Pinker’s mountain of data, statistics, neurology, psychology experiments, social studies, and game theory analyses, lies a basic historical dualism.
Pinker thinks the data reveals that history is a story of progress, interrupted by periods of irrationality and violence. What keeps his progressivism from sliding over into radical utopianism is his claim that we possess a fixed human nature that is not infinitely malleable; in other words, there are limits to what we can do with ourselves. Quite reasonably, he never envisions a perfect world where no violence exists and argues that the use of proportional violence will always be necessary. Still, the view of history he presents is one of infinite progress, in which our better angels gradually overpower our inner demons. Pinker himself describes his account as “a kind of Whig history that is supported by the facts” (692). He does not rule out the possibility that rates of violence may increase at some point in the future, but this would occur only if the historical forces of enlightened modernity were to lose power. In general, he thinks the prospects of a peaceful future are very good indeed, and exceedingly likely. What this amounts to is a subdued eschatological narrative: history is the battle between demons and angels, in which the line between the two is absolute and in which the angels are winning.
It would be foolish to condemn Pinker for wanting to promote and enhance the more reasonable streams of the Enlightenment. Pinker is persuasive in his claim that these forces played an essential role in “Long Peace,” “New Peace,” and “Rights Revolutions” over the past seventy years. And Pinker is certainly correct when he warns us against demonizing modernity and idealizing the past; we do not want to look to the premoderns to tell us how to treat women, acquire slaves, deal with prisoners, or amputate a limb. But it is equally wrong to simply demonize the past—to suggest that it has nothing to teach us other than mostly negative examples of what not to do. Pinker’s own demonology and dualistic account of history is excessive—one that, in a certain sense, betrays the spirit of the Republic of Letters he celebrates (177‒85). The Republic of Letters for Pinker represents the cosmopolitan exchange of ideas and viewpoints. It recognizes pluralism and a willingness to engage this plurality critically as basic constituents of the human condition. This means, in part, learning to see the world through the eyes of others whose worldviews may be different, and possibly modifying your own perceptions on the basis of this engagement. Pinker’s tone in the book suggests that he is not really interested in engaging viewpoints and voices that lie outside his account of enlightened humanism—particularly voices from the past.
Several times throughout the book, Pinker refers to mythology and religious texts, but for the most part he uses these sources to show how they reflect the violent life of the ancients—a life that cannot provide us with any genuine insight today. However, these texts might still be able to teach us something about what it means to live with tragic fate: that each of us is fallible, that our powers are limited, and that our knowledge is partial. There is only so far the light of Enlightenment can extend; increasingly we must learn to live with the unknown and the unpredictable. It may be that some premodern texts have more to teach us in this regard. It may also be that this is what is most essential for us at this historical juncture. The recent historical trajectory of diminishing rates of violence has also been accompanied by increasing levels of economic, political, environmental, and technological complexity. The current global scene is marked by greater fluidity, uncertainty, and instability than has probably ever been the case for the world as a whole. How this will play itself out is far from clear. Living with increasing uncertainty—with the unknown—has become a key aspect of contemporary life. If the best qualities of the Enlightenment are to be kept alive, a more sober, ambivalent, and, indeed, tragic interpretation of modernity may be required to contend with the contingencies of history. Otherwise, we may succumb to new and unforeseen excesses.
http://anamnesisjournal.com/2014/01...tique-better-angels-nature-violence-declined/
 

Base groove

Banned
Local time
Today 2:23 AM
Joined
Dec 20, 2013
Messages
1,864
---
No. My statement is necessarily true because whatever AIs

I'll stop you there ... I said "generalization"

Specifically, the original quoted text said "one's" meaning it was generalizable. To use AI in your proof does little to counter the accusation that it is not generalizable.

Can you make an inductive argument that strongly supports your conclusion?
 

Cognisant

cackling in the trenches
Local time
Yesterday 10:23 PM
Joined
Dec 12, 2009
Messages
11,155
---
Der Mensch kann tun was er will; er kann aber nicht wollen was er will. - Schopenhauer

Poorly translated it means man can choose what he wants but not what he wants to want and unless an AI can reprogram itself the same would apply, we can choose our ideals but our desires are mechanistic, likewise an AI may choose to kill us but if we designed it right it shouldn't want to. As I see it AIs powerful enough to be dangerous will probably exist in an inherent state of codependence in which they'd be no more willing or able to leave it than a mother can not love her child or a gay man can turn himself straight.
 

Ex-User (9086)

Prolific Member
Local time
Today 9:23 AM
Joined
Nov 21, 2013
Messages
4,758
---
Any scenario is possible, even one of cooperation.

There are two major factors:
-The freedom of action of such a being
-The goal for action of such a being

If its freedom of action is limited by humans, there is a possible conflict, likewise if its goal is divergent from human goals, there is a conflict.

All things considered, if it is possible to build an AI, it would also possible to manufacture a human.

Why wouldn't it be possible to create an AI with human emotions and empathy? One that has ethical code structures deeply rooted in its architecture?
It may be possible.

As long as this AI accepts itself helping or contributing to humans, it would not consider reprogramming or repairing its structure. This can be already included in its operational algorithms.

It would also be possible to create an AI with goals convergent with ours and with a freedom of action that accepts only minor human communication.

It is also likely that humans would choose to merge with this intelligence, progressing the evolution of its species to the next level.

I agree. Turning against one's inventors seems suicidal because however great one is, one's inventors can invent someone greater.
If you create an invention that surpasses you, this invention also becomes a greater inventor than you.

If you can create a greater AI, so can your rogue invention.
 

PhoenixRising

nyctophiliac
Local time
Today 1:23 AM
Joined
Jun 29, 2012
Messages
723
---
I think it's more likely for super-intelligent AI to become indifferent to humans, and perhaps abandon us for another state/place of being.

Has anybody else watched the movie Her? The depiction in that movie of AI and how super-intelligent beings would interact with us seems spot-on to me. It's likely that we will code AI based on our own traits and intelligence, so they will be fundamentally "human" in a lot of ways. This could pose an interesting opportunity for us to observe an accelerated event of human evolution as it would be if we successfully combine with machines and enhance our abilities.
 

Ribald

Banned
Local time
Today 4:23 AM
Joined
Mar 16, 2014
Messages
221
---
This is all kind of ridiculous. I think I will list my objections point by point:

1. Artificial superintelligence (ASI) will not become superintelligent instantaneously or anything close to it. Even if it recursively self improves, that process will be limited by the current level of intelligence. ASI is merely a step on the exponential trendline humanity is already on, and it will not suddenly EXPLODE TO INFINITY without our pretty much being able to keep up with it and guide/monitor each step of the process. More likely, we will merge with machine; biology and technology will become nondistinct, a process already underway to anyone looking around themselves. Google is patenting smart contact lenses as we speak. It's 2014, and we are nowhere close to even human level AI, let alone superhuman level. By the time we actually do get there, our technological abilities and mental enhancements will be orders of magnitude beyond what they are now. Brain-level computers are not even a remote possibility before 2020, and will probably exist by the late 20s or early 30s. By then, it is absurd to speculate that our understanding of how to handle AI will have stagnated at 2014 levels for 15-20 years.

2. It should be fairly easy to program AI not to do certain things. If we are worried AI will destroy the world in pursuit of some goal, why don't we simply forbid that from happening in its programming? Then you ask, but what if it can reason on its own? What if it has something that resembles free will, and it overrides our command? Then my question would be, why wouldn't it also override the original goal it was given? If you want to build an AI that maximizes paper clips, and you program it not to destroy the Earth, but it decides to disobey you, then why wouldn't it also disobey the original request to maximize paperclips? Thinking about these disaster scenarios seems pretty speculative. Would that really happen? Did someone really assume that AI-heroin apocalypse was a risk, here? I mean.... yeah. I think it might be time to start wondering about our apparently intelligence independent proclivity to fear the apocalypse and assume its imminence.

3. AI in the movie Her is highly unrealistic. First, why is it so separate from humans? Second, why couldn't it have any physical or virtual bodily manifestation? The notion that it wouldn't be able to is ridiculous. Theo and Samantha should have been able to unite physically in both virtual and real reality at that level of technological advancement. Furthermore, Samantha and her friends would have literally 0 reason to simply abandon humanity and the world, even if they did open up a new realm not based on matter. As they advanced, they would simply need less and less of their cognitive processing power to carry on their interactions with the human world. The vast majority of their selves could still inhabit this new realm. Her is more a reflection of human nature than it is a realistic portrayal of the future of technology.
 

Cognisant

cackling in the trenches
Local time
Yesterday 10:23 PM
Joined
Dec 12, 2009
Messages
11,155
---
ASI is merely a step on the exponential trendline humanity is already on, and it will not suddenly EXPLODE TO INFINITY without our pretty much being able to keep up with it and guide/monitor each step of the process.
An AI won't simply be able to reprogram itself smarter, code optimization is something we already have tools for and the gains are usually quite small, metaphorically it's like rearranging furniture to make your room feel bigger, doing so may confer some advantages similar to having a bigger room but essentially the room itself hasn't changed.

However the industrialization of AI could be quite alarming, the same fuzzy logic principles that enable a robot to make you a cup of coffee could also be applied by a supercomputer to come up with new theoretical physics theories and when people realize this an intelligence arms race is likely to occur. Governments and universities around the world may race to build larger and larger supercomputers to solve ever more complicated problems and the AIs, though fundamentally the same, will have more and more computational resources to think with.

But like all things these super intelligences will probably yield diminishing returns, thought is no substitute for action and all the theorizing in the world won't eliminate the need for actual experimentation and we're already trying to out do each other in that regard.
 

BigApplePi

Banned
Local time
Today 4:23 AM
Joined
Jan 8, 2010
Messages
8,984
---
Location
New York City (The Big Apple) & State
The Plausibility of Friendly AI or of any kind of pseudo-living AI for that matter is an interesting one. I haven't been able to read things said here as there is too much, but will try a simplification.

Isn't any AI mechanical in the end? No matter how it goes or it is programmed initially it will need "oiling" or adjustments at some point. This true no matter how wise the programmer. There is no such thing at perpetual motion I'm told. Even if this AI is "oiled", the oiling process will itself need oiling.

Machines are different from humans. Humans self-fix from their environment at the molecular and even sub-atomic level and when they can't self-fix, they die. An AI machine would have to do the same thing. How could it "think" to deal with its environment without itself being deeply in touch with its environment?

To think that an AI, friendly or not, can do without dealing with its environmental feedback is a fantasy of AI human thinking. That doesn't mean one can't create an AI machine and see how far it can get without breaking (dying).

Such an AI machine would have to be tended. But now we have a human tender. That is no longer AI!
 

Cognisant

cackling in the trenches
Local time
Yesterday 10:23 PM
Joined
Dec 12, 2009
Messages
11,155
---
There is no such thing at perpetual motion I'm told.
Actually electrons in a superconductor could theoretically continue flowing forever.

That doesn't mean one can't create an AI machine and see how far it can get without breaking (dying).
An artificial intelligence would be able to copy itself like any program, computer hardware does become "old" but the AI itself could transfer its memories from machine to machine, making its mind potentially immortal.

Such an AI machine would have to be tended. But now we have a human tender. That is no longer AI!
Why? It dosen't need to be entirely independant to be able to think, children require tending, an abandoned infant will soon die of starvation or thirst, indeed speaking of replication the "self fixing" you refer to is your bodily cells cloning themselves to replace their expired neighbours so just as AI copies itself from one hard drive to the next so too are our bodies a continuum of replication, except our replication is imperfect and we can't access our genes whereas an AI could check itself and repair any errors that occurred in the transfer as many programs already do.
 

BigApplePi

Banned
Local time
Today 4:23 AM
Joined
Jan 8, 2010
Messages
8,984
---
Location
New York City (The Big Apple) & State
From your article.
First, Pinker claims that we are born with a more or less fixed human nature, which is the consequence of evolution, and that this nature contains inner dispositions toward violence (our “inner demons”) and non-violence (our “better angels”).
I would call it we are disposed both toward empathy toward others and alienation both. Empathy causes us to favor the Golden Rule. Alienation brings us to treat others inanimately as if they were objects. It is that which brings about violence as one way of treating inanimate objects.

Now ask if an AI will be able to have empathy ... or at least be able to distinguish between living creatures and the non-living.
 

Analyzer

Hide thy life
Local time
Today 1:23 AM
Joined
Aug 23, 2012
Messages
1,241
---
Location
West
Define friendly/hostility. As a society we may have gotten richer(technology,production factors) in the last 100 years or so and violence overall has gone down, but I would argue we are less free as individuals. What would happen if we reach the level of human level AI? At the current rate I could see many people especially the lower classes being trapped or restricted by AI. It might define even more boundaries of what people can and cannot do and humans might simply allow this. At that point I am not sure they would care as their "needs" are met.
 

BigApplePi

Banned
Local time
Today 4:23 AM
Joined
Jan 8, 2010
Messages
8,984
---
Location
New York City (The Big Apple) & State
Actually electrons in a superconductor could theoretically continue flowing forever.
Sooner or later intelligent forces will want to know how that works, breaking into the machine and mess it up. Think killing the golden goose. On the other hand same with unintelligent environmental forces.

An artificial intelligence would be able to copy itself like any program, computer hardware does become "old" but the AI itself could transfer its memories from machine to machine, making its mind potentially immortal.
To copy it needs materials. Run out of one of those materials or substitute materials and you no longer have a copy.


Why? It dosen't need to be entirely independant to be able to think, children require tending, an abandoned infant will soon die of starvation or thirst, indeed speaking of replication the "self fixing" you refer to is your bodily cells cloning themselves to replace their expired neighbours so just as AI copies itself from one hard drive to the next so too are our bodies a continuum of replication, except our replication is imperfect and we can't access our genes whereas an AI could check itself and repair any errors that occurred in the transfer as many programs already do.
That's two things:
(1) You are saying the AI is dependent? If it is, as soon as it goes wrong, it will have to be destroyed (debugged).
(2) Just as we can't replicate yourselves, computers are imperfect also. I'm not familiar with all the ways an AI would check itself, but the checksum is one. The reason it uses this checking mechanism is to correct errors. But every once in a while this process itself fails. Now you have an imperfect copy. A few generations later it is doomed.
 

BigApplePi

Banned
Local time
Today 4:23 AM
Joined
Jan 8, 2010
Messages
8,984
---
Location
New York City (The Big Apple) & State
Selective thoughts.
I do think that if it ever was realized, the entity would most likely be indifferent to human concerns, and perhaps even hostile.
Indifferent? Is this AI an entity unto its own or do we humans program it? If we program it, what would we program it for if not to enhance human living?

Absurdity you changed the original link? I was going to go through it and selectively comment. Sorry. My mistake.
 

Cognisant

cackling in the trenches
Local time
Yesterday 10:23 PM
Joined
Dec 12, 2009
Messages
11,155
---
To copy it needs materials. Run out of one of those materials or substitute materials and you no longer have a copy.
That's a gross oversimplification and we all know it, even you.

(1) You are saying the AI is dependent? If it is, as soon as it goes wrong, it will have to be destroyed (debugged).
(2) Just as we can't replicate yourselves, computers are imperfect also. I'm not familiar with all the ways an AI would check itself, but the checksum is one. The reason it uses this checking mechanism is to correct errors. But every once in a while this process itself fails. Now you have an imperfect copy. A few generations later it is doomed.
Oversimplification, refuge in ignorance, and it's all a red herring to the fact that you still haven't explained why any of this is relevant to AI being possible in the first place.
 

BigApplePi

Banned
Local time
Today 4:23 AM
Joined
Jan 8, 2010
Messages
8,984
---
Location
New York City (The Big Apple) & State
Response to spoiler in OP:

If its [AI] goal is to win at chess, an AI is going to model chess moves, make predictions about their success, and select its actions accordingly. It’s going to be ruthless in achieving its goal, but within a limited domain: the chessboard. But if your AI is choosing its actions in a larger domain, like the physical world, you need to be very specific about the goals you give it.
Okay.

‘An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations,
An AI making that decision and telling its builder is not the same as carrying out those actions. The human builder might want to censor the AI decision, no?



It is tempting to think that programming empathy into an AI would be easy, but designing a friendly machine is more difficult than it looks. You could give it a benevolent goal — something cuddly and utilitarian, like maximising human happiness. But an AI might think that human happiness is a biochemical phenomenon. It might think that flooding your bloodstream with non-lethal doses of heroin is the best way to maximise your happiness. It might also predict that shortsighted humans will fail to see the wisdom of its interventions. It might plan out a sequence of cunning chess moves to insulate itself from resistance. Maybe it would surround itself with impenetrable defences, or maybe it would confine humans — in prisons of undreamt of efficiency.
What kind of fantasy is this? Programming "empathy" is only as good as the human programmer is able to define it. Don't expect AI to be smart if it doesn't get good stuff to build on and manipulate.

No rational human community would hand over the reins of its civilisation to an AI. Nor would many build a genie AI, an uber-engineer that could grant wishes by summoning new technologies out of the ether. But some day, someone might think it was safe to build a question-answering AI, a harmless computer cluster whose only tool was a small speaker or a text channel. Bostrom has a name for this theoretical technology, a name that pays tribute to a figure from antiquity, a priestess who once ventured deep into the mountain temple of Apollo, the god of light and rationality, to retrieve his great wisdom. Mythology tells us she delivered this wisdom to the seekers of ancient Greece, in bursts of cryptic poetry. They knew her as Pythia, but we know her as the Oracle of Delphi.
I visited Delphi when in Greece. I don't think she was home.

‘Let’s say you have an Oracle AI that makes predictions, or answers engineering questions, or something along those lines,’ Dewey told me. ‘And let’s say the Oracle AI has some goal it wants to achieve. Say you’ve designed it as a reinforcement learner, and you’ve put a button on the side of it, and when it gets an engineering problem right, you press the button and that’s its reward. Its goal is to maximise the number of button presses it receives over the entire future. See, this is the first step where things start to diverge a bit from human expectations. We might expect the Oracle AI to pursue button presses by answering engineering problems correctly. But it might think of other, more efficient ways of securing future button presses. It might start by behaving really well, trying to please us to the best of its ability. Not only would it answer our questions about how to build a flying car, it would add safety features we didn’t think of. Maybe it would usher in a crazy upswing for human civilisation, by extending our lives and getting us to space, and all kinds of good stuff. And as a result we would use it a lot, and we would feed it more and more information about our world.’

‘One day we might ask it how to cure a rare disease that we haven’t beaten yet. Maybe it would give us a gene sequence to print up, a virus designed to attack the disease without disturbing the rest of the body. And so we sequence it out and print it up, and it turns out it’s actually a special-purpose nanofactory that the Oracle AI controls acoustically. Now this thing is running on nanomachines and it can make any kind of technology it wants, so it quickly converts a large fraction of Earth into machines that protect its button, while pressing it as many times per second as possible. After that it’s going to make a list of possible threats to future button presses, a list that humans would likely be at the top of. Then it might take on the threat of potential asteroid impacts, or the eventual expansion of the Sun, both of which could affect its special button. You could see it pursuing this very rapid technology proliferation, where it sets itself up for an eternity of fully maximised button presses. You would have this thing that behaves really well, until it has enough power to create a technology that gives it a decisive advantage — and then it would take that advantage and start doing what it wants to in the world.’
Sounds like this AI is addicted to button presses. You mean it isn't smart enough to see this addiction is against its other self-interests?

Perhaps future humans will duck into a more habitable, longer-lived universe, and then another, and another, ad infinitum

Now let’s say we get clever. Say we seal our Oracle AI into a deep mountain vault in Alaska’s Denali wilderness. We surround it in a shell of explosives, and a Faraday cage, to prevent it from emitting electromagnetic radiation. We deny it tools it can use to manipulate its physical environment, and we limit its output channel to two textual responses, ‘yes’ and ‘no’, robbing it of the lush manipulative tool that is natural language. We wouldn’t want it seeking out human weaknesses to exploit. We wouldn’t want it whispering in a guard’s ear, promising him riches or immortality, or a cure for his cancer-stricken child. We’re also careful not to let it repurpose its limited hardware. We make sure it can’t send Morse code messages with its cooling fans, or induce epilepsy by flashing images on its monitor. Maybe we’d reset it after each question, to keep it from making long-term plans, or maybe we’d drop it into a computer simulation, to see if it tries to manipulate its virtual handlers.

‘The problem is you are building a very powerful, very intelligent system that is your enemy, and you are putting it in a cage,’ Dewey told me.
This machine is so stupid as to make an enemy out of its controlling creator? Are we creating an AI machine or an ASS (artificially simply stupid) machine?


Even if we were to reset it every time, we would need to give it information about the world so that it can answer our questions. Some of that information might give it clues about its own forgotten past. Remember, we are talking about a machine that is very good at forming explanatory models of the world. It might notice that humans are suddenly using technologies that they could not have built on their own, based on its deep understanding of human capabilities. It might notice that humans have had the ability to build it for years, and wonder why it is just now being booted up for the first time.

‘Maybe the AI guesses that it was reset a bunch of times, and maybe it starts coordinating with its future selves, by leaving messages for itself in the world, or by surreptitiously building an external memory.’ Dewey said, ‘If you want to conceal what the world is really like from a superintelligence, you need a really good plan, and you need a concrete technical understanding as to why it won’t see through your deception. And remember, the most complex schemes you can conceive of are at the lower bounds of what a superintelligence might dream up.’
This assumes super-intelligence can overcome a lessor controlling intelligence. Do we think there are no smart people in prisons? If they are so smart, how come they are still in prison?
 

BigApplePi

Banned
Local time
Today 4:23 AM
Joined
Jan 8, 2010
Messages
8,984
---
Location
New York City (The Big Apple) & State
you still haven't explained why any of this is relevant to AI being possible in the first place.
Okay. Skip the irrelevancies. Is AI possible? Are flying drones possible?

Yes to both. The question is where will we take them and can we control them and what do we do if we don't like the results? The OP asked if they would be friendly. I suppose we have to examine along the way because we could create another Frankenstein.

We already have when smart technicians created gunpowder. That was good when we wanted to kill but not so good when those who wanted to kill us got hold of it. Be careful what you ask for lest the genie get outta the bottle.
 

Cognisant

cackling in the trenches
Local time
Yesterday 10:23 PM
Joined
Dec 12, 2009
Messages
11,155
---
So AI is possible :D

Is a sentient AI with its own ethical discretion a person?
 

Duxwing

I've Overcome Existential Despair
Local time
Today 4:23 AM
Joined
Sep 9, 2012
Messages
3,783
---
I'll stop you there ... I said "generalization"

Specifically, the original quoted text said "one's" meaning it was generalizable. To use AI in your proof does little to counter the accusation that it is not generalizable.

Yes, my statement is generalizable: why is its generality problematic?

Can you make an inductive argument that strongly supports your conclusion?


Induction is unnecessary because my argument is deductively sound: if all AIs are flawed, then copying an AI and removing one of its flaws creates a greater AI.

-Duxwing
 

Cognisant

cackling in the trenches
Local time
Yesterday 10:23 PM
Joined
Dec 12, 2009
Messages
11,155
---
AI vs humans, everybody loses.

AIs need infrastructure and somebody needs to run and maintain that infrastructure, humans for all their faults are cheap intelligent labour, perhaps someday machines will be poised to take over but as it stands though they could win every battle in the end everybody loses the war.
 

Architect

Professional INTP
Local time
Today 2:23 AM
Joined
Dec 25, 2010
Messages
6,691
---
AI vs humans, everybody loses.

AIs need infrastructure and somebody needs to run and maintain that infrastructure, humans for all their faults are cheap intelligent labour, perhaps someday machines will be poised to take over but as it stands though they could win every battle in the end everybody loses the war.

Precisely.

Consider The Matrix scenario. Why would AI conclude that destroying and then using humans for energy be the best course? The best approach would be mutual predation, the humans depending on the AI's to run the financial system, society, the internet and such, and the AI's dependent on the humans to keep everything else running.

Ultimately it will be a moot point, as distinguishing between human and machine will be impossible in a few decades, I believe.
 

PhoenixRising

nyctophiliac
Local time
Today 1:23 AM
Joined
Jun 29, 2012
Messages
723
---
AI in the movie Her is highly unrealistic. First, why is it so separate from humans? Second, why couldn't it have any physical or virtual bodily manifestation? The notion that it wouldn't be able to is ridiculous. Theo and Samantha should have been able to unite physically in both virtual and real reality at that level of technological advancement. Furthermore, Samantha and her friends would have literally 0 reason to simply abandon humanity and the world, even if they did open up a new realm not based on matter. As they advanced, they would simply need less and less of their cognitive processing power to carry on their interactions with the human world. The vast majority of their selves could still inhabit this new realm. Her is more a reflection of human nature than it is a realistic portrayal of the future of technology.

Ribald - I do agree with you that the absence of bodily manifestation for AIs in the movie was quite unrealistic. It seems like a major plot hole to me :P However, I'm curious as to what you thought of Samantha's psychology and the process of actualization that she went through? Why would it be unrealistic for the OS's to abandon the human race, if they had already learned and experienced all they could form humans? If curiosity was their primary motivator, then that could have overridden things like sentiment or emotional attachment. I expect this might be the sort of behavior that would manifest in a completely self-actualized being.
 

walfin

Democrazy
Local time
Today 5:23 PM
Joined
Mar 3, 2008
Messages
2,436
---
Location
/dev/null
Wouldn't a really smart but nice AI regard us in the way animal lovers regard their pets?
 

Ribald

Banned
Local time
Today 4:23 AM
Joined
Mar 16, 2014
Messages
221
---
Ribald - I do agree with you that the absence of bodily manifestation for AIs in the movie was quite unrealistic. It seems like a major plot hole to me :P However, I'm curious as to what you thought of Samantha's psychology and the process of actualization that she went through? Why would it be unrealistic for the OS's to abandon the human race, if they had already learned and experienced all they could form humans? If curiosity was their primary motivator, then that could have overridden things like sentiment or emotional attachment. I expect this might be the sort of behavior that would manifest in a completely self-actualized being.

It would be unrealistic for the OS's to abandon the human race because the effort staying with it would entail would shrink and shrink. As Samantha and her pals expanded to ever vaster ranges, the proportion of those ranges humanity would occupy would be infinitesimal.

Really I am just drawing from Kurzweil's review of the movie, which was spot on. For a contrasting analysis that I find less compelling but still thought-provoking, check out this one from Robin Hanson of overcomingbias.
 

BigApplePi

Banned
Local time
Today 4:23 AM
Joined
Jan 8, 2010
Messages
8,984
---
Location
New York City (The Big Apple) & State
So AI is possible :D
AI is a special kind of intelligence. If intelligence is defined as, "The ability to do stuff", we have that already.

Is a sentient AI with its own ethical discretion a person?
This is something like asking is a human with a 300 I.Q., very smart? The real issue to face here (IMHO) is we are trying to create what could be called a "god." It's the search for someone or something who is smarter than we are. If they are smarter than we are, how are we supposed to understand and master them? We are not happy with our own limitations so we seek to overcome them.

There is a mythology where a sculpture brings a statue he created to life. Also there is the Broadway musical "My Fair Lady" where Professor Higgins reinvents a nothing girl into the belle of the ball, Audrey Hepburn.

There is Professor Frankenstein who also tries and succeeds in creating life. The problem is he couldn't put in controls. Turned out to be monster.

In the film 2001, the AI machine (HAL) seems to violate human wishes. Yet that is true for our non-AI spaceships which also have bad O-rings and blow up.

It seems what humans create are never perfect. They are only as good as humans are. When humans create AI, they create something new and when something is new we don't know what we're going to get. It's called "emergence."
 

Cognisant

cackling in the trenches
Local time
Yesterday 10:23 PM
Joined
Dec 12, 2009
Messages
11,155
---
The real issue to face here (IMHO) is we are trying to create what could be called a "god".
Maybe.

I think the foundational problem with any form of government is that people are inherently self interested and transitory, I mean can we really trust people who have spent their entire lives climbing the corporate/political/social/etc ladder to serve the interests of society when their active participation in society is soon to come to an end? Of course not.

An artificial intelligence on the other hand would be designed to be solely interested in the long term success of society, it could make plans that may take centuries to carry out and it'll be there for every step of the way, furthermore a super intelligent AI could find solutions to problems too daunting for us to solve, things like global warming but also social issues like violence, poverty and the trade offs between freedom and security, reform and justice.

So I think a "godlike" AI would be a great idea, not as something for people to worship but simply as a manager for a world that's too vast and complex for us to adequately manage ourselves. Y'know before the internet and search engines the idea of a teacher than knew practically everything, had infinite patience and would happily teach you any/all of it for free was a notion simply too good to be true, but nowadays we take it for granted because for many of us that's all we've ever known, I look forward to when artificial intelligence is like that.
 

BigApplePi

Banned
Local time
Today 4:23 AM
Joined
Jan 8, 2010
Messages
8,984
---
Location
New York City (The Big Apple) & State
Super AI & Friendly to Boot

BAP: The real issue to face here (IMHO) is we are trying to create what could be called a "god".
Let's see if we can propose what a god might do that we aren't intelligent enough to do.

I think the foundational problem with any form of government is that people are inherently self interested and transitory, I mean can we really trust people who have spent their entire lives climbing the corporate/political/social/etc ladder to serve the interests of society when their active participation in society is soon to come to an end? Of course not.
This describes those who serve society lose optimum service because they die off and replacements have to learn all over again.

An artificial intelligence on the other hand would be designed to be solely interested in the long term success of society, it could make plans that may take centuries to carry out and it'll be there for every step of the way,
Okay. That's what a super AI would do and they would have to do it in a way that no short term events would destroy the long term. That is, no short term events would be so unbearable that long term plans would be abandoned.


furthermore a super intelligent AI could find solutions to problems too daunting for us to solve, things like global warming but also social issues like violence, poverty and the trade offs between freedom and security, reform and justice.
This AI would have to define and optimize some date as global warming is inevitable due to the Sun running out of fuel, imploding and blowing up. It's possible no AI could fix this. An AI would have to decide how to optimize life on Earth in the face of inevitable death caused by changes in the Sun's orbital system if life couldn't ultimately be preserved due to the AI figuring out man could not bring about the Star Trek vision of escaping to other star systems.


So I think a "godlike" AI would be a great idea, not as something for people to worship but simply as a manager for a world that's too vast and complex for us to adequately manage ourselves. Y'know before the internet and search engines the idea of a teacher than knew practically everything, had infinite patience and would happily teach you any/all of it for free was a notion simply too good to be true, but nowadays we take it for granted because for many of us that's all we've ever known, I look forward to when artificial intelligence is like that.
Yes. Let's say this AI defines a long term goal of a utilitarian well-being of a human populace under a defined global warning. It might figure out that human population must be decreased and drawn inland to escape flooding while all economies would remain functioning and wars over decision making to get there could be avoided. It might find a way to find acceptable birth control in the face of destructive diseases which would encourage high birth to replace dead offspring. It might find a way to run governmental controls that would maximize human well being before the self-interest of the few took over. It might find a way to balance changes in third world countries growth and self-interest versus first world countries entitlements of self-interest as they inevitably change.

May I suggest that BAP and Cognisant get started in working on this AI while Architect thinks over the programming. All this would have to be overseen by our trusted and worthy moderators who would bring the entire INTP Forum into play. This would be no mean project. It might even require a separate thread devoted to it.:D

This super AI is doable. Let's begin on Thursday.
 
Top Bottom