Transhumanism: Preventing Hell on Earth.

Agent Intellect

Absurd Anti-hero.
I've been reading a book called "Radical Evolution" that lays out three scenarios about what may happen when the 'curve' of technological progress approaches the singularity. The first scenario is called Heaven, which has been discussed several other times on this forum. The second is called Hell, which is a popular theme in movies.

(I haven't gotten to the third scenario yet).

Anyway, I'm wondering if any other futurist or transhumanist enthusiasts have had any ideas for averting the Hell scenario. A quick rundown, using the GRIN framework (Genetics, Robotics, Intelligence, Nanotechnology), or as Kurzweil put it, GNR (Genetics, Nanotech, Robotics), since robotics and intelligence are basically the same thing:

Genetics:
The creation of a biological weapon that could potentially wipe out a large fraction of the human race (if not all of it). The book mentions a virus in Australia that infects mice, where adding a single gene made it 100% lethal, even to mice that had been vaccinated. Essentially, it would be easy for anyone with a decent level of knowledge in molecular biology to create a pathogen that could wreak untold havoc.

Nanotech:
Read about grey goo. Even if someone couldn't create their own self-replicating nanotech, it could be possible to infect existing self-replicating nanotech with computer viruses that cause it to replicate uncontrollably. Even if a grey goo scenario never came about, if humans have nanotech within their bodies (for medical benefits), any malfunction (or infection with a computer virus) could cause these nanorobots to harm the host.

Robotics/Intelligence:
Watch movies like The Terminator or The Matrix or I, Robot (or read the Ted Kaczynski manifesto). Essentially, the technology we create ends up owning us. Even if the strong AI wasn't violent, we might end up living on reservations, nothing more than pets for our robotic overlords.



The first two scenarios depend on something we have an entire history's worth of empirical evidence to support: human nature. Let's face it, there are plenty of people who would cause these scenarios to occur out of stupidity or ignorance, out of malice and hatred, or because they think they're doing the world a favor. The third scenario is a lot more up in the air, as it's essentially borne of our ignorance of what the machines will be like.

Another problem is that, as most of you probably know, technology does not always work the way we want it to (even 'tried and true' technology), and new, untested technology can have unforeseen consequences beyond even what is predicted in the Hell scenario.

So, the problem is, if a transhuman future is inevitable, or preferable, what could be done to prevent scenarios like this? Both optimists (Ray Kurzweil) and pessimists (Bill Joy) think that a singularity-like future is inevitable. While halting all progress, or even just progress in the fields most susceptible to abuse, would fix all of these, I don't think it's a viable option - you could never stop everyone from pursuing these advances.

Off the top of my head, here are a few things that could get ideas rolling:

1. Colonize space so that there are "backup" places to live in the case of the genetic/nanotech scenarios (not really a prevention, I guess).
2. More "aggressively" spread education to developing and third world nations (as well as the impoverished in developed nations). Empirically speaking, more educated people are usually not as violent as uneducated people.
3. Checks and balances: each part of GNR has control over the other two parts (as well as humans having control - a kill switch - over all three). For example, non-pathogenic bacteria that feed on nanotech could be introduced (and vice versa), as well as giving multiple AIs access to the nanotech network with their own kill-switches (a rough sketch of this idea follows the list).
4. World tolerance. World peace is an unfeasible pipe-dream, but if there were a way to get nations to tolerate each other, it would at least keep governments from pursuing mutual destruction. How this could be done is very tricky, but I think worldwide free-market commerce would be a place to start - who would want to kill off potential 'customers'?
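To make item 3 slightly more concrete, here's a minimal sketch of what a mutual kill-switch registry might look like - the class names, interfaces, and example triggers are all illustrative assumptions, not a real control protocol:

```python
# Hypothetical sketch of item 3: a registry in which each GNR subsystem (and a
# human operator) can shut down any of the others. Purely illustrative.

class Subsystem:
    def __init__(self, name):
        self.name = name
        self.running = True

    def kill(self, reason):
        if self.running:
            self.running = False
            print(f"[kill-switch] {self.name} shut down: {reason}")


class ControlNetwork:
    """Registry of subsystems; any registered controller may trigger a shutdown."""

    def __init__(self):
        self.subsystems = {}

    def register(self, subsystem):
        self.subsystems[subsystem.name] = subsystem

    def trigger(self, controller, target_name, reason):
        target = self.subsystems.get(target_name)
        if target is not None:
            target.kill(f"requested by {controller}: {reason}")

    def human_master_switch(self, reason):
        # The human override kills everything at once.
        for subsystem in self.subsystems.values():
            subsystem.kill(f"human override: {reason}")


if __name__ == "__main__":
    net = ControlNetwork()
    for name in ("genetics", "nanotech", "ai"):
        net.register(Subsystem(name))

    # e.g. the AI detects runaway replication and shuts down the nanotech,
    net.trigger("ai", "nanotech", "uncontrolled replication detected")
    # and the human operator can always pull the plug on everything.
    net.human_master_switch("precautionary shutdown")
```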
 

Jordan~

Prolific Member
Genetics:
And equally easy for someone else to engineer a solution, I suppose? How distant into the post-singularity future are we talking? Far enough and you wouldn't really expect viruses to be a problem. I'd emphasise the importance of sloughing off the "genetics" bit asap, it's not very good. Hijack biological processes as a delivery mechanism for better, artificial, non-organic cells, but I don't think there's much point in seeing how far we can stretch flesh and bone. Probably not much further than it's been stretched in our natural bodies.

Nanotech:
The implementation would take account of the risks, surely? Especially in a world where AI manages risks very carefully. E.g. nanomedicines would presumably do their job and break down harmlessly, or alter an existing organ to improve its functioning, etc. rather than staying in the body for a long time - more like drugs than tiny robots that live inside you, but drugs that can be programmed. The Royal Society dismissed grey goo as an impossible scenario. We probably will want self-replicating nanomachines for huge engineering projects eventually (turning planets into computers, Dyson spheres, etc.), but by then we're likely to be able to prevent the (actually quite limited) possibility of a grey goo scenario.

Robotics:
In Terminator, The Matrix, I, Robot and Industrial Society and Its Future, technology takes over because it's better than humans and humans didn't merge with it. If we transcend humanity and become our own technology, it doesn't take over, it just converges with us. We'd be idiots to make things better than us without actually becoming those things, I think. Rather like a colony of ants building a human rather than working out how to make themselves into humans. Careful programming and boundaries imposed on the range of thoughts an AI can have to deliberately preclude certain possibilities, or an external, unintelligent machine that checks for certain thoughts and shuts the AI down if it has them, could probably keep that in check anyway, but why make awesome robots when we could be awesome robots?
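For what it's worth, here's a toy sketch of the kind of external, unintelligent monitor described above - one that checks an AI's proposed actions against a deny-list and shuts it down on a match. The pattern list and interfaces are purely illustrative assumptions:

```python
# Toy sketch of a dumb external monitor: it knows nothing about the AI's goals,
# it just scans proposed actions for forbidden patterns and pulls the plug.
# The patterns and the shutdown hook are illustrative assumptions.

FORBIDDEN_PATTERNS = ["disable oversight", "self-replicate", "acquire weapons"]

def monitor(proposed_action: str, shutdown) -> bool:
    """Return True if the action is allowed; otherwise trigger shutdown."""
    lowered = proposed_action.lower()
    for pattern in FORBIDDEN_PATTERNS:
        if pattern in lowered:
            shutdown(f"forbidden pattern detected: {pattern!r}")
            return False
    return True

if __name__ == "__main__":
    def shutdown(reason):
        print(f"[monitor] AI halted: {reason}")

    monitor("optimise the power grid schedule", shutdown)    # allowed
    monitor("self-replicate onto unused servers", shutdown)  # blocked
```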

I don't see how any of these scenarios differ from the once-absolute-certainty of nuclear armageddon. I'm pretty sure we won't wipe ourselves out because, well, we're not that stupid. We've already had the ability to do so for quite a while and no one's done it yet, and we've actually gotten past the point where it looks likely that anyone will. So I'm not sure that human nature would lead us to oblivion. We can handle nukes, we can handle anything else - one form of total destruction is just as total as any other.

I wonder if when the first steel sword was forged, someone muttered darkly, "You'll destroy the world, you know..." Yes, our destructive capacity increases, but only at the same pace as our ability to keep it in check.

1. We may as well colonise the rest of the solar system once it's within our capabilities, anyway. At some point we'll have to think about moving to another one somehow, or rigging Earth up to be portable.
2. 'Uplifting' programs would be our responsibility, anyway. The transhuman future can't be exclusively for rich white people. To me, practical advantages would be incidental to our moral duty to ensure the universality of our technological advantages.
3. In a world with advanced AI, this doesn't seem like it'd be difficult to arrange.
4. I think post-scarcity war is very unlikely ever to occur, and that posthumans won't really have any need of society, and consequently of politics and government. Anarchy is just sort of implicit when every individual is wholly self-sufficient. Society will still probably be engaged in for, well, fun, sentiment and nostalgia, and for the advantages of cloud computing, but it won't actually be required. I'm also confident that world peace can be achieved by humans by cultural means. Globalisation - sadly, in a way, as diversity is fascinating - seems unstoppable; there should perhaps be an impetus to ensure that the global culture that emerges is one that abhors violence of any sort.
 

Artsu Tharaz

The Lamb
Robotics/Intelligence:
Watch movies like The Terminator or The Matrix or I, Robot (or read the Ted Kaczynski manifesto). Essentially, the technology we create ends up owning us. Even if the strong AI wasn't violent, we might end up living on reservations, nothing more than pets for our robotic overlords.

I don't remember more than a brief mention of this in Ted's manifesto (at least in the literal way you are describing it). What he spoke out against are things that have been around for a long time already, e.g. ecocide, boredom and depression resulting from the removal of the power process, and the invention of neurotic values resulting from this.
 

Black Rose

An unbreakable bond
I'm betting on post-scarcity.

By the late 2020s, nanofactories will have eliminated most manufacturing jobs while supplying overabundance.
 

Agent Intellect

Absurd Anti-hero.
Genetics:
And equally easy for someone else to engineer a solution, I suppose?

Assuming a solution could be engineered quickly enough. It took a while for a swine flu vaccine to be engineered - imagine if it had spread more quickly and been more lethal. The logistics of containing a pandemic are colossal, and we haven't even faced a bad one yet in the genetic age.

How distant into the post-singularity future are we talking?

For genetic technology, I would say quite early on. I consider this to be a threat right now, and one that will only grow as time goes on - especially now that it is becoming possible to create completely novel organisms.

Far enough and you wouldn't really expect viruses to be a problem. I'd emphasise the importance of sloughing off the "genetics" bit asap, it's not very good. Hijack biological processes as a delivery mechanism for better, artificial, non-organic cells, but I don't think there's much point in seeing how far we can stretch flesh and bone. Probably not much further than it's been stretched in our natural bodies.

Getting through the age of genetic technology faster will reduce the probability of a biological disaster occurring for those who wish to augment themselves further, but that wouldn't help much with the 'unenhanced'.

Nanotech:
The implementation would take account of the risks, surely?

That seems overly optimistic. I'm sure the implementation of most new technology has attempted to take account of risks, but there are equally innovative people who will try to subvert such safety measures, as well as run-of-the-mill malfunctions and unforeseen 'bugs' that will pop up in the system. This also assumes that the engineers and scientists putting it together are not under time and funding pressure to get it done quickly rather than carefully.

In an ideal world, yes, there would be multiple redundancies as safety measures, hundreds of simulations and field tests, and peer review from experts in numerous fields. But a transhuman world is not going to be an ideal world, and the road to transcendence especially will not be traveled in one.

Especially in a world where AI manages risks very carefully. E.g. nanomedicines would presumably do their job and break down harmlessly, or alter an existing organ to improve its functioning, etc. rather than staying in the body for a long time - more like drugs than tiny robots that live inside you, but drugs that can be programmed. The Royal Society dismissed grey goo as an impossible scenario. We probably will want self-replicating nanomachines for huge engineering projects eventually (turning planets into computers, Dyson spheres, etc.), but by then we're likely to be able to prevent the (actually quite limited) possibility of a grey goo scenario.

Carefulness doesn't seem like much of a safety measure. Sure, it would help, but it's not very good at preventing anything. The biggest problem with a 'worldwide disaster' is that it's a "one strike and you're out" kind of scenario - no matter how careful things are, if something slips through, it's bad news. There is not little room for error, there is no room for error. This requires either that we can foresee every possible scenario, or that the technology is engineered in a way that makes it fundamentally error-proof. Even then, it does not address the problem of human nature.

Robotics:
In Terminator, The Matrix, I, Robot and Industrial Society and Its Future, technology takes over because it's better than humans and humans didn't merge with it. If we transcend humanity and become our own technology, it doesn't take over, it just converges with us. We'd be idiots to make things better than us without actually becoming those things, I think. Rather like a colony of ants building a human rather than working out how to make themselves into humans.

This would assume that we do merge with our technology before a strong AI is created. The problem is, merging our intelligence with technology requires an understanding of the human brain; one way to understand the human brain is to create functional artificial brains.

I would say that the likelihood of strong AI coming before heavy brain augmentation or mind uploading is very high.

Careful programming and boundaries imposed on the range of thoughts an AI can have to deliberately preclude certain possibilities, or an external, unintelligent machine that checks for certain thoughts and shuts the AI down if it has them, could probably keep that in check anyway, but why make awesome robots when we could be awesome robots?

This becomes more philosophical. If a strong AI is a conscious, thinking entity, is it right for us to police its thoughts? If we are to do this, should we also police the thoughts of highly augmented minds, or of minds uploaded from people who used to be organic? Is it even possible to do so - are minds (especially highly advanced minds) not adaptable enough to bypass such safety measures?

I don't see how any of these scenarios differ from the once-absolute-certainty of nuclear armageddon.

Nuclear armageddon is still a threat (although perhaps not as much as during the Cold War). One thing that has prevented it thus far is that rogue states have not had access to such weapons for most of their existence - once again, the idea that educated people are generally less violent than uneducated people.

I'm pretty sure we won't wipe ourselves out because, well, we're not that stupid.

You are not that stupid, but that doesn't mean everyone is not that stupid. Especially when there are people in this world who think the end of the world is right around the corner, and that a harem of angels is already warming up a spot for them in paradise.

We've already had the ability to do so for quite a while and no one's done it yet, and we've actually gotten past the point where it looks likely that anyone will.

How do you figure?

So I'm not sure that human nature would lead us to oblivion. We can handle nukes, we can handle anything else - one form of total destruction is just as total as any other.

The problem is, nukes take a lot of money, a lot of time, and very delicate engineering to build. Something like a bio-weapon pandemic could be started by a molecular biology grad student with an angry disposition. Something like a nanotech disaster could be caused by a small oversight by a programmer, or by a very enthusiastic suicide bomber with wifi.

I wonder if when the first steel sword was forged, someone muttered darkly, "You'll destroy the world, you know..." Yes, our destructive capacity increases, but only at the same pace as our ability to keep it in check.

Don't get me wrong, I'm not usually one to be a doomsayer. In fact, I eagerly await a transhuman age. But, I'm also no fool, and I think issues should be raised and addressed before they become a problem. As Ben Franklin said, an ounce of prevention is worth a pound of cure.

1. We may as well colonise the rest of the solar system once it's within our capabilities, anyway. At some point we'll have to think about moving to another one somehow, or rigging Earth up to be portable.

Slightly off topic (but still interesting): what would be a good way to go about colonizing the solar system - especially in the near future, when such a measure could prevent the complete destruction of our species? Is the privatization of space flight in America good or bad for space travel and the further progress of space flight technology?

2. 'Uplifting' programs would be our responsibility, anyway. The transhuman future can't be exclusively for rich white people. To me, practical advantages would be incidental to our moral duty to ensure the universality of our technological advantages.

Not everyone is interested in upgrading, and not everyone is interested in being educated. I don't want to be the naysayer here, as I'm a huge proponent of spreading education (I think it would be the single best prevention measure for such disasters as stated above) but the logistics of such an undertaking are overwhelming to the point that, being realistic, it probably won't happen (at least it won't before any sort of singularity occurs).

3. In a world with advanced AI, this doesn't seem like it'd be difficult to arrange.

Once again, a small digression, but I'm curious as to what you think advanced AI could achieve (pre-singularity, but sufficiently advanced)?

4. I think post-scarcity war is very unlikely ever to occur, and that posthumans won't really have any need of society, and consequently of politics and government. Anarchy is just sort of implicit when every individual is wholly self-sufficient. Society will still probably be engaged in for, well, fun, sentiment and nostalgia, and for the advantages of cloud computing, but it won't actually be required.

Humans, by nature, are hierarchical organisms. Are you proposing that enhancement would eradicate this from our nature? Even in an anarchical world, would there still not be a drive for dominance and power from those who are slightly more advanced than others? Would transcendence eradicate our social nature, and therefore the power struggle that occurs within the social arena? Would there not still be commerce between individuals as new advances are constantly being made and traded for other advances?

There is also the fact that not everyone will become transhumans. Some will willingly opt out, for whatever moral, philosophical, or religious reasons they may have. And many others will not have the opportunity. Unless the entire world becomes wealthy and well connected in the next ten to twenty years, there will be plenty of people "left behind" - particularly in a capitalistic world.

I'm also confident that world peace can be achieved by humans by cultural means. Globalisation - sadly, in a way, as diversity is fascinating - seems unstoppable; there should perhaps be an impetus to ensure that the global culture that emerges is one that abhors violence of any sort.

How will world peace be achieved? Can it be achieved before the singularity? If not, how would post-humanism make achieving world peace any easier?
 

scorpiomover

The little professor
I'm not actually a futurist. But I can see why the problems exist. I've seen the same types of problems, from people who wear rose-tinted glasses. These types of problems usually come from people who just refuse point blank to acknowledge there are any potential dangers. Even if you CAN get them to acknowledge potential dangers, which is usually like dragging a crocodile covered in glue through sand, they start telling you why there is a solution which means that the problems will never occur. On the rare occasions when I've got them to admit that there is a potential danger, they act as if I just stabbed them. They deflate, and say something like "What else should we do? Just curl up and die?"

However, when I deal with people who currently have a positive attitude, but still currently have a realistic acceptance that some dangers do exist, things tend to work really well. Problems seem to vanish. They occur. But then someone says "We COULD do this", and we have a look, and when we try it, it usually works out quite well. Even when it doesn't work at all, which seems to be rare in these cases, people say "Oh, but we had a good time", or, "At least we tried it, and we know that it doesn't work."

So I would say that, on average, if you want to have a good future, take a realistically optimistic attitude to life. Think positively about what HAS been achieved, not what will be. Think of most problems as having an unexpected solution. Acknowledge that some problems won't have a solution, but that's not usually the end of the world, and if it is, then you're dead anyway.

The point is, think more positively about the past, and be more open to the unexpected about the future. That seems to work best, at least in my experience.

So I guess that I would say, to avoid the potential problems, think about all the wonders that ancient technology gave us, and modern technology. But be open to the future. Technology may be our future. Or, we may discover that it has too many problems, and a different way is better. Or, we may discover that some technology is good, but not a lot. We don't know. Stay open to all those possibilities. Don't bank on one. Just stay open.

We can hope that technology is our future. We can pretend that it is. But if it isn't, that's an awful bitter pill to swallow, to have to live with the fact that you'll never get what you want. I'd rather enjoy life.
 

Agent Intellect

Absurd Anti-hero.
I'm not actually a futurist. But I can see why the problems exist. I've seen the same types of problems, from people who wear rose-tinted glasses. These types of problems usually come from people who just refuse point blank to acknowledge there are any potential dangers. Even if you CAN get them to acknowledge potential dangers, which is usually like dragging a crocodile covered in glue through sand, they start telling you why there is a solution which means that the problems will never occur. On the rare occasions when I've got them to admit that there is a potential danger, they act as if I just stabbed them. They deflate, and say something like "What else should we do? Just curl up and die?"

This is my fear. I think a lot of people who buy into the 'heaven' scenario have a religious zealotry for their vision of paradise, a conviction so strong that possible problems and logistics don't faze them. I think transhumanism is, in a very real sense, a religion. I can't deny that I have a hope for a transhumanist future that isn't completely unlike a Christian's hope of heaven.

But, I try to remain practical. I'm very skeptical about some of the claims that proponents of the heaven scenario make, particularly the idea of uploading the mind into a computer (like software). But I also think (or, perhaps, know) that technology will not solve all of our problems, and will in fact bring about new challenges. That's fine. I'm not looking for heaven on earth - the search for 'heaven' (or maybe for an atheist and a transhumanist, complete understanding) is much more rewarding to me, anyway.

However, when I deal with people who currently have a positive attitude, but still currently have a realistic acceptance that some dangers do exist, things tend to work really well. Problems seem to vanish. They occur. But then someone says "We COULD do this", and we have a look, and when we try it, it usually works out quite well. Even when it doesn't work at all, which seems to be rare in these cases, people say "Oh, but we had a good time", or, "At least we tried it, and we know that it doesn't work."

Problem solving and dreaming are very different things.

I will admit that I'm much more of a dreamer than a problem solver. I like to imagine how things could be, and don't often get down to the nuts and bolts of a problem. But the fact that problems in need of fixing do exist is not lost on me. I've always liked pitching topics like this on this forum, because other people have ideas and points of view that I would never think of on my own. Unfortunately, most of the time the response is either absent or a simple assertion that what I'm talking about is a non-problem.

So I would say that, on average, if you want to have a good future, take a realistically optimistic attitude to life. Think positively about what HAS been achieved, not what will be. Think of most problems as having an unexpected solution. Acknowledge that some problems won't have a solution, but that's not usually the end of the world, and if it is, then you're dead anyway.

The point is, think more positively about the past, and be more open to the unexpected about the future. That seems to work best, at least in my experience.

I think a lot of transhumanists (and proponents of anything, really) get an idea of what the future will be like in their heads, and then fit reality into that worldview. Moore's Law and its extrapolations are the inerrant doctrine of transhumanism, and whether one sees the future as heaven or hell, any opposing notions are dismissed as inconsequential and insignificant.

So I guess that I would say, to avoid the potential problems, think about all the wonders that ancient technology gave us, and modern technology. But be open to the future. Technology may be our future. Or, we may discover that it has too many problems, and a different way is better. Or, we may discover that some technology is good, but not a lot. We don't know. Stay open to all those possibilities. Don't bank on one. Just stay open.

Banking is precisely what I don't want to do. Ray Kurzweil loves making various predictions, and I think there are a lot of people who treat technological progress as an almost natural and unavoidable force. These predictions only look at 'the curve' (Moore's Law) and don't seem to take into account the logistics of progress, nor the ramifications of certain advances.
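As a toy illustration of what 'only looking at the curve' means, here's the kind of naive extrapolation those predictions rest on - a fixed doubling period and nothing else. The starting figure and doubling time are rough assumptions, and the fact that nothing else enters the calculation is exactly the problem:

```python
# Curve-only extrapolation: take a starting transistor count and a doubling
# period, and project forward with no regard for physics, cost, or the
# logistics of progress. Both parameters are rough assumptions.

def extrapolate(start_count: float, doubling_years: float, years_ahead: int) -> float:
    return start_count * 2 ** (years_ahead / doubling_years)

if __name__ == "__main__":
    start = 2.6e9  # roughly a 2011-era high-end CPU transistor count
    for years in (10, 20, 40):
        projected = extrapolate(start, 2.0, years)
        print(f"+{years} years: ~{projected:.2e} transistors (if nothing intervenes)")
```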
 

mr.cave

Redshirt
Why are we all so convinced that AI will have bad intentions for the human race?

What does AI stand to benefit from eradicating the human race rather than coexisting with it?

People seem to think that because people tend to do bad/stupid things, and because people will (probably) create AI, AI will inherit our self-destructive tendencies.

The point of AI is that it can think for itself.
 

Cognisant

cackling in the trenches
Anyway, when creating AI we will be able to set its psychological drives and impulses, which will be the foundation upon which its thoughts and behaviours develop (a toy sketch of this idea follows below). So AI will only be a threat to us in one of two ways: the first is the possibility of some nation's military intentionally building a hyperintelligent sociopath; the second is that industrial AIs will simply out-compete us. That's what I'm most worried about - in a business environment only the most efficient processes survive, and that practically dictates that you replace as many humans as you possibly can.

Then what do you do with a world full of people that have no purpose?
Rather, what will they do?
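As a toy sketch of what 'setting psychological drives' might amount to, imagine behaviour selection weighted by a fixed set of designed-in drives - the drive names, weights, and scores below are purely illustrative assumptions:

```python
# Toy sketch: designed-in drives act as fixed weights over candidate behaviours,
# and the agent picks whatever scores highest. Everything here is illustrative.

DRIVE_WEIGHTS = {"cooperate_with_humans": 0.6, "self_preservation": 0.1, "efficiency": 0.3}

def choose_action(candidates):
    """candidates: dict mapping action name -> per-drive scores in [0, 1]."""
    def utility(scores):
        return sum(DRIVE_WEIGHTS[d] * scores.get(d, 0.0) for d in DRIVE_WEIGHTS)
    return max(candidates, key=lambda name: utility(candidates[name]))

if __name__ == "__main__":
    candidates = {
        "automate the factory floor": {"efficiency": 0.9, "cooperate_with_humans": 0.4},
        "retrain displaced workers":  {"efficiency": 0.3, "cooperate_with_humans": 0.9},
    }
    print(choose_action(candidates))  # -> "retrain displaced workers" with these weights
```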
 

Cognisant

cackling in the trenches
I opened with "anyway" in the last post because I had written more before it, not because I was disregarding mr.cave's post.
 