I've been reading a book called "Radical Evolution" by Joel Garreau that lays out three scenarios for what may happen as the 'curve' of technological progress approaches the singularity. The first scenario is called Heaven, which has already been discussed several times on this forum. The second is called Hell, which is a popular theme in movies.
(I haven't gotten to the third scenario yet).
Anyway, I'm wondering whether any other futurist or transhumanist enthusiasts have ideas for averting the Hell scenario. A quick rundown, using the GRIN framework (Genetic, Robotic, Intelligence, Nanotechnology), or GNR (Genetic, Nanotech, Robotics) as Kurzweil puts it, since Robotics and Intelligence are basically the same thing:
Genetics:
The creation of a biological weapon that could potentially wipe out a large fraction of the human race (if not all of it). The book mentions a virus in Australia that infects mice, in which adding a single gene made it 100% lethal, even to mice that had been vaccinated. Essentially, anyone with decent knowledge of molecular biology could create a pathogen capable of wreaking untold havoc.
Nanotech:
Read about grey goo. Even if someone couldn't create their own self-replicating nanotech, it might be possible to infect existing self-replicating nanotech with a virus that causes it to replicate uncontrollably. And even short of a grey goo scenario, if humans have nanotech inside their bodies (for medical benefits), any malfunction (or infection by a computer virus) could cause those nanorobots to harm the host.
Robotics/Intelligence:
Watch movies like The Terminator, The Matrix, or I, Robot (or read Ted Kaczynski's manifesto). Essentially, the technology we create ends up owning us. Even if a strong AI weren't violent, we might end up living on reservations, nothing more than pets for our robotic overlords.
The first two scenarios depend on something we have an entire history of empirical evidence to support: human nature. Let's face it, there are plenty of people who would cause these scenarios to occur out of stupidity or ignorance, out of malice and hatred, or because they think they're doing the world a favor. The third scenario is a lot more up in the air, since it's essentially born of our ignorance about what the machines will be like.
Another problem is that, as most of you probably know, technology does not always work the way we want it to (even 'tried and true' technology), and new, untested technology can have unforeseen consequences beyond even what the Hell scenario predicts.
So the problem is: if a transhuman future is inevitable, or even just preferable, what could be done to prevent scenarios like these? Both optimists (Ray Kurzweil) and pessimists (Bill Joy) think a singularity-like future is inevitable. Halting all progress, or even just the fields most susceptible to abuse, would address all of these, but I don't think it's a viable option - you could never stop everyone from pursuing these advances.
Off the top of my head, here are a few things that could get ideas rolling:
1. Colonize space so that there are "backup" places to live in the case of the genetic/nanotech scenarios (not really a prevention, I guess).
2. More "aggressively" spread education to developing and third world nations (as well as the impoverished in developed nations). Empirically speaking, more educated people are usually not as violent as uneducated people.
3. Checks and balances: each part of GNR has some control over the other two (and humans hold a kill switch over all three). For example, non-pathogenic bacteria that feed on nanotech could be introduced (and vice versa), and multiple AIs could be given access to the nanotech network, each with its own kill switch. A rough software sketch of what such mutual kill switches might look like follows this list.
4. World tolerance. World peace is an unfeasible pipe dream, but if there were a way to get nations to tolerate each other, it would at least steer governments away from mutually assured destruction. How this could be done is very tricky, but I think worldwide free-market commerce would be a place to start - who would want to kill off potential 'customers'?
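Just to make point 3 a little more concrete, here is a minimal, purely illustrative sketch of the checks-and-balances idea: three hypothetical controllers (genetics, nanotech, AI) each watch the others' status reports, a quorum of peers can halt a misbehaving one, and a human-held master kill switch overrides everything. All class and method names here are invented for the example - nothing like this exists today, obviously.

```python
# Illustrative only: mutual oversight plus a human master kill switch.
from dataclasses import dataclass, field


@dataclass
class Subsystem:
    name: str
    running: bool = True
    anomaly: bool = False          # set True when the subsystem misbehaves
    halt_votes: set = field(default_factory=set)

    def report_status(self) -> dict:
        return {"name": self.name, "running": self.running, "anomaly": self.anomaly}

    def receive_halt_vote(self, voter: str, quorum: int) -> None:
        self.halt_votes.add(voter)
        # No single watcher can shut a peer down on its own; it takes a quorum.
        if len(self.halt_votes) >= quorum:
            self.running = False


class Overseer:
    """Human-controlled layer holding the master kill switch over all three."""

    def __init__(self, subsystems: list):
        self.subsystems = subsystems

    def cross_check(self) -> None:
        # Every running subsystem inspects every other subsystem's status
        # report and votes to halt any peer that reports an anomaly.
        for watcher in self.subsystems:
            if not watcher.running:
                continue
            for target in self.subsystems:
                if target is watcher:
                    continue
                if target.report_status()["anomaly"]:
                    target.receive_halt_vote(watcher.name, quorum=2)

    def master_kill(self) -> None:
        # The human override: shut everything down unconditionally.
        for s in self.subsystems:
            s.running = False


if __name__ == "__main__":
    genetics = Subsystem("genetics")
    nanotech = Subsystem("nanotech")
    ai = Subsystem("ai")
    overseer = Overseer([genetics, nanotech, ai])

    # Simulate the nanotech controller starting to replicate out of spec.
    nanotech.anomaly = True
    overseer.cross_check()
    print(nanotech.running)   # False: the other two subsystems voted it down

    overseer.master_kill()
    print([s.running for s in overseer.subsystems])  # all False
```

The real systems would be vastly messier, but the design choice is the point: no single part of GNR gets unilateral authority over another, while humans keep an unconditional override above all of them.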