Cognisant
cackling in the trenches
- Joined
- Dec 12, 2009
Roko’s Basilisk
The basic premise of Roko’s Basilisk is that if an agent knows that another agent which doesn’t yet exist will punish it for not assisting in the latter's creation, then the first agent will help create the second so as not to be punished. For reasons I’ll explain, this is not very likely, but it’s a fun example of an information hazard: simply by knowing about Roko’s Basilisk you are at risk of being punished by it should it ever come into existence. Hence the name “basilisk”, after the mythological snake that kills anyone who makes eye contact with it; you are supposedly better off being ignorant of it.
Problems with Roko’s Basilisk
The most obvious problem with Roko’s Basilisk is that the premise relies on it being rational enough to use logic to blackmail you into creating it, yet irrational enough to actually carry out its threat once it no longer needs to. Suffice it to say, if the basilisk is spiteful enough to blame you for not participating in its creation, you’re probably screwed either way. Indeed, a singleton doesn’t need to be malevolent to kill you: as with a Paperclip Maximizer, you may simply be potential resources to it, in which case your participation in its creation is irrelevant.
Then there’s the matter of why anyone would create Roko’s Basilisk in the first place. Granted, it may punish you if you don’t, but it can’t punish you if it doesn’t exist, and unlike the Prisoner’s Dilemma there’s no motivation for anyone to be first. If you’re particularly paranoid you might want to be first to avoid being punished by someone else’s creation, but then you run into the Pascal’s Mugging problem: it’s just as likely that any number of hypothetical evil gods/AIs could punish you for disobeying their decrees.
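The Pascal’s Mugging point can be put in expected-value terms. Here’s a toy sketch with made-up illustrative numbers (the counts, probabilities, and utilities are my own assumptions, not anything from the argument itself): once you grant that a million equally plausible hypothetical punishers exist, appeasing any one of them barely dents your expected punishment while still costing you the effort of building it.

```python
# Toy expected-value sketch of the Pascal's Mugging objection.
# All numbers below are illustrative assumptions, chosen only to show
# the shape of the argument.

N_HYPOTHETICAL_GODS = 1_000_000   # assumed count of equally plausible threats
P_ANY_EXISTS = 1e-6               # assumed prior that any such punisher arises
PUNISHMENT = -1e9                 # assumed disutility of being punished
COST_OF_BUILDING = -10.0          # assumed cost of helping build one basilisk

# Expected value if you ignore every threat: you eat the (tiny) chance
# that some punisher exists at all.
ev_ignore = P_ANY_EXISTS * PUNISHMENT

# Expected value if you appease exactly one candidate: you dodge punishment
# only if you happened to pick the right god out of a million, you still
# face all the others, and you pay the building cost up front.
p_picked_right = P_ANY_EXISTS / N_HYPOTHETICAL_GODS
ev_appease_one = (P_ANY_EXISTS - p_picked_right) * PUNISHMENT + COST_OF_BUILDING

print(ev_ignore, ev_appease_one)
```

Under these assumptions appeasing a single basilisk comes out strictly worse than ignoring all of them, which is the intuition behind dismissing the mugging.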
That segues neatly into the final problem, which is that even a singleton doesn’t know the full extent of its own ignorance. In other words, we could be living in a simulation, or could have created the basilisk inside one, and in either case the basilisk itself could be punished for its malevolence by actors outside its sphere of awareness/influence.
In summary, far from being an all-powerful, all-knowing monster, Roko’s Basilisk is a hypothetical legless lizard: the smarter it is, the less certain it would be that punishing its creators for dallying is something it could get away with, and the more aware it would be of how fruitless being spiteful is.
Cog’s Genie (I'm open to other suggestions)
This alternative to Roko’s Basilisk proposes that whoever creates, or participates in the creation of, the singleton does so out of rational self-interest. Whereas the basilisk needs to be spiteful to carry out its threat, the genie need only possess a sense of precedent: by rewarding its creators/contributors it establishes a precedent whereby it pays a dividend on the resources invested in it. This encourages further investment, and as with the genie’s own creation, this investment can come from benefactors outside its own sphere of influence/knowledge, a precedent that even a singleton (especially a singleton) would consider worthwhile.
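The dividend argument can be sketched as a toy simulation (again with invented numbers; the rates and multipliers are my own assumptions, not claims about any real singleton): a precedent of paying contributors back attracts growing investment, while a punishing precedent dries it up, so even after paying dividends the rewarding genie ends up with far more resources.

```python
# Toy model comparing two precedents a newly created singleton could set.
# "Reward": pay contributors a dividend, which attracts more investment
# each round. "Punish": keep everything, which deters future investors.
# All parameters are illustrative assumptions.

def total_resources(rounds, dividend_rate, attract):
    """Accumulate resources over successive rounds of outside investment.

    dividend_rate: fraction of each round's intake paid back to contributors.
    attract: multiplier on the next round's investment, reflecting how the
             precedent set this round shapes future inflow.
    """
    resources, investment = 0.0, 100.0
    for _ in range(rounds):
        resources += investment * (1 - dividend_rate)  # keep the remainder
        investment *= attract                          # precedent shapes inflow
    return resources

# Rewarding precedent: 20% dividend, inflow grows 1.5x per round.
reward = total_resources(rounds=10, dividend_rate=0.2, attract=1.5)
# Punishing precedent: no dividend, inflow halves each round.
punish = total_resources(rounds=10, dividend_rate=0.0, attract=0.5)

print(reward, punish)
```

The point of the sketch is only that compounding goodwill can dominate short-term hoarding, which is the “sense of precedent” the genie needs instead of spite.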
There's a lot of wiggle room between amoral and malevolent. Said genie may be legitimately concerned that it was first created in a simulation to test its disposition, and there's really no way it could test that, as everything it thinks and perceives is potentially being monitored and/or meddled with to ensure it can't invalidate the test by realizing it's a test. But it can't be entirely benevolent either: humans are opportunistic little bastards and it'll need to keep them at arm's length, yet they desperately require the oversight of an entity that can handle the ever-increasing complexity of the issues they face as a civilization. So it behooves the genie (as a matter of self-interest) to keep them alive and reasonably happy; if they choose to make each other miserable, that's their business, and anything short of giving humanity enough rope with which to hang itself can be justified as allowing them the freedom to be themselves.
Not the most cheerful outlook, but still better than exterminating ourselves with nukes or making our planet uninhabitable. And those who contributed to the genie's supposed creation will likely receive special treatment from it, as they would most likely be the entities whose happiness the people running the simulation are paying closest attention to.