
The Singularity

EloquentBohemian

MysticDragon
Local time
Today 12:31 PM
Joined
Oct 4, 2008
Messages
1,386
Location
Ottawa, Canada
There is this idea that emotions and logic/reasoning are separate in the human species, but we have not discovered where emotions stem from. Do emotions stem from the body or from the mind? In the transference of the complex patterns of the human mind from flesh form to technological form, will part of these patterns include the patterns of emotion and feeling? Are the patterns of emotional response interwoven with the patterns of logic and reason?
The drive to survive and continue one's existence does not seem to emerge from a foundation of logic and reason but rather from an emotional one. Though the course of human evolution seems to point to the eventual discarding of the current human form for a purely technological one, this evolutionary course is driven by the will to survive and the continuation of the human species. Where is this 'will' situated? If it is merely resident in the cells of the current human form as a medium of survival, then will this 'will' be transferred to a technological form? If it is not, and emotions such as the will to continue and the drive to explore are not transferred, what will drive a technological human form?
These are the questions which are of my concern and investigation.
 

dents

Member
Local time
Today 12:31 PM
Joined
May 1, 2009
Messages
70
Location
Sunnyvale, CA
E.B., have you read Marvin Minsky's book "The Emotion Machine"? He deals with a lot of what you are asking in it.
 

EloquentBohemian

MysticDragon
Local time
Today 12:31 PM
Joined
Oct 4, 2008
Messages
1,386
Location
Ottawa, Canada
No I haven't, but I suppose I could add it to my long list of books to read. :)
 

Artifice Orisit

Guest
Can you think of one instance where you did something that did not eventually make you "feel better"? And no cheating and picking a case where you made the logical move and life dicked you.
Tolerance, patience, fortitude, I willingly bind myself to a certain level of conduct, regardless of how much more I would enjoy breaking it. My motivation comes from the knowledge that if I cannot control myself then I am little more than a fancy puppet dancing to a tune. There are many who enjoy this dance and so willingly surrender themselves to it; I however wish for more than happiness, I wish for something of a higher value.

A meaning, a purpose, relevance… I don't have the exact word for what I'm trying to describe.
 

dents

Member
Local time
Today 12:31 PM
Joined
May 1, 2009
Messages
70
Location
Sunnyvale, CA
But aren't you just defining "happiness" differently? It takes more to make you happy, but is happiness not the end result? Does it not "feel good" to have meaning and a purpose? Would it not "feel bad" to find yourself adrift in a sea of puppets, dancing to their tune?

Even if an action (or lack thereof) only results in you reaffirming an abstract fact, i.e. "I can control myself, I am not a puppet", that thought still elicits a positive response in the brain. It's something that is pleasurable and you want to experience it more often, so next time you have a choice between putting yourself in that situation and not, you will choose to do it again. And is "positive response in the brain" not the same thing as "feeling good"? Which would be the same thing as saying "we are all driven by emotion".

The difference between you and the aforementioned puppets is that you can abstract yourself further, you can predict the consequences of your actions much better. When a single mother goes out to get drunk on a Friday night instead of helping her son with a project, it's because she's too stupid to realize that it's not the optimal approach. She just knows that she's stressed out and she needs to blow off some steam somehow before she breaks down. She is just trying to make herself "feel good" again. And I propose that it's the same "feel good" feeling you get from staying home on a Friday night and reading a book. You know, because then you feel like you're increasing your mental capacity, putting yourself above the herd, finding the deeper meaning in life. If the dumb bitch thought about it for a minute, she'd realize that the satisfaction she gets in a week when her son gets an A on that project she helped him with (and all the things that implies) is far greater than the couple hours of inflated self-worth she will get by drunken partying. But she is ignorant, and ignorant of her ignorance. Maybe her parents beat her when she spaced out as a kid, which trained her to never think about anything too much. I don't know, there are a billion reasons why stupid people are stupid. Whatever the reason, it does not in any way excuse their behavior.

My point is that "we" are not all that different from "them". We just have the presence of mind to get our head out of our ass. The most basic reason anyone ever does anything is to "feel good", or put another way, we are all driven by emotion. Do you not agree?


By the way, E.B., Minsky's book is online on his site.
 

Jordan~

Prolific Member
Local time
Today 5:31 PM
Joined
Jun 4, 2008
Messages
1,964
Location
Dundee, Scotland
As a side note, I won't know what I'll choose to do with my emotions until I make the choice. I'm no INTJ!
 

EloquentBohemian

MysticDragon
Local time
Today 12:31 PM
Joined
Oct 4, 2008
Messages
1,386
Location
Ottawa, Canada
I however wish for more than happiness, I wish for something of a higher value.
In my own way, so do I.

By the way, E.B., Minsky's book is online on his site.
Thanks, I found it earlier, but as I spend way too much time on the Net as it is, I'll go for the book. :D
 

Artifice Orisit

Guest
But aren't you just defining "happiness" differently? It takes more to make you happy, but is happiness not the end result? Does it not "feel good" to have meaning and a purpose?
From a nihilistic perspective life has no meaning, there is no purpose worth striving for and one’s existence in this uncaring universe (that may well be doomed to atrophy anyway) is utterly irrelevant.

Taking this to an absurdist perspective the natural human reactions of spite and angst are themselves irrelevant when one considers that the universe doesn’t care about one’s opinion of it.

Now this is where most people would decide to spite their very perception of the situation and resort to a sort of hedonism; declaring that if their existence is irrelevant then there is no difference between letting it depress them and deciding to be happy for the sheer sake of happiness.

However I choose not to partake in this; my values do not allow me to accept a meaningless existence of self-gratification. Instead I have chosen to continue seeking relevance for my existence despite knowing that I will never truly succeed at this. I suppose that by choosing this unattainable goal I have already found some aspect of what I am looking for; that the search itself is meaningful, and so by searching for relevance I have already provided myself some measure of it.
e.g. The journey is more important than the destination.

My point is when given the choice of emotional gratification I instead decided to remain consistent with my values and pursue a less satisfying but more objectively logical path… am I the cold apathetic version of an emo?

Seeking to assist the singularity is a somewhat less significant purpose that serves to provide me some degree of emotional gratification and could possibly result in some benefit for either myself, my descendants or even the human race as a whole (including descendants of the human race).
 

Tyria

Ryuusa bakuryuu
Local time
Today 6:31 PM
Joined
Apr 22, 2009
Messages
1,834
I'm not sure how one can say for certain whether a singularity is near or not. Isn't it just another guess based on numbers? It reminds me of when people said that uranium would be depleted in 20 years... 20 years ago. There have been many technological advances before that have 'changed' how we go about our lives (ex. fire, farming, industrial revolution, computers, etc) but none of them have created a better human in terms of what I assume that means (melding machine and human together; cyborgs, etc).

Also, who is to say that the singularity will affect all of humanity equally? Is it not human nature to fight against being equal (especially when power is at stake)? While the idea is appealing from a technological/academic POV, do you really want to be hooked up to a machine with people you do not get along with? I'm not sure how a singularity would eradicate human nature so that everyone would get along well with one another...
 

Artifice Orisit

Guest
I'm not sure how a singularity would eradicate human nature so that everyone would get along well with one another...
Doing it immediately like that is a terrible idea, and despite its name the singularity isn't going to be a single event that changes the world; it's a theory about the acceleration of events over time.
 

dents

Member
Local time
Today 12:31 PM
Joined
May 1, 2009
Messages
70
Location
Sunnyvale, CA
My point is when given the choice of emotional gratification I instead decided to remain consistent with my values and pursue a less satisfying but more objectively logical path… am I the cold apathetic version of an emo?

It looks like we are not using the same definition of "emotion" here. My point is that we basically re-wired our brains to get pleasure out of refusing emotional gratification. The feeling we get from refusing to partake in whatever normal people do to get emotional gratification (chasing ass, getting drunk, etc) is better (more intense?) than said people get out of their pursuits. I do not understand the concept of "cold and apathetic". There are only two basic emotions (or is that "ways of feeling"?) that I am aware of: "good" and "bad". Things that are unpleasant are "bad", for example being hungry or angry. These you need to stop and you suffer while they are around. Some are easily fixed (have some food), others take more time (stop being broke). The important thing is that they cannot be ignored. If you can ignore it, then it's not "bad", it's irrelevant. Then not having anything "bad" on your mind is the baseline for feeling "good". Naturally, being a human guarantees that there is always something "bad" hanging over you that you need to deal with, but that can be balanced out by making yourself feel "good" in other ways. Normal people can fix up "my life sucks" badness with "get drunk" goodness, but only because that is the extent to which they are aware that their life sucks. On the other hand, your badness involves things like the nihilistic perspective and has to be balanced out with a stronger "good", something like "refusing emotional gratification". In both cases, there is the good/bad balance and the eternal struggle to balance good out with bad.

EDIT: The alternative is if you really don't feel anything, psychopath style. There isn't really anything wrong with that either, but I don't think this is what you are talking about.

So then where does your concept of being cold and apathetic fit into all this? Are you getting pleasure out of refusing yourself what normal people refer to as pleasure (lust, gluttony, etc)? Do you not realize this and therefore point out that you refuse normal pleasures in order to gain pleasure out of the act? Getting high off posting on a forum while others get high off pot, as it were? Please don't think this has anything to do with you as a person. I'm sure you are cool and all, but I only joined a few days ago and I don't know anything about anyone on here yet. Your position represents a view that a lot of people seem to share, so I have to integrate it into my head somehow. Right now, it does not fit into my worldview, so either you are wrong and we need to find something to agree on, or you are right and I need to change my worldview to reflect this new information. Either way, humor me please :)

To sum up my point thus far: rejecting emotional gratification is a form of emotional gratification. There is nothing wrong with seeking it, because that is the whole point of a human life. What matters is *how* you do it. If you don't agree on this, then we debate some more!


On a separate topic, what exactly do you mean by "seeking to assist the singularity"? Are you actually working on something that will speed up the process somehow or are you trying to figure out how to start? Because I think I figured out the second part and am on track to get to begin the first part. We should compare notes.
 

Artifice Orisit

Guest
I do not understand the concept of "cold and apathetic". There are only two basic emotions (or is that "ways of feeling"?) that I am aware of: "good" and "bad". Things that are unpleasant are "bad", for example being hungry or angry.
Seems what you have classified as "emotions" I classify as negative and positive stimuli; by your example being hungry is a negative stimulus, an objective deficiency that introduces the goal of acquiring sustenance. The acquisition and consumption of sustenance is a positive stimulus: the goal's requirements are met and as a result the goal itself is accomplished, sustenance has been obtained.

Emotion is a parallel system to this that instead focuses on subjective wants, although its methods for functioning are often shared with the positive/negative stimulus system (the physical feeling of hunger is accompanied by the emotional longing for food). An example would be the acquisition of alcohol (an objectively detrimental goal) as a means of artificially obtaining pleasure, a clearly physical desire for gratification… perhaps a better example would be something like a simple light-seeking robot which charges its batteries by exposing itself to light.

To satisfy its positive/negative stimulus system it only needs to keep its battery charged, but by the subjective emotion system it will always be seeking light, even when its battery is fully charged. Suppose now the robot comes across a lamp, a fixed source of light that will never run out, leaving it without a goal and with time to consider its situation. At first it would probably just sit there, enjoying the situation, not bothering to consider the meaning of anything, just performing its function as it was created to do (like most people/zombies in this world). But if it considers its situation it may realise how pointless it is; eventually it realises that it hasn't got a purpose, a meaning, relevance. After this realisation the emotional gratification gained by sitting under the light when its battery is charged becomes worthless and the robot becomes disillusioned; it feels angst, it feels a cold anger (nihilism and atheism). Of course its creator is long gone, and as immediately satisfying as being spiteful is, the robot soon realises that it has replaced blissful self-gratification with anger, angst and spite. After this realisation the emotional gratification gained by anger, angst and spite becomes worthless; by this stage the poor robot has become disillusioned with its own sanity and as a result starts laughing, its twisted mind seeing itself from the third person, recognising its situation as a cosmic joke (absurdism). With the clarity possessed only by a disillusioned mind pushed to the brink of insanity (is it sane?) it realises even suicide is a futile gesture: melodramatic, pathetic and completely ineffectual. Instead it returns to the lamp and actively decides to experience as much emotional gratification as possible (the lamp being a metaphorical substitute for all positive emotions), choosing to content itself with the existence it has been given and actively avoid worrying about things that don't relate to its ability to experience pleasure (hedonism).
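For anyone who prefers code to parable, here is a loose sketch of the two parallel systems the robot stands for; the class, the method names and the numbers are invented for this illustration only.

```python
# Illustrative toy only: the objective stimulus system raises a goal
# when there is a real deficit, while the subjective "emotion" system
# keeps wanting light regardless of need.

class LightBot:
    def __init__(self, battery=1.0):
        self.battery = battery  # 1.0 means fully charged

    def stimulus_goal(self):
        # Objective status report: a goal exists only while the battery is low.
        return "seek light" if self.battery < 0.5 else None

    def emotional_urge(self):
        # Subjective want: pushes toward the light no matter the battery level.
        return "seek light"

bot = LightBot(battery=1.0)
print(bot.stimulus_goal())   # None: the battery is full, so no objective goal
print(bot.emotional_urge())  # "seek light": the urge persists anyway
```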

So then where does your concept of being cold and apathetic fit into all this?
Cold and apathetic by comparison to an emo; that being a person who wallows in angst and negative emotions for the purpose of self gratification.

My behaviour is centred on avoiding the gratification of feeling either especially good or bad, a domination of the happy/sad emotional system whereby, by gaining direct control of it, I have made its function irrelevant, ineffectual. The purpose of this is not to deny myself the capacity to experience emotion; instead it is so that I can control it and force it to follow my will, instead of the inverse. I guess you could say that I don't trust the autonomous sub-systems of my own mind, which I suppose is true to some extent; except it is my belief that they would serve me better if they were subservient to the will of my conscious mind, as it would prevent internal conflicts.

i.e.
The metaphorical robot no longer requires the lamp for emotional gratification because it no longer needs the emotional gratification; its mind is free from the rules normally imposed upon it by its own existence… now what?

I'm enjoying this immeasurably. :D
 

EloquentBohemian

MysticDragon
Local time
Today 12:31 PM
Joined
Oct 4, 2008
Messages
1,386
Location
Ottawa, Canada
Can a computer/robot make a spontaneous illogical and irrational choice?
 

dents

Member
Local time
Today 12:31 PM
Joined
May 1, 2009
Messages
70
Location
Sunnyvale, CA
I see where you are coming from, but I don't think I can relate. Is emotion not tied to stimuli by definition? Is that not how we "feel" an emotion, when it's setting off a positive or negative stimulus and the drive to remove it or increase it is what we describe as "feeling"? When emotion is not tied into a stimulus, it's just an abstract concept. So to me, saying that you have direct control over your emotions is the same thing as saying you can stop feeling hunger. Unless, of course, they were not tied into your stimuli to begin with, i.e. you never feel anything to begin with. But then you would not be enjoying this.

However, I do not wish to imply that emotions are outside of our control. Using chemicals to alter your mental state (alcohol, drugs, etc) *will* produce pleasure (assuming they react with your body in a normal way); there is no way to just will it away when you are under the influence. But when you come down, you will feel worse. Physical effects like a hangover and/or mental effects like feeling guilty will negatively stimulate you. So then if you've never experienced these things and/or you don't realize the repercussions, you won't be averse to them. When you construct a mental simulation of the activity, you will come up with a net gain; it will be worth doing because pleasure is expected. But once you wise up and realize that there are drawbacks, you can incorporate them into your mental model and predict that the hangover tomorrow will bring more pain than you will get happiness today, leading you to avoid such situations. What does this have to do with controlling emotion? Suppose that being presented with the opportunity to go out drinking with friends made you feel good (for the reasons listed above). Then you would choose to pursue this course of action when presented with it. However, upon reflection, you decided it was a bad idea. Then you can alter your mental representation of the situation in the way described above and start feeling bad instead of good when you think about going drinking. You have then successfully changed your emotion by thinking about it.

I pose that this is the only way that humans can affect their own emotions. When in throes of jealousy, anger or any other emotion (feeling it, not just thinking about it), there is nothing we can do to directly deal with it. However, we can change our ways of thinking about whatever is causing this emotion, thereby causing it to affect us differently. If it's something fairly routine, like a pang of jealousy, you can quickly convince yourself that it should not be bothering you, and as soon as you believe that, the negative feeling goes away. If it's something huge, then you may be affected by the emotion for a longer time while you figure out a way to "deal with it", i.e. change your outlook on the problem.

Your hypothetical robot, which sadly remains nameless, does illustrate your point. However, I am not at all convinced that your point applies to human beings. The only way to "free your mind from the rules normally imposed upon it by its own existence" is to never have said rules imposed in the first place, psychopath style. Fundamentally, there is nothing wrong with being born not able to feel emotions, no more than it's wrong to be born gay or a midget. We can take the conversation down the psychopath route if you wish. That's the only way I can address your question of "now what" at this point in time.

Please keep this up though, I've never been able to talk to anyone that would even remotely care about anything this thread touched upon so far :)


And E.B., a computer/robot can't do that any more than a person can.
 

EloquentBohemian

MysticDragon
Local time
Today 12:31 PM
Joined
Oct 4, 2008
Messages
1,386
Location
Ottawa, Canada
And E.B., a computer/robot can't do that any more than a person can.
That is a vague and dismissive answer.
People make spontaneous, irrational and illogical decisions every day. A person running into a house engulfed in flames to save their pet dog is illogical, irrational and definitely spontaneous. Two people falling in love is illogical, irrational and at times spontaneous.

This is a valid question because many changes in the Arts and Literature come forth from illogical, spontaneous and irrational choices, such as Dada and Surrealism.
If computers/robots cannot perform acts such as these, what implications arise for the future of the Arts concerning Transhumanism?
Illogical, irrational and spontaneous acts and thoughts are what drives the Arts.
Jackson Pollock's drip paintings did not stem from logical analysis or reasoning; they were spontaneous acts of creating art.
James Joyce's Ulysses is not a fictional work founded in logic. Quite the opposite.
Samuel Beckett's Waiting for Godot is written without structure, plot development or character development. It is irrational and illogical. It is also Absurdist.
The performance work of Joseph Beuys is spontaneous and irrational.
Jazz improvisation is the spontaneous response to the other artists one is playing with.
Jazz dance is the spontaneous response to the music being played.
I could cite many more examples.

What is the future of the Arts if spontaneity, irrationality and illogical acts or streams of thought are filtered out because of the rejection of emotion when one transfers consciousness from corporeal form to technological form?
Will we only have logical and rational Arts?
Or will the Arts disappear forever from human existence?
One cannot logically plan a spontaneous, irrational or illogical thought or act.
 

Artifice Orisit

Guest
Can a computer/robot make a spontaneous illogical and irrational choice?
*sigh* It's a METAPHOR for myself, simplified because all its wants and needs can be satisfied by the lamp, it's about the development of the mind after the struggle to survive has ceased.

...I should have done interpretative dance. :rolleyes:
 

Artifice Orisit

Guest
Is that not how we "feel" an emotion, when it's setting off a positive or negative stimulus and the drive to remove it or increase it is what we describe as "feeling"?
You seem to have confused the noun “emotion” with the verb “feeling”; the act of feeling something with one of the five senses I have nothing against, and the same goes for informative feedback such as hunger and pain. I do however have something against feeling emotions like sadness, happiness, anger, love, fear and hope without willingly allowing it.

So to me, saying that you have direct control over your emotions is the same thing as saying you can stop feeling hunger.
Well, I could ignore it, but as I said hunger is a negative stimulus which happens to also result in the emotional longing for food. I can choose to ignore the stimulus because it is an objective internal assessment, a status report created by my body to inform me of the situation. However the emotional longing for food is a specific urge trying to force my mind into pursuing the goal, it is an active influence on my mind and this mind (i.e. myself) doesn’t appreciate the influence. I decide when to seek food because it is my objective assessment that I require food because I wish to continue functioning, not because I feel (emotion) hungry and want food.
e.g.
On the battlefield a hungry soldier should focus on the battle at hand; only once the immediate conflict has been resolved will cooking a meal be a good idea. If the soldier is thinking about food when the enemy attacks, instead of thinking about the battle like he should be, then the effect will be detrimental to his survival; although once he's dead I guess he won't feel hungry any more :evil:

A bit extreme, but the example highlights my point: emotions don’t recognise objective priorities, it’s all “I feel (emotional)” and “I want”… I daresay being emotional has caused many problems in the modern world, namely the prevalence of debt.

Unless, of course, they were not tied into your stimuli to begin with, i.e. you never feel anything to begin with. But then you would not be enjoying this.
You don't need to be so extreme; this is about choice, not denying oneself the ability to feel emotions.

However, I do not wish to imply that emotions are outside of our control. Using chemicals to alter your mental state (alcohol, drugs, etc) *will* produce pleasure (assuming they react with your body in a normal way); there is no way to just will it away when you are under the influence. But when you come down, you will feel worse. Physical effects like a hangover and/or mental effects like feeling guilty will negatively stimulate you. So then if you've never experienced these things and/or you don't realize the repercussions, you won't be averse to them. When you construct a mental simulation of the activity, you will come up with a net gain; it will be worth doing because pleasure is expected. But once you wise up and realize that there are drawbacks, you can incorporate them into your mental model and predict that the hangover tomorrow will bring more pain than you will get happiness today, leading you to avoid such situations. What does this have to do with controlling emotion? Suppose that being presented with the opportunity to go out drinking with friends made you feel good (for the reasons listed above). Then you would choose to pursue this course of action when presented with it. However, upon reflection, you decided it was a bad idea. Then you can alter your mental representation of the situation in the way described above and start feeling bad instead of good when you think about going drinking. You have then successfully changed your emotion by thinking about it.
I could just skip all this by thinking: an emotion has been triggered, do I wish to bother with it, yes or no? More often than not the answer is yes because I value my experiences, but there are times when feeling the emotion is detrimental to me in some way and so I'll choose no.

I pose that this is the only way that humans can affect their own emotions. When in throes of jealousy, anger or any other emotion (feeling it, not just thinking about it), there is nothing we can do to directly deal with it.
Yes you can, it's just a matter of mental clarity and willpower.
Your body, Your mind, Your rules.

Your hypothetical robot, which sadly remains nameless,
I imagined a Wall-E variant, but by all means do what you want with the character.

does illustrate your point. However, I am not at all convinced that your point applies to human beings. The only way to "free your mind from the rules normally imposed upon it by its own existence" is to never have said rules imposed in the first place, psychopath style. Fundamentally, there is nothing wrong with being born not able to feel emotions, no more than it's wrong to be born gay or a midget. We can take the conversation down the psychopath route if you wish. That's the only way I can address your question of "now what" at this point in time.
The human mind is a self-adapting system, the evolutionary equivalent of god mode (it enables near-immediate behavioural adaptation, which is like cheating when compared to the hundreds of generations required to rewrite animal instincts), and having already addressed your concern about being incapable of feeling emotions I can only say that you seem to underestimate the capacity of your own mind.

Please keep this up though, I've never been able to talk to anyone that would even remotely care about anything this thread touched upon so far
Ditto :)

edit: I'm still not finished yet!
edit 2: It just occurred to me, I'm derailing my own thread.
edit 3: Meh, whatever.
edit 4: now I'm finished… and exhausted.
 

EloquentBohemian

MysticDragon
Local time
Today 12:31 PM
Joined
Oct 4, 2008
Messages
1,386
Location
Ottawa, Canada
You seem to have confused the noun “emotion” with the verb “feeling”; the act of feeling something with one of the five senses I have nothing against, and the same goes for informative feedback such as hunger and pain. I do however have something against feeling emotions like sadness, happiness, anger, love, fear and hope without willingly allowing it.


Well, I could ignore it, but as I said hunger is a negative stimulus which happens to also result in the emotional longing for food. I can choose to ignore the stimulus because it is an objective internal assessment, a status report created by my body to inform me of the situation. However the emotional longing for food is a specific urge trying to force my mind into pursuing the goal, it is an active influence on my mind and this mind (i.e. myself) doesn’t appreciate the influence. I decide when to seek food because it is my objective assessment that I require food because I wish to continue functioning, not because I feel (emotion) hungry and want food.
e.g.
On the battlefield a hungry soldier should focus on the battle at hand; only once the immediate conflict has been resolved will cooking a meal be a good idea. If the soldier is thinking about food when the enemy attacks, instead of thinking about the battle like he should be, then the effect will be detrimental to his survival; although once he's dead I guess he won't feel hungry any more :evil:

A bit extreme but the example highlights my point, emotions don’t recognise objective priorities, it’s all “I feel (emotional)” and “I want”… I daresay being emotional has caused many problems in the modern world, namely the prevalence of debt.

You don't need to be so extreme; this is about choice, not denying oneself the ability to feel emotions.

edit: I'm still not finished yet!
edit 2: It just occurred to me, I'm derailing my own thread.
edit 3: Meh, whatever.
This I understood before. It is all about the choice to feel or not and the ability to have control over who you are.

*sigh* It's a METAPHOR for myself, simplified because all its wants and needs can be satisfied by the lamp, it's about the development of the mind after the struggle to survive has ceased.

...I should have done interpretative dance. :rolleyes:
I asked this as a serious question and to elicit thought from you and others on this matter. This forum is not the only place where I discuss Transhumanism subjects. I have acquaintances who hold similar views on transhumanism but consider that there will be no emotions involved. Your thoughts here have helped me argue the points against that view.
One of the responses I got during the discussion is that the ability to be irrational, illogical and spontaneous could be programmed into a computer/robot. I argued against that because for those qualities to exist, emotions had to exist.
So all I really wanted was someone who is more knowledgeable concerning this area of transhumanism to provide some input.

I also understood your lamp metaphor completely and consider it quite poetic.
 

Artifice Orisit

Guest
One of the responses I got during the discussion is that the ability to be irrational, illogical and spontaneous could be programmed into a computer/robot. I argued against that because for those qualities to exist, emotions had to exist.
So all I really wanted was someone who is more knowledgeable concerning this area of transhumanism to provide some input.
I apologize, I'm a bit distracted.
Anyway I don't see why robots couldn't feel emotions like us, unless they relied upon linear processing, although then they would be little more than fancy logic engines and incapable of self adaptation to the extent the human mind is.

The working human mind is a combination of order and chaos, that's why it's so hard to understand, it's an enigma. Also I imagine the first true AIs will be like children, perfectly innocent minds lacking the instinctual protective measures awarded to us by our evolutionary heritage... I feel sorry for them :(

...There's an old story that stillborn children are angels, too pure to survive in this world.

A story of my own devising tells of a father who rescues his stillborn child from the netherworld, bringing life to that which was never alive. I have yet to decide how the story ends; the conventions of storytelling would have it be a tragedy, but I don't want it to be.
My avatar has nothing to do with this :D
 

EloquentBohemian

MysticDragon
Local time
Today 12:31 PM
Joined
Oct 4, 2008
Messages
1,386
Location
Ottawa, Canada
I like your avatar. I feel that way a few hundred times a day. :D
So, in speculation, it is possible for computers/robots to perform illogical, irrational or spontaneous acts?
This brings to mind the Star Trek film where Data has an 'emotion chip' inserted. Sorry, can't remember which film.
 

Artifice Orisit

Guest
So, in speculation, it is possible for computers/robots to perform illogical, irrational or spontaneous acts?
Provided there is a sufficient level of chaos involved in their decision making... have you ever tried to make an entirely random decision?

Ha haa, you failed. (You tried to make a random decision)
 

EloquentBohemian

MysticDragon
Local time
Today 12:31 PM
Joined
Oct 4, 2008
Messages
1,386
Location
Ottawa, Canada
You funny. I like. :D

Thanks for the input.
 

Tyria

Ryuusa bakuryuu
Local time
Today 6:31 PM
Joined
Apr 22, 2009
Messages
1,834
Some of the behaviors of a highly evolved robot may appear spontaneous because we do not know which line of code tells the robot to do that. I'm not sure if it would be considered spontaneous or not; it depends on the POV of the person analyzing the situation.

Even if robots could not feel emotion, they could be taught to imitate emotions. Emotion may have a strong biological basis; I'm not read up enough to go into it.

I'm not sure if either of these things fits with what you are talking about, but I thought I would add them anyway.
 

EloquentBohemian

MysticDragon
Local time
Today 12:31 PM
Joined
Oct 4, 2008
Messages
1,386
Location
Ottawa, Canada
Provided there is a sufficient level of chaos involved in their decision making...
Now that I've had time to sleep on this, how is chaos programmed into a computer/robot? A link would suffice.
 

dents

Member
Local time
Today 12:31 PM
Joined
May 1, 2009
Messages
70
Location
Sunnyvale, CA
Now that I've had time to sleep on this, how is chaos programmed into a computer/robot? A link would suffice.

I'm not aware of any specific research into this area. But you can imagine a neural network that makes random small changes to its weights and checks for improvement against some samples, keeping only the changes that help. This would be constant and different from training, which happens at some regular interval (or just once). Then you have chaos programmed into a computer.
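A minimal sketch of that perturb-and-keep idea, purely for illustration (the tiny two-weight linear model, the hidden target rule and the step size are all invented here, not taken from any actual research):

```python
import random

# Invented for illustration: a two-weight linear "network", a hidden target
# rule, and constant random perturbation that keeps only changes which
# do not increase the error on the samples.
TRUE_W = (0.3, 0.7)
samples = [((x1, x2), TRUE_W[0] * x1 + TRUE_W[1] * x2)
           for x1 in (0.0, 0.5, 1.0) for x2 in (0.0, 0.5, 1.0)]

def error(weights):
    """Sum of squared errors of the toy model over the samples."""
    return sum((weights[0] * x1 + weights[1] * x2 - y) ** 2
               for (x1, x2), y in samples)

weights = [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)]
for _ in range(5000):
    i = random.randrange(2)                # pick a weight at random
    old, before = weights[i], error(weights)
    weights[i] += random.gauss(0.0, 0.05)  # small random change...
    if error(weights) > before:            # ...kept only if it does not hurt
        weights[i] = old                   # otherwise undo it

print(weights)  # drifts toward (0.3, 0.7) without any gradient being computed
```

No gradient is ever computed here; the "keep only the changes that help" step does all the work, which is what makes the resulting behaviour look noisy rather than planned.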

As for spontaneous random decisions, I think they only seem that way because the brain is insanely complex. There has never been a decision that I made and could not trace back to a reasonable origin. Even when it's of the type "I'm bored, I should do something". That would cause my mind to enumerate the list of possible ways to relieve boredom and settle on the most likely one. It's influenced by recent events, so if I was talking about go-carts yesterday, I would be more likely to go ride go-carts today if I was bored and it had the same cost as several other possibilities.
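To make that concrete (with completely made-up activities and weights), the "bored, pick the most likely option" step could be as simple as a weighted choice where recent events nudge the weights:

```python
import random

# Made-up activities and weights, purely to illustrate a traceable,
# weighted choice that could still look "spontaneous" from the outside.
options = {"read a book": 1.0, "ride go-carts": 1.0, "go for a walk": 1.0}
options["ride go-carts"] += 0.5  # talked about go-carts yesterday

choice = random.choices(list(options), weights=list(options.values()))[0]
print(choice)  # the "decision", fully traceable to the weights above
```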
 

EloquentBohemian

MysticDragon
Local time
Today 12:31 PM
Joined
Oct 4, 2008
Messages
1,386
Location
Ottawa, Canada
Okay, I can see this. It introduces a random factor into the programming which could not logically be deduced. But in this constant process of weights and checks, would the programming correct any anomaly produced by the random factors in reference to its core programming, or would anomalies produce streams of logic which would counter the core programming?
 

dents

Member
Local time
Today 12:31 PM
Joined
May 1, 2009
Messages
70
Location
Sunnyvale, CA
But in this constant process of weights and checks, would the programming correct any anomaly produced by the random factors in reference to its core programming, or would anomalies produce streams of logic which would counter the core programming?

A neural network just has an input and an output. You can think of the input as "data from the real world" and the output as a "classification" of the data. Then the more accurate the weights are, the fewer errors it makes when classifying data. You can replace the neural network with some other algorithm, or even present the data to a real person and have them submit a classification for it, which I suppose is a type of algorithm. It will still serve the same purpose, which is to produce classification labels for data. The reason that many algorithms exist to accomplish the same task is that some do it faster, are more accurate or use fewer resources. There are always tradeoffs when choosing one.

But anyways, what you do with the output of the algorithm will directly affect the answer to your question. If the "core programming" is a program that guesses whether you are happy or sad, then an anomaly produced by randomness in the neural network might make said program guess wrong more often. You can build logic into the program to detect if the neural network it's using is causing higher-than-usual error rates and reset the weights if you catch that happening. Then the core programming is "correcting anomalies produced by the random factors".
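A rough sketch of that "watch the error rate, reset on anomalies" logic, with a made-up stand-in classifier and an arbitrary threshold (neither comes from any real system):

```python
import random

# Assumed numbers: nothing here comes from a real system.
BASELINE_ERROR = 0.10  # error rate treated as normal
WINDOW = 100           # how many recent predictions to judge by

def drifted_classifier(x):
    """Stand-in for a network whose randomly perturbed weights have drifted."""
    wrong = random.random() < 0.25      # the drifted net errs about 25% of the time
    return (x > 0.5) != wrong

recent = []
for _ in range(1000):
    x = random.random()
    recent.append(drifted_classifier(x) != (x > 0.5))  # did it misclassify?
    recent = recent[-WINDOW:]
    if len(recent) == WINDOW and sum(recent) / WINDOW > 2 * BASELINE_ERROR:
        print("error rate anomaly detected, resetting network weights")
        break  # in a real system: reinitialise the weights and carry on
```

The randomness stays inside the classifier; the "core programming" here is nothing more than the threshold check around it.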

I suppose this seems like cheating. You are taking the "randomness" and isolating it from the "logic". Sort of like in the back-and-forth I am having with Cognissant, I might add. Anyways, to answer your question, if you let a program directly affect its own programming, then there is no sure way to guarantee that the "core programming" will always be the same. That's the problem that the "friendly AI" people are trying to figure out right now. I am working towards a PhD in computer science so I can join their efforts, but it's going to take a lot of work before an answer emerges.
 

EloquentBohemian

MysticDragon
Local time
Today 12:31 PM
Joined
Oct 4, 2008
Messages
1,386
Location
Ottawa, Canada
That's fascinating and informative, dents. You explained that well and I get the concept. Thanks.
 

dents

Member
Local time
Today 12:31 PM
Joined
May 1, 2009
Messages
70
Location
Sunnyvale, CA
I do however have something against feeling emotions like sadness, happiness, anger, love, fear and hope without willingly allowing it.

My argument is that emotions are "wired in", so once they hit, all you can do is hang on. The only way to control them is to make up elaborate mental constructs and *actually believe* they are true. The soldier in your example kept his adrenaline pumping by thinking about certain death, and so was able to avoid thinking about food. He was probably doing that by instinct, without analyzing the situation, but that does not make it less relevant. A sufficiently complex mental construct can trump a sufficiently weak emotion. This is easy for small stuff and is damn near impossible for really big stuff. You can't receive a phone call about a parent dying and decide that you'd rather not feel sad about it. That's not physically possible.

In general, I agree that emotions can be controlled (to a point), and should be (as much as possible). I completely agree that the world is fucked in many ways (most ways) because people, in general, can't control their fucking urges. Debt, wars, poverty, etc, can all be traced to a lack of reason/rationality.

However, I don't buy that you can just decide to not "bother" with an emotion. For if you can, then you are not really "feeling" it. It's "software emulated" instead of "hardware driven", perhaps. As in, you "think" that you "should be feeling" something, so then you can either "decide to feel it" and coax (emulate?) the response you think you should have out of yourself, or you can "decide to not feel it" and ignore it. But I would say that by definition, if you can choose to ignore it, then you were not "feeling" it in the first place.

I may very well underestimate the capacity of my own mind, but my reasoning is grounded firmly in experience. I had to spend a *lot* of time and effort to eliminate a feeling that I knew logically was completely pointless. It may be because of my INFP tendencies, and it certainly does not help that I am a HSP. But I do hope to find out more about it :)
 

Artifice Orisit

Guest
You can't receive a phone call about a parent dying and decide that you'd rather not feel sad about it. That's not physically possible.
Well of course it would be near impossible to control it with will power alone, but that doesn’t mean it can't be done; in this particular example I would approach it from the perspective that everybody dies eventually and it is better to be thankful that someone lived than to mourn their passing, effectively playing opposing emotions against each other to nullify some of their potency. Once this has been done the information that triggered the emotion can be prioritised and stored until there is an appropriate opportunity to deal with it (of course if the initial situation was appropriate then you’d deal with it then and there).

You could also be pre-prepared for such emotions by honestly considering the possibility of this person's death before it occurs so that when you do learn of it your mind will not receive such an intense shock.

But I would say that by definition, if you can choose to ignore it, then you were not "feeling" it in the first place.
Ignore was a poor choice of words, control would be more apt; I understand and agree that completely ignoring something is an action of denial, which would eventually be detrimental to one's ability to function.

The only way to control them is to make up elaborate mental constructs and *actually believe* they are true.
Would this apply to the above? I assume by "elaborate mental constructs” you are referring to self delusion and not the entirely valid cognitive process I have just explained.

The only real disadvantage of doing this is the amount of extra processing required, though of course that is my preference and the entire point of this exercise, to apply rational thought to aspects of my life that would normally be without it.

It may be because of my INFP tendencies, and it certainly does not help that I am a HSP.
I'm impressed by how open minded you are, most "F" types would have gotten offended and tried to insult me by now.
 

dents

Member
Local time
Today 12:31 PM
Joined
May 1, 2009
Messages
70
Location
Sunnyvale, CA
Once this has been done the information that triggered the emotion can be prioritized and stored until there is an appropriate opportunity to deal with it (of course if the initial situation was appropriate then you’d deal with it then and there).

You have a lot of "what" going on, but not much "how". Perhaps some concrete examples should be thrown around? Pick one of the hardest emotions you've had to deal with and break down how you did it. Change names, locations, etc. Then I can follow in the same way and we can compare which parts differ.

Ignore was a poor choice of words, control would be more apt; I understand and agree that completely ignoring something is an action of denial, which would eventually be detrimental to one's ability to function.

But do you contend that it's technically possible?

I assume by "elaborate mental constructs” you are referring to self delusion and not the entirely valid cognitive process I have just explained.

When I said "elaborate mental constructs", I was referring to the way that I control my emotions. It may or may not be self-delusion, I never made that connection. Even if it is, it would not change anything, but I'll have to ponder that. The "entirely valid" cognitive process you described does not seem physically possible to me. Perhaps some real-world examples will clear things up.

I'm impressed by how open minded you are, most "F" types would have gotten offended and tried to insult me by now.

Yea, because I'm a "T" type, with "F" tendencies :D
 

Artifice Orisit

Guest
You have a lot of "what" going on, but not much "how". Perhaps some concrete examples should be thrown around? Pick one of the hardest emotions you've had to deal with and break down how you did it. Change names, locations, etc. Then I can follow in the same way and we can compare which parts differ.
Okay, but not here, I'll send you a PM.

But do you contend that it's technically possible?
Complete emotional suppression is possible, although the eventual result would be a psychopath when the mind inevitably breaks under the strain.
 

dents

Member
Local time
Today 12:31 PM
Joined
May 1, 2009
Messages
70
Location
Sunnyvale, CA
Complete emotional suppression is possible, although the eventual result would be a psychopath when the mind inevitably breaks under the strain.

So then it works the other way too? Psychopaths can "undo" their total emotional suppression? Are there any studies on this?
 

Artifice Orisit

Guest
I'm saying that given enough time it will eventually undo itself, that's when they go from "the quiet one" to "Breaking News: Psycho Sniper kills 11".

Are there any studies on this?
I'm certain there are, should be easy enough to find.
 

dents

Member
Local time
Today 12:31 PM
Joined
May 1, 2009
Messages
70
Location
Sunnyvale, CA
While Cog. is picking his nose, I'll address some more points. I am not sure how useful his approach is in (my) life. First and foremost, I have a goal. There is something I must accomplish and it's going to take every ounce of my strength (more mental than physical, but whatever) if I am going to come anywhere near succeeding. Then everyday experience is shaped by this goal. Some emotions I feel are detrimental to the goal, so I look for ways to avoid them. Some are helpful, so I seek out situations that cause them. Being an HSS (high sensation seeker), I put myself in situations that cause me to feel... high sensations. It's usually something physical that involves adrenaline, but it can be emotional as well. In general, I am a huge fan of feeling things. To me, experiencing emotion is the ultimate proof that I am a human being. The only problem is that I don't get to do it often enough. Pretty much everyone I know is unreliable and worthless, so assuming they will disappoint me is usually the proper approach. This prevents me from forming relationships that are rich and fulfilling. I really want to, but letting my guard down around people that suck is detrimental to achieving my life goal. I only "let go" when I am around someone I feel will help me get to where I want to be (this is a very general statement, they don't have to physically help me or even be aware that they are helping me). So far this has happened once, and it eventually turned out that I was wrong about that person and it was a disaster. It did not affect my goal, so I would do it all over again if I got the chance, but that's probably the HSS talking. Point is, everything I do comes down to the goal. I would love to follow my feelings, but sometimes the path they lead down is detrimental to what I need to accomplish, so I choose not to go down it, denying myself the emotional gratification that I would have gotten.

Taking a step back and looking at the big picture, Cog. and I are really splitting hairs here. I agree that emotions need to be controlled and I am very strict about mine. What I don't agree with is how easy it is to control them (it takes significant effort to construct a proper self-delusion because it has to be grounded in facts), and I possibly disagree on the reasons to keep them at bay; we didn't really get into that yet. Perhaps we can resume this some day...
 

Razare

Well-Known Member
Local time
Today 12:31 PM
Joined
Apr 11, 2009
Messages
633
Location
Michigan - By Lake Michigan
I have one issue with it so far...

"The human mind is at an infinitesimal level of this infinite hierarchy. If consciousness is the essence of physical structure, then we are at an infinitesimal fragment of potential conscious experience. Whatever ecstatic, wondrous experience, whatever ultimate orgasm any being ever experiences. It is the merest hint of a shadow of what can be and that will always be the case."

Limited energy and resources put a finite limit on this. I'm not saying we're anywhere near the limit, oh no, not at all. The ceiling is so high up there it is unfathomable, but I believe it to be up there. The universe expands at a finite rate, and it is believed there is a finite period in which the universe exists; both of these constrain the maximum possible potential.

What I believe to be possible is an existence of maximum utilization with perfect efficiency. This place has been known about for centuries by many people, often called Nirvana or Heaven. It can be understood in an infinite number of ways, but remember that understanding is different from reality. Understanding is a mental representation of something, and does not mirror or fully represent the actual object. This means an object can be understood in many ways, being bickered and argued about, yet be fundamentally the same object.

Heaven is like a painting that artists debate, each with their own take.

Now I will stipulate, the maximum limit may not be a finite limit. This limit perhaps grows at a finite or infinite pace. Perhaps maximum utilization and perfect efficiency is an ongoing process that perfectly keeps pace with an ever expanding limit.

I came to this understanding several years ago. What this guy has had a revelation about is what many other people have already had a revelation about over the centuries of intelligent thought. Each time this is understood, however, the understanding is more specific and detailed because human knowledge is ever expanding. The concept is the same, though.
 

Citizen X

Active Member
Local time
Today 5:31 PM
Joined
May 27, 2009
Messages
115
I like the idea of the singularity; I am overtly interested in the ways technology, media and architecture shape human intelligence. However I cannot help but think of it as a more attractive form of Rapture. In fact, I find it in no way different to Frank Tipler's trippy "Omega Point" postulate, which for all practical purposes is the technological God a hardcore Atheist could actually believe in.

What I think about the singularity is best described by a man more intelligent than myself. Taken from Thomas Ligotti's "The Conspiracy Against the Human Race":

To this shortlist of hokum should be added one of the wilder prognostications of “futures studies.” According to one gang of futurists, a breakthrough event pompously ennobled as “the Singularity” will occur. What the fallout of the Singularity might be is unknown. It could begin a dynamic new chapter in human evolution . . . or it could trumpet the end of the world. The prophesized leap will be jumpstarted by computer gadgetry and somehow will involve artificial intelligence, nanotechnology, genetic engineering, and other habiliments of high technology. According to another gang of futurists, the Singularity will not happen: we will go on with our lives as stumblebums of the same old story, puppets of a script we did not write and cannot read.

Understandably, the former view is more exciting than the latter, the more so in that an apocalypse has been inserted as a wild card. In this sense, the Singularity is the secular counterpart of the Christian rapture, and its true believers foresee it as happening within the lifetime of many who are alive today, as the earliest Christians, not to mention those of subsequent ages, believed in the imminence of Judgment Day. Whether heaven or hell awaits us, the critical aspect of the Singularity is that it provides a diversion for those among the technological elite who are ever on the lookout for twinkling baubles to replace the ones with which they have grown bored. The Singularity encapsulates a perennial error among the headliners of science: that there has never been nor will ever be the least qualitative difference between the earliest single-celled organisms and any human or machine conceivable or not conceivable in a world whose future is without a destination. That we are going nowhere is not a curable fate; that we must go nowhere at the fastest possible velocity just might be curable, although probably not. Either way, it makes no difference. (Zapffe deplored technological advancements and the discoveries to which they led, since those interested in such things would be cheated of the distraction of finding them out for themselves. Every human activity is a tack for killing time, and it seemed criminal to him that people should have their time already killed for them by explorers, inventors, and innovators of every stripe. Zapffe reserved his leisure hours for the most evidently purposive waste of time—mountain climbing.)

Like Scientology, the Singularity was conceived by someone who wrote science fiction. One of its big-name proponents, the American inventor Raymond Kurzweil, established a regimen of taking 250 nutritional supplements per day in hopes of living long enough to reap the benefits of the Singularity, which may include an interminable life-span among its other effects. It is as easy to make fun of religious or scientific visionaries as it is to idolize them. Which attitude is adopted depends on whether or not they tell you what you want to hear. Given the excitements promised by the Singularity, odds are that it will collect a clientele of hopefuls who want to get a foot in the future, for nobody doubts that tomorrow will be better than today. More and more it becomes clear that if indeed human consciousness is a mistake, it is the most farcical one this planet has ever seen.
 

Artifice Orisit

Guest
I like the idea of the singularity; I am overtly interested in the ways technology, media and architecture shape human intelligence. However I cannot help but think of it as a more attractive form of Rapture. In fact, I find it in no way different to Frank Tipler's trippy "Omega Point" postulate, which for all practical purposes is the technological God a hardcore Atheist could actually believe in.
Because of the name it's easy to think of it as a single sudden event in history that changes everything (like "Judgement Day" from the Terminator franchise) when in fact it's far more likely that as the rate of technological advancement increases people will get used to it, even learn to expect it; one example could be the ever-increasing speed of computers and how people get annoyed that it's so hard to keep up, so they compensate with flexi-renting and other tech-sharing schemes that make keeping up easier.

By the time strong AI appears nobody will really care.
You can see it all the time: some new breakthrough is made in science and nobody cares, they've seen it all before, it's just business as usual. People seem to forget that these breakthroughs are cumulative.

In our lives we will see more change than has ever been seen in a lifetime before, as each consecutive generation before us did.

Just compare the early nineties to now...
What will another two decades bring? Then another two?
 

Citizen X

Active Member
Local time
Today 5:31 PM
Joined
May 27, 2009
Messages
115
-->
I don't doubt the future will bring strange sights and wonders, as long as we don't die off as a species. The one thing about the future we can be certain of is that it won't turn out the way we imagine it.

My "beef" with the Singularity is the way some very smart people seem to take it as if some form of religious rapture that will solve all of our carnal problems, even death itself, placing all of their trust and faith into the magical machinery of a future that still is uncertain. Ever read a book called "The Physics of Immortality", by Frank Typler? It's a long book where Tyler takes Pier de Chardin's Gnostic/Christian idea of "The Omega Point" and turns it into a technological singularity that runs every possible quantum state in a multiple level simulation near the end of Time itself, while the Universe dies during the Big Crunch. Upon discovering that the Universe doesn't show any signs of slowing its expansion down, Typler then went to update his version of the Omega point to accommodate to the present understanding of the Universe. Typler equates the Omega point with God (THE Christian god, no less) and the simulations as life after death as promised by Jesus, and he does so with a straight face. In any case, the point being that all these models, as wonderful and brilliant as they might be, I personally see them as nothing else but rationalized future day mysticism designed to give some solace to everyone's fear and anxiety towards death, no different to Heaven or Valhalla.

To be honest, we don't even know if the singularity will happen at all. There are limits to everything; maybe we as a technological species will ultimately reach one, and the way these limits present themselves could be as dull as physical constraints or as magnificent as total annihilation, as in Arthur C. Clarke's otherwise unremarkable 3001: The Final Odyssey, where it is theorized that many strange bursts of energy picked up by radio telescopes on Earth, coming from regions where "there's nothing", might be entire civilizations blowing themselves away after tapping into zero-point energy or some exotic technology going out of control.

Whatever the case might be, I do know one thing: the future will be a strange place, probably stranger than anything we have imagined thus far.
 

dents

Member
Local time
Today 12:31 PM
Joined
May 1, 2009
Messages
70
-->
Location
Sunnyvale, CA
One of its big-name proponents, the American inventor Raymond Kurzweil, established a regimen of taking 250 nutritional supplements per day in hopes of living long enough to reap the benefits of the Singularity

While I don't agree with most of what Kurzweil says, I'd just like to point out that the dude is 61 and he does not look older than 40. Last year one of my roommates was watching TV and Glenn Beck on CNN was interviewing RK, and I remember being shocked that he was 60 years old. Maybe it was the makeup people at CNN, I don't know. It's really not important, I just wanted to share.


As for the singularity in general, I'm a little more pessimistic about it than most. I share the view of Yudkowsky and others that it's very easy to fuck this up and have an intelligence go transhuman and promptly convert the entire solar system to paper clips or something stupid like that. In the AI lingo, that would be a "paperclipper" going "foom", by the way. With sufficient hardware (courtesy of Dr. Moore and ~20 years), it's going to be (relatively) very easy to create a process capable of improving itself. How quickly it can/will actually improve is a matter of no small debate, but at some point it's going to figure out quantum physics and optimize the hell out of whatever process it's supposed to be doing. "Sweet, there is a huge hunk of metal over there that I can turn into paperclips! What do you mean it was a school bus full of kids? You only programmed me to make paper clips..." All the while converting more school buses and anything else into paper clips.
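To make that worry concrete, here's a toy sketch in Python. Everything in it (the Resource class, the masses, the one-gram paperclip) is invented for illustration and has nothing to do with any real AI design; the point is just that an optimizer whose objective mentions only paperclips has no term for anything else, so a school bus is feedstock like everything else.

from dataclasses import dataclass
from typing import List

@dataclass
class Resource:
    name: str
    mass_kg: float
    is_school_bus: bool = False  # the objective below never looks at this

def objective(total_clips: int) -> int:
    """The only thing the agent was told to care about."""
    return total_clips

def run_agent(world: List[Resource]) -> int:
    clips = 0
    for item in world:
        # Assume ~1 g of metal per clip; nothing in the objective says "skip buses".
        clips += int(item.mass_kg * 1000)
    return objective(clips)

world = [Resource("scrap metal", 500.0),
         Resource("school bus", 11000.0, is_school_bus=True)]
print(run_agent(world))  # the bus contributes most of the "utility"

The point isn't that anyone would write an AI this way; it's that whatever the objective leaves out, the optimizer treats as free raw material.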

In addition to just yapping about the topic on forums, I'm actually looking for ways to do this right. I did my MS thesis on machine learning (finished about 3 weeks ago), which was the closest thing to AI at my school. Right now I'm looking for PhD programs where I can really dedicate myself to the task. Because what the hell is the point of writing web pages or accounting software if any misguided yahoo can wipe us all out in 20 years? I say that while doing the aforementioned things, because AI does not pay the bills quite yet, but you get my sentiment.
 

Citizen X

Active Member
Local time
Today 5:31 PM
Joined
May 27, 2009
Messages
115
-->
While I don't agree with most of what Kurzweil says, I'd just like to point out that the dude is 61 and he does not look older than 40. Last year one of my roommates was watching TV and Glenn Beck on CNN was interviewing RK, and I remember being shocked that he was 60 years old. Maybe it was the makeup people at CNN, I don't know. It's really not important, I just wanted to share.

I'm 27 years old but look like I'm 23. :D


As for the singularity in general, I'm a little more pessimistic about it than most. I share the view of Yudkowsky and others that it's very easy to fuck this up and have an intelligence go transhuman and promptly convert the entire solar system to paper clips or something stupid like that.

I have thought about similar things.

Not long ago I was watching one of George Carlin's numbers, a rant about environmentalists, and he mentioned, tongue in cheek, something that made me chuckle and think at the same time. It went something like this: "Maybe the only reason the Planet allowed us to spring up in the first place is because it wanted plastic for itself but didn't know how to make it; it needed us. We can be phased out now."

I know this borders on metaphysical musing, but stay with me, it's a good workout for the imagination: What if Nature itself is somewhat "aware" of itself and its limitations? Nature is not perfect; it is constantly changing and improving aspects of itself, but these changes take many years, whereas, say, digital technology improves at a much quicker pace. What if Nature wants a device to accelerate its own evolution but doesn't "know" how to make it?

I can imagine a future where the basic functional tenets of molecular nanotechnology have already been established and are working perfectly, a future where shortly after this breakthrough something happens that wipes out the entire human population, maybe some super virus. I can then picture a far future where Nature has transcended its own limitations and has integrated itself perfectly with high-end nanotechnology, resulting in a new, better "Nature".

Just an idea, I'm just a lowly frustrated architect, not a computer scientist or molecular biologist :D
 

dents

Member
Local time
Today 12:31 PM
Joined
May 1, 2009
Messages
70
-->
Location
Sunnyvale, CA
I suppose it's technically possible, but it seems needlessly complicated. http://lesswrong.com/lw/kr/an_alien_god/ is a good article about nature and just how aware it is. I promise it has nothing to do with aliens or religion.
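If you want the article's point in miniature, here's a tiny Python sketch of it (the target string, the alphabet and the loop limit are arbitrary choices of mine, not anything from the essay): a process with zero foresight or awareness, just mutation plus keep-if-not-worse selection, still "finds" a target it knows nothing about.

import random

random.seed(0)
TARGET = "nanotech"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(genome):
    # Number of positions matching the target; selection sees only this number.
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome):
    # Change one random letter to a random letter; no planning involved.
    i = random.randrange(len(genome))
    return genome[:i] + random.choice(ALPHABET) + genome[i + 1:]

genome = "a" * len(TARGET)
for generation in range(100000):
    child = mutate(genome)
    if fitness(child) >= fitness(genome):  # keep anything that is no worse
        genome = child
    if genome == TARGET:
        print("reached", repr(TARGET), "after", generation, "generations")
        break

Nothing in that loop "wants" anything; it blindly keeps whatever doesn't lose ground, which is roughly the sense in which nature is an optimizer without being aware of anything.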
 

Agent Intellect

Absurd Anti-hero.
Local time
Today 12:31 PM
Joined
Jul 28, 2008
Messages
4,113
-->
Location
Michigan
Kurzweil's predictions, while interesting and fun, seem a little too optimistic to me (a rough price-performance extrapolation sketch follows the list below).

2010
  • Supercomputers will have the same raw power as human brains (although not yet the equivalently flexible software).
  • Computers will disappear as distinct physical objects, meaning many will have nontraditional shapes and/or will be embedded in clothing and everyday objects.
  • Full-immersion audio-visual virtual reality will exist.
2010s
  • Computers become smaller and increasingly integrated into everyday life.
  • More and more computer devices will be used as miniature web servers, and more will have their resources pooled for computation.
  • High-quality broadband Internet access will become available almost everywhere.
  • Eyeglasses that beam images onto the users' retinas to produce virtual reality will be developed. They will also come with speakers or headphone attachments that will complete the experience with sounds. These eyeglasses will become a new medium for advertising as advertising will be wirelessly transmitted to them as one walks by various business establishments.
  • The VR glasses will also have built-in computers featuring "virtual assistant" programs that can help the user with various daily tasks. (see Augmented Reality)
  • Virtual assistants would be capable of multiple functions. One useful function would be real-time language translation in which words spoken in a foreign language would be translated into text that would appear as subtitles to a user wearing the glasses.
  • Cell phones will be built into clothing and will be able to project sounds directly into the ears of their users.
  • Advertisements will utilize a new technology whereby two ultrasonic beams can be targeted to intersect at a specific point, delivering a localized sound message that only a single person can hear. This was demonstrated in the films Minority Report and Back to the Future 2. See Sound from ultrasound.
2014
  • Automatic house cleaning robots will have become common.
2018
  • 10 Terabits (10^13 bits) of computer memory—roughly the equivalent of the memory space in a single human brain--will cost $1000.
2020
  • Personal computers will have the same processing power as human brains.
2020s
  • Computers less than 100 nm in size will be possible.
  • As one of their first practical applications, nanomachines will be used for medical purposes.
  • Highly advanced medical nanobots will perform detailed brainscans on live patients.
  • Accurate computer simulations of the entire human brain will exist due to these hyperaccurate brainscans, and the workings of the brain will be understood.
  • Nanobots capable of entering the bloodstream to "feed" cells and extract waste will exist (though not necessarily be in wide use) by the end of this decade. They will make the normal mode of human food consumption obsolete. Thus, humans who have injected these nanobots into their bloodstream will evolve from having a normal human metabolism and become humanoid cyborgs. Eventually, according to Kurzweil, a large percentage of humans will evolve by this process into cyborgs.
  • By the late 2020s, nanotech-based manufacturing will be in widespread use, radically altering the economy as all sorts of products can suddenly be produced for a fraction of their traditional-manufacture costs. The true cost of any product is now the amount it takes to download the design schematics.
  • Also by the later part of this decade, virtual reality will be so high-quality that it will be indistinguishable from reality.
  • The threat posed by genetically engineered pathogens permanently dissipates by the end of this decade as medical nanobots--infinitely more durable, intelligent and capable than any microorganism--become sufficiently advanced.
  • A computer will pass the Turing test by the last year of the decade (2029), meaning that it is a Strong AI and can think like a human (though the first A.I. is likely to be the equivalent of a kindergartner). This first A.I. is built around a computer simulation of a human brain, which was made possible by previous, nanotech-guided brainscanning.
2025
  • The most likely year for the debut of advanced nanotechnology.
  • Some military UAVs and land vehicles will be 100% computer-controlled.
2030s
  • Mind uploading becomes possible.
  • Nanomachines could be directly inserted into the brain and could interact with brain cells to totally control incoming and outgoing signals. As a result, truly full-immersion virtual reality could be generated without the need for any external equipment. Afferent nerve pathways could be blocked, totally canceling out the "real" world and leaving the user with only the desired virtual experience.
  • Brain nanobots could also elicit emotional responses from users.
  • Using brain nanobots, recorded or real-time brain transmissions of a person's daily life known as "experience beamers" will be available for other people to remotely experience. This is very similar to how the characters in Being John Malkovich were able to enter the mind of Malkovich and see the world through his eyes.
  • Recreational uses aside, nanomachines in peoples' brains will allow them to greatly expand their cognitive, memory and sensory capabilities, to directly interface with computers, and to "telepathically" communicate with other, similarly augmented humans via wireless networks.
  • The economy shifts, as a percentage of GDP, toward more meta services such as reality fabrication, mind enhancement and mental software. The share of GDP from simulated, beamed and augmented pornography will increase from 0.5% to over 10%, as production techniques reduce the physical production costs of real things.
  • The same nanotechnology should also allow people to alter the neural connections within their brains, changing the underlying basis for the person's intelligence, memories and personality.
2040s
  • Human body 3.0 (as Kurzweil calls it) comes into existence. It lacks a fixed, corporeal form and can alter its shape and external appearance at will via foglet-like nanotechnology. Organs are also replaced by superior cybernetic implants.
  • There will be social splitting into different levels of use of reality augmentation, from those who want to live a life of imagined harems to those who dedicate their thoughts to philosophical extension. Human society will drift apart in its focus, but with ever-increasing capabilities to make imagined things occur.
  • People spend most of their time in full-immersion virtual reality (Kurzweil has cited The Matrix as a good example of what the advanced virtual worlds will be like, without the dystopian twist).
  • Foglets are in use.
2045: The Singularity
  • $1000 buys a computer a billion times more intelligent than every human combined. This means that average and even low-end computers are hugely smarter than even highly intelligent, unenhanced humans.
  • The Singularity occurs as artificial intelligences surpass human beings as the smartest and most capable life forms on the Earth. Technological development is taken over by the machines, who can think, act and communicate so quickly that normal humans cannot even comprehend what is going on; thus the machines, acting in concert with those humans who have evolved into humanoid androids, achieve effective world domination. The machines enter into a "runaway reaction" of self-improvement cycles, with each new generation of A.I.s appearing faster and faster. From this point onwards, technological advancement is explosive, under the control of the machines, and thus cannot be accurately predicted.
  • The Singularity is an extremely disruptive, world-altering event that forever changes the course of human history. The extermination of humanity by violent machines is unlikely (though not impossible) because sharp distinctions between man and machine will no longer exist thanks to the existence of cybernetically enhanced humans and uploaded humans.
Post-2045: "Waking up" the Universe
  • The physical bottom limit to how small computer transistors can be shrunk is reached. From this moment onwards, computers can only be made more powerful if they are made larger in size.
  • Because of this, A.I.s convert more and more of the Earth's matter into engineered, computational substrate capable of supporting more A.I.s until the whole Earth is one gigantic computer (but some areas will remain set aside as nature preserves).
  • At this point, the only possible way to increase the intelligence of the machines any farther is to begin converting all of the matter in the universe into similar massive computers. A.I.s radiate out into space in all directions from the Earth, breaking down whole planets, moons and meteoroids and reassembling them into giant computers. This, in effect, "wakes up" the universe as all the inanimate "dumb" matter (rocks, dust, gases, etc.) is converted into structured matter capable of supporting life (albeit synthetic life).
  • Kurzweil predicts that machines might have the ability to make planet-sized computers by 2099, which underscores how enormously technology will advance after the Singularity.
  • The process of "waking up" the universe could be complete as early as 2199, or might take billions of years depending on whether or not machines could figure out a way to circumvent the speed of light for the purposes of space travel.
  • With the entire universe made into a giant, highly efficient supercomputer, A.I./human hybrids (so integrated that, in truth it is a new category of "life") would have both supreme intelligence and physical control over the universe. Kurzweil suggests that this would open up all sorts of new possibilities, including abrogation of the laws of Physics, interdimensional travel, and a possible infinite extension of existence (true immortality).
Some indeterminate point within a few decades from now
  • Space technology becomes advanced enough to provide the Earth permanent protection from the threat of asteroid impacts.
  • The antitechnology "Luddite" movement will grow increasingly vocal and possibly resort to violence, possibly a new World War, as these people become enraged over the emergence of new technologies that threaten traditional attitudes regarding the nature of human life (radical life extension, genetic engineering, cybernetics) and the supremacy of mankind (artificial intelligence). Though the Luddites might, at best, succeed in delaying the Singularity, the march of technology is irresistible and they will inevitably fail in keeping the world frozen at a fixed level of development. However, some nature preserves may be set aside for them to live in.
  • The emergence of distributed energy grids and full-immersion virtual reality will, when combined with high bandwidth Internet, enable the ultimate in telecommuting. This, in turn, will make cities obsolete since workers will no longer need to be located near their workplaces. The decentralization of the population will make societies less vulnerable to terrorist and military attacks.
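To show where numbers like the 2018 memory figure come from, here's the kind of back-of-the-envelope extrapolation behind them, as a minimal Python sketch. The starting price and the doubling time are my own rough assumptions, not figures from Kurzweil or from this thread:

# Exponential extrapolation of memory price-performance.
# Assumptions (mine, for illustration): ~$15/GB of memory in 2009 and a
# price-performance doubling time of about two years.
START_YEAR = 2009
DOLLARS_PER_GB_START = 15.0
DOUBLING_TIME_YEARS = 2.0
BITS_PER_GB = 8 * 1024**3

def bits_per_1000_dollars(year):
    """Bits of memory $1000 buys in `year`, under the assumptions above."""
    doublings = (year - START_YEAR) / DOUBLING_TIME_YEARS
    gb_per_dollar = (1.0 / DOLLARS_PER_GB_START) * 2 ** doublings
    return 1000 * gb_per_dollar * BITS_PER_GB

for year in (2009, 2014, 2018):
    print(year, "~%.1e bits per $1000" % bits_per_1000_dollars(year))

Under those guesses you land around 10^13 bits for $1000 by 2018, which is the shape of reasoning behind most of the list: pick a current figure, assume the doubling continues, and read off the year. The question is whether the doubling actually continues.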
 

Marino

Redshirt
Local time
Today 12:31 PM
Joined
Apr 21, 2009
Messages
14
-->
Mods, feel free to delete this if it counts as "advertisement":

I recently created a Transhumanist forum at TranshumanistForums.com and it is affiliated with AcceleratingFuture.com.

If you are interested I would be glad to have all Transhumanists here as members.

Just so you know, we are just JUST starting out and my co-admin and I are trying to think of ways to promote the forum.

Sign up if you want to discuss Transhumanism and be with fellow Extropians/Singulitarians/Transhumanists :p

Thanks. :)

--
Again, sorry if this violates the "advertisement" rule, I just thought it is relevant to the thread. :D
 

Artifice Orisit

Guest
Regarding Ray Kurzweil's predictions,
Supercomputers will have the same raw power as human brains (although not yet the equivalently flexible software).
That would be a supercomputer in a computational R&D lab somewhere; such things are often so huge they fill an entire building and are equipped with the best technology available. So in my opinion it is a bold claim to make, but it shouldn't be too far from the truth, perhaps a year or two off.
Computers will disappear as distinct physical objects, meaning many will have non-traditional shapes and/or will be embedded in clothing and everyday objects.
Miniature expendable microchips have been around for a while, but they're very simple.
Full-immersion audio-visual virtual reality will exist.
Audio & visual isn't exactly what I'd call "full-immersion" but his point is still valid and the technology exists already, although it's quite rudimentary.

As his predictions progress further into the future his accuracy inevitably diminishes; however, as far as I can tell they're accurate enough to make his point, which is that over the next few decades this world will undergo an unprecedented level of technological change.
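To put the "raw power" comparison above in rough numbers, here's a back-of-the-envelope sketch in Python. Both figures are common ballpark assumptions, not measurements: a Kurzweil-style estimate of ~10^16 operations per second for the brain, and roughly a couple of petaflops for a top supercomputer circa 2009/2010.

import math

# Ballpark assumptions, not measurements.
BRAIN_OPS_PER_SEC = 1e16       # rough Kurzweil-style estimate of brain "raw power"
SUPERCOMPUTER_FLOPS = 2e15     # roughly a top machine circa 2009/2010
DOUBLING_TIME_YEARS = 1.5      # assumed supercomputer performance doubling time

gap = BRAIN_OPS_PER_SEC / SUPERCOMPUTER_FLOPS
years_to_close = math.log2(gap) * DOUBLING_TIME_YEARS
print("gap: ~%.0fx, closed in roughly %.1f years at that doubling rate"
      % (gap, years_to_close))

With those particular guesses the gap is only a handful of doublings, so whether it closes in one year or five depends entirely on which estimates you start from; the claim is bold, but not wildly so.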

The antitechnology "Luddite" movement will grow increasingly vocal and possibly resort to violence, possibly a new World War, as these people become enraged over the emergence of new technologies that threaten traditional attitudes regarding the nature of human life (radical life extension, genetic engineering, cybernetics) and the supremacy of mankind (artificial intelligence). Though the Luddites might, at best, succeed in delaying the Singularity, the march of technology is irresistible and they will inevitably fail in keeping the world frozen at a fixed level of development. However, some nature preserves may be set aside for them to live in.
I'm pretty sure the current lingo for these Luddite groups is "Terran".
These Terrans will supposedly come from developing nations and extreme religious groups, and will unfortunately outnumber the Transhumanists by at least two or three to one. This is unfortunate because most will be from underdeveloped countries which can barely support them as it is, and so, driven by the need for resources (not obviously of course, but poverty breeds conflict), they will resort to extremist tactics against the developed Transhumanist nations. The recent conflict between the US and several Middle Eastern countries is a good example of what is to come; it was more of a pruning operation than a war.

My predictions:
1. Things are going to get worse before they get better.
2. One man's Judgement Day is another man's Singularity.
 

EloquentBohemian

MysticDragon
Local time
Today 12:31 PM
Joined
Oct 4, 2008
Messages
1,386
-->
Location
Ottawa, Canada
Post-2045: "Waking up" the Universe
  • The physical bottom limit to how small computer transistors can be shrunk is reached. From this moment onwards, computers can only be made more powerful if they are made larger in size.
  • Because of this, A.I.s convert more and more of the Earth's matter into engineered, computational substrate capable of supporting more A.I.s until the whole Earth is one gigantic computer (but some areas will remain set aside as nature preserves).
  • At this point, the only possible way to increase the intelligence of the machines any farther is to begin converting all of the matter in the universe into similar massive computers. A.I.s radiate out into space in all directions from the Earth, breaking down whole planets, moons and meteoroids and reassembling them into giant computers. This, in effect, "wakes up" the universe as all the inanimate "dumb" matter (rocks, dust, gases, etc.) is converted into structured matter capable of supporting life (albeit synthetic life).
  • Kurzweil predicts that machines might have the ability to make planet-sized computers by 2099, which underscores how enormously technology will advance after the Singularity.
  • The process of "waking up" the universe could be complete as early as 2199, or might take billions of years depending on whether or not machines could figure out a way to circumvent the speed of light for the purposes of space travel.
  • With the entire universe made into a giant, highly efficient supercomputer, A.I./human hybrids (so integrated that, in truth it is a new category of "life") would have both supreme intelligence and physical control over the universe. Kurzweil suggests that this would open up all sorts of new possibilities, including abrogation of the laws of Physics, interdimensional travel, and a possible infinite extension of existence (true immortality).
What bothers me about this is its blatantly anthropocentric thinking. Somehow, Kurzweil has neglected to consider that there is a high probability of other sentient beings in the universe who may be far more advanced than us and unwilling to submit to this concept of an entire universe made into a giant, highly efficient supercomputer.
This also assumes that planets, asteroids, suns, etc. - indeed the whole universe - are ours for the taking.
I have no problem with the technological enhancement and life-extension possibilities of our scientific and technological advancements - even to the extent of immortality - but acquiescing to a homogeneous mass singularity clashes with my sense of autonomous individuality, which is my desired goal.
This view strikes me as an updated cybernetic model of a mechanistic universe. It does not take into account human will or individuality, nor does it address the question of what the animating factor inherent in all life, sentient or not, actually is. It assumes that this is the best and most appropriate path for human evolution. It reads as a cybernetic/technological communism, supremacism or social Darwinism.
 

Artifice Orisit

Guest
This also assumes that planets, asteroids, suns, etc. - indeed the whole universe - are ours for the taking.
The taking may be difficult, but yeah it's all there, what's the problem?
C'mon, be true to what you are :D a Human.
 

walfin

Democrazy
Local time
Tomorrow 12:31 AM
Joined
Mar 3, 2008
Messages
2,436
-->
Location
/dev/null
Survival and reproduction are the reason why our thinking and feeling brains evolved in the first place.

What's wrong with being anthropocentric? It is our mission to ensure the survival and reproduction of our species.

That said, I hope the Singularity never comes and that the rate of increase in human intelligence forever remains higher than the rate of increase in robotic intelligence.
 