Weird to think about in the context of driverless vehicles, especially where the AI's reasoning can't be made explicit. How could you distinguish a pragmatic/amoral sequence of 'decisions' in an ethical dilemma like that from an alien morality? Need a Turing test for moral agency or something.

Today I changed my previous opinion about the trolley problem after listening to a lecture where it was discussed. The same lecture also sparked an interest in philosophy that had previously been quenched by picking up books by earlier philosophers and having to force one's way through obtuse language, inaccurate observations, or stupid comparisons.
Anyway, I went from pragmatically wanting to kill the 5 people to not doing anything. How is that even possible? Though I guess it should be said that I solidly find such hypotheticals useful for developing thought, but not so much for principles.
the painful lessons aren't so easy to forget

Some people learn better from pain (perhaps most people?). I think when kids are pampered too much and never experience the pain of failing or fucking up, they don't acquire self-regulatory and self-motivating capabilities. So when they do eventually fuck up (like everyone does at some point), they just become depressed and demotivated.
I think you can reward people for working hard without taking away the experience of pain, though. For instance, a group of friends and I practiced solos for about half a year to take to a contest. Our director would listen to us each week and give us feedback, but he would give more of a verbal "reward" to those who had made a lot of progress that week than to those who sounded good but hadn't really worked on their solo.
What if some people are invested in protecting a narrative and therefore won't ever admit the truth, even if you prove it beyond reasonable doubt?
an area i am particularly interested in getting greater insight into is in relation to our engagement with technology

You say you're an INFJ. What brings you to this forum?
In an environment like a forum i think that even a debate with an unreasonable person can still be productive because all the people reading the debate get to hear the arguments so that even if you don't change the mind of the person who has an agenda you may still bring a lot of interesting information and perspectives to the public discourse

Well, here's what I mean by "consensus". With a reasonable person (I make a point of not talking in-depth with unreasonable people because it's frustrating), I think I can always either convince them with evidence or find some unprovable difference in belief that's causing us to disagree.
yeah i think you have summarised it well

For instance, I'm an atheist and a number of the people I know are religious, so that will inevitably cause differences in opinion because of our immutable beliefs (well, more mutable in my case because if I die and end up burning down in the inferno I'll obviously have to change my mind). As another example, I tend to take a more cynical view towards human nature, but you can find evidence supporting both the goodness and badness of humans, so you can't really prove that view one way or the other until we make more advances in psychology.
But that's an interesting reason to come here. I think most INTPs honestly aren't really thinking about technology in terms of its impact on people's lives or its ethical implications. They're generally looking at the scientific underpinnings and mechanisms first. Where an INFJ might say, "we must destroy death rays because of the hazard they pose to us all", an INTP would be more likely to say "woah! Tesla wanted to build a death ray and now we finally have one; isn't that awesome?! How does it work?"
thankyou!

Welcome to the forum, in any case.