onesteptwostep
Junior Hegelian
I just had a thought about AI. AI usually presents an aggregated form of information, and usually that information is from the past, or has already been processed so that it shows up as metadata the AI can draw on.
My thought is this: new ideas don't come from metadata, but from something fresh and insightful, something only a human can pinpoint and realize. We can't train an AI to search for important insights; we can only benchmark it against certain qualitative criteria. In other words, we can only benchmark certain data because we are already hypothesizing something and waiting for the data to match that hypothesis.
I think in matters of government policy, such as health policy, AI can be detrimental, because it can mask empirical scientific data with public opinion. If there is enough drag from public opinion, actual, good health policy might be discarded to satisfy public sentiment.
Now, of course, I don't think government policymakers would use AI; they would use research that follows principled guidelines and peer-reviewed discipline. But I think AI can get a grip on public opinion in the future, as more and more people are literally connected to the internet from the moment they are born.
I think people's tendency to post only their discontent will skew a lot of the perceptions an AI picks up as it browses the net. We don't necessarily post that we're content, or that we're happy. I think most people post either because they want attention or because they're angry about something and need a space to vent. That, I think, will skew a lot of AI toward regurgitating information that isn't progressive or forward-thinking, but rather static, cynical, or institutional.
It's a half-baked thought, but yeah.