I know what you meant the example to be for; I was using your example as a prototype for my own. Your example relies on a researcher using a biased sample to generalise to a wider population, whereas mine involved a researcher using the same population to answer a question more appropriate to that sample. What we would be sampling here is people who:
- believe themselves to be INTP
- use the INTP forum
- respond to the poll question
These all present potential confounding biases, but if we understand these limitations and adjust our interpretations accordingly, I really don't see how this is a problem. The hypothesis is falsifiable; the OP could walk away with results indicating that no such pattern of IQ exists. I guess what I'm trying to understand is: why be critical of results before we have them? The OP implied they would be conservative in interpreting the results by admitting from the get-go that the approach was unscientific.
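For what it's worth, the size of that last selection effect is easy to simulate. Here's a minimal sketch (Python; the response model and every number in it are pure assumptions for illustration) of how a score-dependent response rate alone could inflate the poll's mean:

```python
import random

random.seed(0)

MEAN, SD = 100, 15
population = [random.gauss(MEAN, SD) for _ in range(100_000)]

# Assumed response model: higher scorers are more likely to answer the poll.
# The exact probabilities are invented purely for illustration.
def responds(iq: float) -> bool:
    p = min(1.0, max(0.0, 0.05 + 0.005 * (iq - MEAN)))
    return random.random() < p

respondents = [iq for iq in population if responds(iq)]

print(f"true mean:       {sum(population) / len(population):.1f}")
print(f"respondent mean: {sum(respondents) / len(respondents):.1f}")
```

Under those made-up assumptions the respondent mean comes out well above the true mean, which is exactly the kind of limitation we'd have to fold into our interpretation.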
I do think a gigantic confound exists in that we still don't have a 'standard' standard deviation to go by. As you say, people might just report scores from random IQ tests off the net, or even from tests with a different SD from the one specified.
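To make that concrete, the same underlying performance maps to very different reported numbers depending on the test's SD. A minimal sketch of the conversion (Python; the example scores are made up, though the SDs are the usual ones quoted for Wechsler, older Stanford-Binet, and Cattell scales):

```python
# Convert an IQ score between tests that use different standard deviations.
# Mainstream tests centre on a mean of 100; what varies is the SD
# (e.g. 15 for Wechsler, 16 for older Stanford-Binet, 24 for Cattell).

def convert_iq(score: float, sd_from: float, sd_to: float, mean: float = 100.0) -> float:
    """Map a score to the equivalent score on a test with a different SD."""
    z = (score - mean) / sd_from   # how many SDs above/below the mean
    return mean + z * sd_to

# The same z = +2 performance reported three ways:
print(convert_iq(130, sd_from=15, sd_to=16))  # 132.0
print(convert_iq(130, sd_from=15, sd_to=24))  # 148.0
```

So a "130" and a "148" could describe the same person on different tests, which is why mixing unlabelled scores in one poll muddies everything.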
By 'misinterpreting the results' I meant over-estimating how far the results of this poll generalise to the INTP population at large. That would be premature at best.
It is interesting to know the IQ of the people I talk to because intelligence is interesting. I would like to know the size of the discrepancy between how intelligent I perceive someone to be and their IQ score. I generally surround myself with people I deem intelligent, not as a selection criterion but as a perceived pattern in my own behaviour. I am currently in furious debate with several people about the value of IQ, intelligence, personality, and the MBTI in particular. This sort of thread doesn't only give questionable poll results either; it is a source of discussion on a general topic I find fascinating.