Auburn
Luftschloss Schöpfer
> 1. Your data will come up with groupings. But I don't think those groups will necessarily be natural. It is up to the observers (such as you) to decide how many groups you would like, depending on how "alike" the data is. To one person the data may look like 10,000 patterns; to another, only 20 or 16. It's partly up to the observer to decide what number is useful and handy.
Right. The threshold percentile of compatibility can be moved up or down, and the data would spit out a different number of Motus signatures. But that's not the same as saying the data's conclusions are arbitrary.
The data can be viewed at different thresholds, just like how you can change the contrast of a picture in Photoshop, but the picture it generates is still real. Another way to look at it is like an elevation map of a terrain: as you adjust the altitude, the picture you get is different, but all of those individual altitude marks are horizontal slices of the same real landscape.
The data could be displayed three-dimensionally, like a geographical map, to see what landscape it forms and how many "peaks" exist at all the different elevations (thresholds). So the formation of Motus types *does* emerge entirely naturally. There are just different ways of looking at it, all of which are part of the same whole picture.
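The threshold idea can be sketched in code. This is a toy illustration with invented numbers and hypothetical names, not the project's actual data or method: the same similarity matrix yields a different number of groups depending on where the compatibility cutoff is set, yet every count is a slice of the same underlying landscape.

```python
def cluster_count(similarity, threshold):
    """Union-find sketch: link any two items whose similarity meets
    the threshold, then count the resulting connected groups."""
    n = len(similarity)
    parent = list(range(n))

    def find(i):
        # Follow parent pointers to the root, halving the path as we go.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if similarity[i][j] >= threshold:
                parent[find(i)] = find(j)

    return len({find(i) for i in range(n)})

# Toy 4-person similarity matrix (symmetric, 1.0 on the diagonal).
S = [
    [1.0, 0.9, 0.2, 0.1],
    [0.9, 1.0, 0.3, 0.2],
    [0.2, 0.3, 1.0, 0.8],
    [0.1, 0.2, 0.8, 1.0],
]

for t in (0.05, 0.5, 0.95):
    print(t, cluster_count(S, t))  # 0.05 -> 1 group, 0.5 -> 2, 0.95 -> 4
```

Raising the cutoff splits the same four people into more, smaller groups; lowering it merges them. The data never changes, only the elevation at which you slice it.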
We'll cross that bridge when we get to it. Selecting the right EEG equipment and deciding on a functional setup (and things like interview questions/activities) will all be discussed extensively before we begin that stage, as well as reviewing the research that already exists on what brain scans can tell us and which signals mean what.

> 2. Apparently, to date, brain scans are too unrefined to draw refined conclusions from. Not so the converse. Run a test and see what brain scan you get to verify the setup. Great test for verification.

....I can't even begin to address why this isn't even a question. o.o;

> 3. I posted this not realizing it was in the Forum Lounge. It belongs under MBTI & Typology. I wonder if you saw it? It raises the question of whether a type belongs to a person or to a "mood" of a person. I may have missed it, but do you intend to run your tests on the same person over a period of time to see if they test the same? Set up different conditions, or wait until the person is in a different state, and see if that changes the visuals. My guess is it sure would. Consider: happy/sad, laid back/eager, needy/satisfied ... things like that.
It *is* representative of the person. Just not the whole person in every possible situation. Which is fine, because if we wanted to catalog every possible situation we'd literally have to record their whole life! O:

> I just replayed your video and ask this question: Are you free of assumptions? If your data is derived from a snapshot of a person, that snapshot need not be representative of the person. That is an assumption. To say people are the same under different circumstances would be a false assumption. (It's motion. Motion is one of the six tools for understanding.)
We also have to think practically here. In science, it's standard to design studies with as few uncontrolled variables as possible. And it's just not realistic to repeat this whole study when a person is mad, repeat it again when each person is sad, and again when they're happy, etc. We'd be at this for centuries.
We can do it once when they're relatively neutral in mood, and honor that limitation of the study. That doesn't mean the data that emerges is tainted. Partial truth is different from lies.
Personally.... I really don't think people's moods change their signals all that much. And if there really is a noteworthy parallel, it would have to be stronger and more consistent than simple mood. If mood alone can dismantle the correlations, then they're pretty weak correlations. But that doesn't seem to be the case so far.
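That reasoning can be sketched as a toy check (invented numbers, hypothetical variable names, not real EEG data): if a signature is robust, a person measured in two different moods should still correlate more strongly with themselves than with another person.

```python
def pearson(x, y):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Invented "signal" vectors for illustration only.
person_a_neutral = [1.0, 2.0, 3.0, 2.5, 1.5]
person_a_sad     = [1.1, 2.2, 2.9, 2.4, 1.6]   # same person, mood shifted
person_b_neutral = [3.0, 1.0, 1.2, 3.1, 2.8]   # a different person

within  = pearson(person_a_neutral, person_a_sad)
between = pearson(person_a_neutral, person_b_neutral)
print(within > between)  # a stable signature survives the mood change
```

If mood swamped the signature, the within-person correlation would drop toward the between-person level; the argument above is that a correlation worth keeping should clear that bar.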
