
n-back training?

Antediluvian

I read the University of Michigan study (I believe there was a follow-up study as well) concerning n-back training and increases in Gf (general fluid intelligence). I wonder, has anyone here employed these exercises and seen a substantial increase in IQ? Of course, the only test used was the RAPM (Raven's Advanced Progressive Matrices), which consists of matrix puzzles. But that is supposed to be an excellent measure of fluid intelligence.
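
For anyone unfamiliar with the task itself, here's a minimal sketch of the logic behind a single n-back stream; the stimulus set, trial count, and console interface are just placeholder assumptions. The dual version used in the study runs two such streams (one auditory, one visual) simultaneously.

```python
import random

def run_n_back(n=2, trials=20, stimuli="CHKLQRST"):
    """One stream of an n-back task, played in the console.

    A trial is a 'target' when the current stimulus matches the
    one presented n positions earlier. The dual n-back runs two
    independent streams (auditory + visual) at once, which is
    what makes it so taxing on working memory.
    """
    sequence = [random.choice(stimuli) for _ in range(trials)]
    hits, targets = 0, 0
    for i, item in enumerate(sequence):
        is_target = i >= n and item == sequence[i - n]
        targets += is_target
        answer = input(f"stimulus: {item}   match? [y/N] ")
        if is_target and answer.strip().lower() == "y":
            hits += 1
    print(f"hit {hits} of {targets} targets at n = {n}")

if __name__ == "__main__":
    run_n_back()
```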
 

snafupants

There may have been some methodological flaws and confounds in that particular study. They strangely reduced the administration time for the Raven's and perhaps left the door open for practice effects. There is certainly a strong link between working memory and fluid intelligence, though, largely because they share similar neuronal substrates, and both have been increasingly tied to g. I find this research quite exciting because it effectively moves testing from a somewhat wispy subject toward claims defensible under neuroscience: observable changes in the synapses, dendrites, and blood flow within the brain.

The great overhaul of the twentieth century was factor analysis, and this may be the next one. There are definitely other frontiers that can be tapped with the same basic motivations and forays. Crystallized intelligence might actually be the hardest to lift artificially and quickly because it's so contingent on education and a gradual assimilation of knowledge. Your best bet might be to elevate the processes that augment crystallized intelligence, like working memory and long-term storage and retrieval, and then force the kids to study. Maybe these guys are actually on the right track.
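
To make the factor-analysis point concrete, here's a toy sketch: given an invented correlation matrix for four hypothetical subtests showing the usual positive manifold, the first principal factor is a crude stand-in for g, and its loadings show how strongly each subtest taps it. All the numbers are made up purely for illustration.

```python
import numpy as np

# Invented correlations among four hypothetical subtests
# (vocabulary, matrices, digit span, arithmetic) -- the
# classic "positive manifold" that factor analysis distills.
R = np.array([
    [1.00, 0.55, 0.45, 0.50],
    [0.55, 1.00, 0.50, 0.55],
    [0.45, 0.50, 1.00, 0.45],
    [0.50, 0.55, 0.45, 1.00],
])

# Eigendecomposition: the first principal component of R is a
# crude proxy for the g factor; a loading is the eigenvector
# entry scaled by the square root of its eigenvalue.
eigvals, eigvecs = np.linalg.eigh(R)          # ascending order
g_loadings = np.abs(eigvecs[:, -1]) * np.sqrt(eigvals[-1])
for name, loading in zip(["vocab", "matrices", "digits", "arith"], g_loadings):
    print(f"{name:>8}: g loading = {loading:.2f}")
```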
 

Antediluvian

Yeah, I was considering the practice effect as well; it seems waiting a year is standard practice for a re-test. I'm not entirely sure how large the gains were, but in an interview with the lead researcher, he stated that the gains kept building on themselves as long as the test subjects continued to train. Again, the practice effect somewhat applies here, though to my knowledge extreme gains aren't expected unless one is re-tested within a month or a few months.

Interesting write-up on the criticisms of the study, though I'm not saying they've been wholly and legitimately dispelled; I'm no psychologist and am untrained in evaluating those results.

A 2008 research paper claimed that practicing a dual n-back task can increase fluid intelligence (Gf), as measured in several different standard tests.[3] This resulted in some attention from popular media, including an article in Wired.[4] However, a subsequent criticism of the paper's methodology suggested questionable validity and a lack of uniformity in the tests used to evaluate the control and test groups.[5] For example, the progressive nature of Raven's Advanced Progressive Matrices (APM) test may have been compromised by modifications of time restrictions (10 minutes were allowed to complete a normally 45-minute test). The authors of the original paper later addressed this criticism by citing research indicating that scores in timed administrations of the APM are predictive of scores in untimed administrations.[6]
 

snafupants

Yeah, I think the deal was the more exposure to the task, the more neuronal impact and, therefore, the more pronounced the effect. Essentially dose-dependent results. About that one-year timeline: Wechsler argued for three years for an omnibus, multifaceted test. The simpler and more linear a test, as a rule, the easier it would theoretically be to game. Later you mention extreme gains, but the issue, without reviewing the data, was less than one standard deviation, and I would argue some of that could be accounted for by the elapsed time and previous exposure to the test. I still find the study a crucial bridge between psychology and neuroscience, and good for both fields.
 

Antediluvian

I did state extreme gains; I just wasn't entirely sure how far up those gains went. I agree that exposure to the test and not enough time passing between tests most likely accounted for the increase in scores.

As for Wechsler arguing three years, true, but many psychologists today claim waiting one year is standard practice; would you agree with that? As far as I know, if a large increase is seen within a year's time (unlikely), it shouldn't be attributable to practice effects alone (although they would probably account for several points, perhaps more).

There was also a separate study done with test subjects aged 12-16, which re-tested them four years later. One individual saw an increase from 100 to 127, and one of the tentative conclusions posited was that education possibly played a role, so your suggestion of sitting the kids down to study may be effective. I'll see if I can dig up that article.
 

snafupants

That sounds fine because the gains aren't that great - and they're downright negligible on crystallized intelligence tests. I found that surprising at first, considering how little effort it would take to look up and encode a few vocabulary items - but then I remembered how lazy folks tend to be. The three-year figure shares an odd synchronicity with the IDEA guidelines regarding testing in the schools. In other words, clinical practice might be a different beast, and the three-year rule might be more bureaucratic formality than neuroscience-informed finding. Anyway, the practice effects even after a few months are tiny - the upper extreme being around four additional points for most age groups. I would worry more about fluid intelligence tasks because they're basically, in theory, attempting to gauge conceptualization given novel stimuli. If you know the task beforehand, that aim is crippled.
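
To put those point figures on the same yardstick: IQ composites are conventionally normed to a mean of 100 and a standard deviation of 15, so converting a practice gain into SD units is one line of arithmetic (the gains below are just example values):

```python
# IQ composites are normed to mean 100, SD 15; express a raw
# practice-effect gain in standard-deviation units.
SD = 15
for gain in (4, 8, 15):
    print(f"{gain:>2}-point gain = {gain / SD:.2f} SD")
# -> a 4-point gain is roughly a quarter of a standard
#    deviation, comfortably "less than one SD" as noted above.
```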
 

Antediluvian

Out of curiosity then, what are the average gains for Gf tests (or segments of tests)? Let's say, hypothetically, that someone waited a year or close to that timeframe and scored much, much higher; roughly how many points would you say could be traced back to practice effects?
 

snafupants

I have some test manuals and recent psychology books; when I go upstairs I'll dig out that data. To address your message's second portion, the standard deviation is also pretty small, which suggests blowing the ceiling off the test the second time around is unlikely. I'll look up the data for fluid intelligence, though, and anything else that pops out at me while I'm there. I could always scour the internet, but I'd rather go to the source.
 

Antediluvian

Fair enough, no rush. And I agree blowing the ceiling off during a subsequent testing session is highly unlikely for most people, but there are statistical freaks out there, and it was meant as a pure hypothetical. However, I'm not sure what the largest somewhat-legitimate increase in score would be, whether on Gc/Gf tests or on full-fledged batteries such as the Wechsler and Stanford-Binet.
 

snafupants

Unfortunately that information is not automatically revealed by means and standard deviations, because the range is still pretty much unknown. Sometimes psychometricians even throw out the outliers because they distort the data too much.
 

Antediluvian

So, if a subsequent test score is too anomalous, it is thrown out (potentially)? Would they score that person with a different test, then?
 

snafupants

Whenever I have read reports dealing with huge amounts of data, they discard big statistical aberrations - because those distort measures of distribution and variance - while readily admitting to what they've done. They'll straight up stipulate the number and degree of aberration. As long as they disclose their procedures, there's nothing too ethically unreasonable about this practice. I have not seen this done too often with smaller data sets and test manuals, though. The standard deviations for subtests are often less extreme than the standard deviations for the composite measure. Additionally, with smaller data sets you're often dealing with a fairly homogeneous population (e.g., alcoholics).
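
As a concrete (entirely invented) illustration of why that disclosure matters: one aberrant retest gain can inflate the variance estimate severalfold, and a simple trim-and-report rule restores a sane picture. The 2-SD cutoff here is just one common convention, not a universal standard.

```python
import statistics

# Invented retest gains in points; one aberration included.
gains = [2, 3, 1, 4, 2, 5, 3, 2, 4, 27]

def summarize(data, label):
    print(f"{label}: mean = {statistics.mean(data):.1f}, "
          f"sd = {statistics.stdev(data):.1f}, n = {len(data)}")

summarize(gains, "with outlier")

# Trim anything more than 2 SD from the mean, and -- crucially
# -- report exactly how many scores were discarded.
mu, sd = statistics.mean(gains), statistics.stdev(gains)
trimmed = [g for g in gains if abs(g - mu) <= 2 * sd]
summarize(trimmed, "after trim  ")
print(f"discarded {len(gains) - len(trimmed)} aberrant score(s)")
```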
 

Antediluvian

If I understand correctly, such aberrations are thrown out statistically so as to not skew the balance, but that person would still receive credit for their achieved score?
 

snafupants

That person would absolutely still receive credit, as long as the outlying score was deemed the result of veritable exceptionality in the individual and not negligence on the test's or administrator's part. You wouldn't punish some gifted kid for scoring well. These are really two separate issues. Often in a meta-analysis, or anything that looks at huge chunks of data, it's misleading to include a few outliers that paint an inaccurate picture of the distribution and variance.
 

snafupants

According to research by Alan Kaufman on test-retest data from the WAIS-III, my previous hypothesis seems to have been validated. The purest measure of fluid intelligence on that battery (Matrix Reasoning) showed one of the lower stability coefficients. Matrix Reasoning (.77) sat between the Picture Arrangement (.69) and Picture Completion (.79) subtests on the overarching performance index. Two high stability coefficients came from Vocabulary (.91) and Information (.94) on the larger verbal index. The Digit Span and Letter-Number Sequencing subtests showed high mean stability coefficients (~.85), and that makes intuitive sense: memory tests are almost impervious to learning, at least compared to some rudimentary fluid intelligence subtests, and memory gradually declines with age but is relatively stable from month to month barring some global neuronal disaster or traumatic brain injury.

Basically, the results support the notion that fluid intelligence subtests are more vulnerable to the effects of learning because they rely on ignorance of the task beforehand; fluid intelligence, after all, is one's ability to reason about and manipulate novel information. Verbal subtests are less hurt by practice effects because they rely on gradual learning and education, whereas fluid intelligence purports to measure raw mental horsepower, yet its results are seriously confounded by foreknowledge of the task: a big part of fluid intelligence testing is inductive reasoning and figuring out the rule governing the activity in the first place. I would liken it to a softball/baseball batter knowing beforehand which pitch would be thrown: s/he would perhaps still need skill to make contact, but s/he holds an unfair advantage over the competition, to be sure.
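
For anyone unfamiliar with the term: a stability coefficient is just the Pearson correlation between the same examinees' scores on two administrations. A minimal sketch with invented scaled scores:

```python
import numpy as np

# Invented scaled scores for ten examinees, tested twice.
first  = [10, 12,  8, 14,  9, 11, 13,  7, 10, 12]
second = [11, 13,  9, 14, 10, 12, 12,  8, 11, 13]

# The stability coefficient is the Pearson r between the two
# administrations; figures like .77 (Matrix Reasoning) or .94
# (Information) come from this computation on retest samples.
r = np.corrcoef(first, second)[0, 1]
print(f"stability coefficient r = {r:.2f}")
```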
 

Antediluvian

If I understood this correctly, Matrix Reasoning provides a less stable score upon re-testing compared to the other subtests you mentioned? I see what you mean: tests of fluid intelligence rely on ignorance of the process beforehand. I suppose I'm wondering, then, how a psychologist goes about validating a massive jump in composite score that was due to increased performance on fluid reasoning tests. To provide some context, I know of someone who went from marginally sub-100 to the near-genius range overall. I've heard of other examples, such as someone going from a 137 to a 170 after being put on ADHD medication. These people are surely aberrations, but how is their IQ "validated," as it were? Simply by taking a different test?

As an aside, I also wonder how the wide availability of spatial IQ tests online has affected the outcomes of legitimate tests of fluid intelligence.
 

Antediluvian

I suppose your overarching point was that re-testing fluid intelligence is complicated by the nature of the tests, which, as you say, rely on reasoning through novel stimuli. If I'm reading it correctly, Picture Arrangement has the worst stability at .69; feel free to correct me here. I overlooked that at first.
 