Commenter pumpkinhead asked some questions, which I've posted below in red (with my answers in black).

1) What is the correlation of a childhood IQ test (say the WISC) with an adult IQ test (say the WAIS)? Ages 12 vs. 18+, let's say?

Below are all the studies I’ve found on the long-term stability of Wechsler IQ. The median correlation is 0.84.

 
| Approximate age at initial testing | Age at retesting | Correlation | Study | Sample size |
|---|---|---|---|---|
| 2 | 9 | 0.56 | Humphreys (1989) | ? |
| 2 | 15 | 0.78 | Humphreys (1989) | ? |
| 9 | 15 | 0.47 | Humphreys (1989) | ? |
| 9.5 | 23.5 | 0.89 | Mortensen et al. (2003) | 26 |
| 29.7 | 41.6 | 0.73 | Kangas & Bradway (1971) | 48 |
| 50 | 60 | 0.94 | Mortensen & Kleven (1993) | 141 |
| 60 | 70 | 0.91 | Mortensen & Kleven (1993) | 141 |
| 50 | 70 | 0.90 | Mortensen & Kleven (1993) | 141 |
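A quick way to check the 0.84 median quoted above (a minimal sketch in Python; it simply lists the correlations from the table):

```python
from statistics import median

# Correlations from the table above
correlations = [0.56, 0.78, 0.47, 0.89, 0.73, 0.94, 0.91, 0.90]
print(round(median(correlations), 2))  # 0.84 (midpoint of 0.78 and 0.89, i.e. 0.835)
```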

2) Is the 95% CI usually around 20 points at the average? Does it get narrower as IQ increases and then wider again once we get to genius levels?

Confidence intervals used in IQ testing assume a bivariate normal distribution and thus are the same width at all IQ levels, though the expected gap between one's measured IQ and whatever variable it's being used to estimate (i.e. "true" IQ) increases the further one's measured IQ is from the mean. The 95% confidence interval is always 1.96 multiplied by the standard error of the estimate.
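As a sketch of the arithmetic, and assuming the interval is built from the standard error of the estimate, SEE = SD × √(1 − r²) (a formula that reproduces the intervals used later in this post), the calculation looks like this; the function name is just illustrative:

```python
import math

def predicted_score_with_ci(measured_iq, r, mean=100.0, sd=15.0):
    """Predict a second score from a first via regression to the mean,
    with a 95% confidence interval of +/- 1.96 standard errors of the estimate."""
    predicted = mean + r * (measured_iq - mean)
    see = sd * math.sqrt(1 - r ** 2)   # standard error of the estimate
    margin = 1.96 * see                # half-width of the 95% interval
    return predicted, predicted - margin, predicted + margin

# Example: a measured IQ of 130 and a test-retest correlation of 0.84
print(predicted_score_with_ci(130, 0.84))  # approx (125.2, 109.2, 141.2)
```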

3) Are IQ tests for <12-year-olds less accurate? Do they get more accurate for 12-17-year-olds and even more so for adults (18+)?

Even in early childhood, the Wechsler IQ tests are incredibly reliable and load extremely highly on g (the general factor of all cognitive abilities). But IQ correlates much less with DNA at younger ages, so that might be telling us it's much less accurate in childhood after all.

4) On a more anecdotal level, Marilyn vos Savant is reputed to have scored a 228 at age 10 (albeit with shoddy extrapolations) and then in adulthood scored a 186 on the Mega Test. That is a 42-point difference; what is the probability that someone could have such a gap with the WISC and WAIS?

The probability would increase the further you get from the mean. So assuming a 0.84 correlation between childhood and adult IQ, someone who was 128 IQ points above the mean (IQ 100) at age 10 (i.e. IQ 228) would be expected to be 0.84(128) = 108 points above the mean in adulthood (IQ 208), and we could say with 95% certainty that their adult IQ would be anywhere from 192 to 224.
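Here is that calculation spelled out (a sketch, assuming the same SEE-based interval as above):

```python
import math

r, sd = 0.84, 15
predicted = round(100 + r * (228 - 100))            # 100 + 0.84*128 ≈ 208
margin = round(1.96 * sd * math.sqrt(1 - r ** 2))   # ≈ 16 points
print(predicted - margin, predicted, predicted + margin)  # 192 208 224
```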

Why did the prediction miss in Marilyn's case? For starters, the 1937 Stanford-Binet she took at age 10 had a mean of 101.8 and a standard deviation (SD) of 16.4, while the Mega Test has a mean of 100 and an SD of 16. If both her scores were converted to the Wechsler scale (which uses a mean of 100 and an SD of 15), she would have scored 215 in childhood and 181 in adulthood. Then consider that the Stanford-Binet's norms were 19 years old when she took it, and old norms inflate test scores by as much as 3 points per decade (in the short term), so her childhood score was really more like 209.
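The conversions can be sketched like this (the helper name is just for illustration; the 3-points-per-decade norm inflation is the figure cited above):

```python
def to_wechsler(score, test_mean, test_sd):
    """Convert a score to the Wechsler scale (mean 100, SD 15)."""
    return 100 + 15 * (score - test_mean) / test_sd

childhood = round(to_wechsler(228, 101.8, 16.4))  # 1937 Stanford-Binet norms -> 215
adult = round(to_wechsler(186, 100, 16))          # Mega Test norms -> 181

# The Stanford-Binet norms were ~19 years old; assume ~3 points of inflation per decade
childhood_adjusted = childhood - round(3 * 19 / 10)  # 215 - 6 = 209
print(childhood, adult, childhood_adjusted)          # 215 181 209
```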

Then consider that she took two different tests (the Stanford-Binet at age 10 and the Mega Test in adulthood). Even at the same age, different IQ tests typically correlate only about 0.8, so the 0.84 correlation between childhood and adult IQ might be more like 0.84(0.8) = 0.67 when different tests are used at each age.

The expected adult IQ of someone who scores 109 points above the mean at age 10 (IQ 209) is 0.67(109) ≈ 73 points above the mean, which equals IQ 173 (95% confidence interval of 151 to 195). So her childhood IQ actually underpredicted her adult IQ, which is surprising since her childhood score was based on dubious extrapolation of the mental age scale.
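Putting the attenuated correlation and the interval together (same sketch assumptions as before):

```python
import math

r = 0.84 * 0.8   # attenuated for using two different tests: ~0.67
sd = 15
predicted = round(100 + r * (209 - 100))            # 100 + 0.67*109 ≈ 173
margin = round(1.96 * sd * math.sqrt(1 - r ** 2))   # ≈ 22 points
print(predicted - margin, predicted, predicted + margin)  # 151 173 195
```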