For years it was taken for granted that the correlation between IQ and brain size is 0.4. That was the figure cited by IQ expert Arthur Jensen in his book The g Factor, and the figure cited in a review by leading experts on the brain-size IQ correlation: J. Philippe Rushton and C. Davison Ankney.
A correlation between 0.33 and 0.5
A 2005 meta-analysis by Michael A. McDaniel found a correlation in the mid 0.30s. Close enough to Jensen and Rushton’s 0.4, I thought.
However, in 2000 a paper by scholars John C. Wickett, Philip A. Vernon, and Donald H. Lee found that correcting such correlations for range restriction (correlations come out too low when the study sample is too homogeneous) and for attenuation raised them from 0.35 to a potent 0.5! The correction for attenuation was irrelevant to me, because I am interested in the correlation between brain size and IQ as it is actually measured, not the correlation between brain size and some theoretically perfectly reliable IQ test that doesn’t exist. So ignoring the latter correction, it seemed that correcting for range restriction alone raised the correlation from 0.35 to at least 0.45 (an increase of 0.1!). If anything, it seemed, Jensen and Rushton erred on the low side.
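The specific range-restriction formula Wickett et al. used isn’t given here, but a standard one is Thorndike’s Case II correction. A minimal sketch (the SD ratio below is an assumed illustrative number, not taken from the paper) shows how such a correction can lift a 0.35 correlation to roughly 0.45:

```python
import math

def correct_range_restriction(r_obs, sd_ratio):
    """Thorndike Case II correction for direct range restriction.

    r_obs    -- correlation observed in the restricted sample
    sd_ratio -- population SD / restricted-sample SD (> 1 when the sample
                is more homogeneous than the population)
    """
    u = sd_ratio
    return r_obs * u / math.sqrt(1 + r_obs ** 2 * (u ** 2 - 1))

# Illustrative only: if a study sample's IQ spread were 75% of the
# population's (sd_ratio = 1/0.75), a 0.35 correlation corrects to ~0.45.
print(round(correct_range_restriction(0.35, 1 / 0.75), 2))  # 0.45
```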
A correlation of only 0.24???
However, in 2015 a new meta-analysis by scholars Jakob Pietschnig, Lars Penke, Jelte M. Wicherts, Michael Zeiler, and Martin Voracek surfaced, claiming the brain-size IQ correlation was only 0.24! The paper argued that the typically cited 0.4ish figure was inflated by publication bias, and the authors went out of their way to counter it. They started by creating a list of criteria studies needed to meet for inclusion in their meta-analysis, and then recorded the brain-size IQ correlation in each study. The authors write:
In cases where these criteria were met, but correlation coefficients were not reported, corresponding authors were personally contacted by email and asked to provide the relevant results.
Asking scientists to report unpublished correlations is a good way to counter publication bias, and I applaud the authors for doing so, but then they wrote something that troubled me:
In a number of studies, correlation coefficients of non-significant associations of IQ and brain volume were not reported. Whenever this was the case, corresponding authors of the respective articles were contacted and correlation coefficients were obtained through personal communications. Otherwise, following a conservative approach as described by Pigott (2009, pp. 408-409), non-significant effect sizes were set to zero.
So it sounds like, if a study says “we found an insignificant correlation between IQ and brain size” but the correlation is unreported, they contacted the author to find out what that correlation was. But what if the study says “we found a significant correlation between IQ and brain size” and the correlation is not reported? Did they still contact the scientist? Actively seeking out unpublished correlations that are likely to be low (insignificant ones) while not doing the same for unpublished correlations that are likely to be high (significant ones) could bias the meta-analysis downward. However, elsewhere in the paper they imply that all unreported correlations were solicited, so perhaps there was no such bias. Still, their practice of assigning all unobtainable insignificant correlations a value of zero seems like it would indeed bias the meta-analysis downward.
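A toy example (all numbers invented) makes the downward pull of zero-imputation concrete: whenever the true but unobtainable non-significant correlations are positive, replacing them with zero drags the pooled mean below its true value.

```python
from statistics import mean

# Hypothetical meta-analysis: three published (significant) correlations
# and three non-significant correlations the authors could not obtain.
reported = [0.40, 0.35, 0.30]
unreported_true = [0.15, 0.10, 0.20]   # small but positive, not zero

# Pooled mean if the true unreported values were known...
with_true_values = mean(reported + unreported_true)

# ...versus the Pigott-style "conservative" approach of setting them to 0.
with_zeros = mean(reported + [0.0] * len(unreported_true))

print(round(with_true_values, 3))  # 0.25
print(round(with_zeros, 3))        # 0.175
```

The zero-imputed mean understates the true pooled correlation whenever the missing effects are positive, which is exactly what we would expect for small-sample studies of a real but modest correlation.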
The other problem with this meta-analysis is that it included many studies that suffered from range restriction. As the above-cited 2000 paper by Wickett, Vernon, and Lee found, correcting brain-size IQ studies for range restriction increases the correlation by about 0.1. So it’s likely that, had all the studies in this 2015 meta-analysis been corrected for range restriction, the correlation would have risen from 0.24 to 0.34.
Conclusion: The true correlation is about 0.35
I have become suspicious of meta-analyses because they seem to consistently undermine established correlations between IQ and a wide range of variables, in favor of smaller, more politically correct correlations. For example, Jensen claimed the correlation between IQ and income was 0.4, but a meta-analysis claims it is only 0.25. Scholar Richard Lynn claimed that black Africans score 33 IQ points lower than British whites, but a meta-analysis claimed they score only 20 points lower. Considering that meta-analyses have tried to undermine IQ’s correlation with such Darwinian variables as income and race, it’s not surprising that they would also undermine its correlation with brain size (the most Darwinian correlate of them all).
On the other hand, it could be that the meta-analyses are accurate and that HBDers have inflated these correlations by selective reporting. However, a problem with meta-analyses is that crappy studies get lumped in with good ones, and the more error that gets included, the less likely you are to find a strong correlation between any two variables.
And so it is my opinion that the best way to get the truth is to look at the very best single studies ever done. Ironically, because IQ and especially brain size are hard to measure, the single best studies do not measure both variables directly.
For example, scholars Jensen and Sinha (1993) reanalyzed autopsy data reported by Passingham (1979) on 734 men and 305 women and found, independent of body size, an overall correlation between brain mass and achieved occupational level of roughly 0.25. The typical study correlating brain size with IQ has only a couple dozen data points, so using occupation as a proxy for IQ allows me to cite a single data set with over 1,000 people! Given a 0.7 correlation between IQ and occupation (Jensen 1998), and assuming the brain-size vs occupation correlation is entirely caused by the brain-size vs IQ correlation, a 0.25 correlation between occupation and brain size implies a 0.36 correlation between IQ and brain size (0.25/0.7).
In addition, a study by Susanne (1979) found a 0.19 correlation between head perimeter and Matrices IQ in 2,071 Belgian male conscripts. Given that the Matrices test is only about 89% as g-loaded as the Wechsler scales, and assuming the head-size vs IQ correlation is mediated by g, it’s reasonable to divide 0.19 by 0.89, estimating that the correlation would have been 0.21 had the conscripts been given a gold-standard IQ test like the WAIS. Further, given that the head-size vs IQ correlation is likely almost entirely caused by the brain-size vs IQ correlation, the fact that head circumference correlates about 0.60 with MRI brain volume (Rushton & Ankney 2007) means that if the conscripts had had their brain sizes measured directly, instead of by head perimeter, the correlation would have jumped further to 0.35 (0.21/0.6).
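The two proxy-based estimates above reduce to a few lines of arithmetic. This sketch uses only the figures already cited in the text, with stepwise rounding mirroring the text’s own calculation:

```python
# Estimate 1 -- Jensen & Sinha (1993) reanalysis of Passingham (1979):
# brain mass vs occupation r = 0.25, IQ vs occupation r = 0.70 (Jensen 1998).
est_1 = round(0.25 / 0.70, 2)

# Estimate 2 -- Susanne (1979): head perimeter vs Matrices IQ r = 0.19.
# Correct for Matrices being ~89% as g-loaded as Wechsler scales...
matrices_corrected = round(0.19 / 0.89, 2)
# ...then for head circumference correlating ~0.60 with MRI brain volume
# (Rushton & Ankney 2007).
est_2 = round(matrices_corrected / 0.60, 2)

print(est_1)  # 0.36
print(est_2)  # 0.35
```

Both routes, from independent data sets, land within a point of each other, which is the convergence the next paragraph relies on.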
So two massive data sets on adults both agree that the correlation between brain size and IQ is about 0.35. Further, I have shown that even the anomalously low 2015 meta-analysis would likely have yielded a correlation of 0.35 had range restriction been corrected for. Thus, 0.35 is very likely the true correlation between IQ and brain size among (white) adults in Western countries when either sex or body size is controlled. Jensen and Rushton’s figure of 0.4 was likely not nearly the overestimate we have been led to believe.