Imagine if we took 100 random American babies and randomly sent half of them to be raised in the worst home in America, and the other half to be raised in the best home in America. What would be the average difference in the IQs of the two groups by adulthood? There are about 123 million households in America, so assuming a normal distribution, the worst home in America would be roughly 5.67 standard deviations (SD) below the mean in shared environment, and the best home would be roughly 5.67 SD above the mean.
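For anyone who wants to check that figure, here's a minimal Python sketch. Treating the single most extreme of N homes as sitting at the 1/N tail of a normal curve is an assumption on my part (one could also argue for 1/2N or an expected-maximum calculation, which would shift the answer slightly):

```python
from scipy.stats import norm

N = 123_000_000  # approximate number of US households, per the text

# z-score of the 1-in-123-million tail; isf (inverse survival function)
# is more numerically stable than ppf(1 - 1/N) this far out in the tail
z_extreme = norm.isf(1.0 / N)
print(round(z_extreme, 2))  # ~5.65, close to the 5.67 used here
```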

The worst home in America would probably be headed by some extreme psychopath who would lock the kids in a dark basement their whole childhood with no access to schooling, books, television, or basic hygiene. The best home in America would be headed by a benevolent billionaire with a PhD in child psychology who would be determined to raise a bunch of geniuses and thus surround each of the 50 kids with a dozen tutors who would tutor them in everything, every waking hour of the day.

According to scholar Arthur Jensen, the adult IQ correlation between unrelated people raised in the same home is -0.01. This is an incredibly low figure. Wikipedia claims the adult IQ correlation between unrelated people raised together is +0.04, which sounds a lot more believable. Averaging Wikipedia's figure with Jensen's gives 0.015, and since the correlation between unrelated people raised together directly estimates the share of variance explained by shared environment, this means shared environment explains 1.5% of the variation in adult IQ. Taking the square root of 0.015 tells us that shared environment correlates 0.12 with adult IQ.
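In code, that arithmetic looks like this (a trivial sketch; the variable names are mine):

```python
import math

jensen = -0.01    # Jensen: adult IQ correlation of unrelated people raised together
wikipedia = 0.04  # Wikipedia's figure for the same correlation

c_squared = (jensen + wikipedia) / 2  # 0.015: share of IQ variance from shared environment
r_env = math.sqrt(c_squared)          # ~0.12: correlation of shared environment with IQ
print(c_squared, round(r_env, 2))     # 0.015 0.12
```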

Assuming the relationship between IQ and shared environment is linear throughout the full range, we should expect the kids adopted into the worst home in America (-5.67 SD) to average an IQ of 0.12(-5.67) = -0.68 SD, and the kids adopted into the best home in America to average an IQ of 0.12(+5.67) = +0.68 SD.

In other words, we should expect kids adopted into the worst and best home in America to average IQs of 90 and 110 respectively: a difference of 20 IQ points. And even this difference would be largely spurious: because of both subtle and obvious cultural biases, IQ tests would overestimate the ability of the kids from the best home and underestimate the ability of the kids from the worst home, so the difference in their real intelligence would be much less than 20 IQ points.
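Putting the whole chain together (using the usual IQ mean of 100 and SD of 15):

```python
r_env = 0.12   # correlation of shared environment with adult IQ, from above
z_home = 5.67  # how extreme the worst/best home is, in SD of shared environment
MEAN, SD = 100, 15

worst = MEAN + r_env * -z_home * SD  # ~89.8, i.e. about IQ 90
best  = MEAN + r_env * +z_home * SD  # ~110.2, i.e. about IQ 110
print(round(worst), round(best), round(best - worst))  # 90 110 20
```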

Of course, all of this assumes that the adoption studies cited by Jensen and Wikipedia are meaningful and can be generalized to American society as a whole. Commentator “Mugabe” seems to think that less than one fifth of the variance in (American) homes exists in adoptive homes. This is known as the range restriction problem.

Correcting for range restriction

It is well known that in samples with an inadequate range, correlations tend to be spuriously low and unrepresentative of the correlation in the general population. For example, in the general population, the correlation between height and basketball skill is quite strong, as evidenced by the fact that Michael Jordan is a heck of a lot taller than the average American. But compared to other NBA players, Jordan is not tall at all, suggesting little correlation between height and basketball skill among NBA players. This happens because virtually all NBA players are extremely tall: when everyone is of similar height, height makes little difference to basketball skill.

We see the same thing with IQ. Among elementary school kids, the correlation between IQ and grades is 0.65, but among law students, the correlation between LSAT scores and grades is 0.4 at most. This is because elementary school kids have a much wider range of IQ for grades to correlate with than law students (almost all of whom have IQs above 105).
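This attenuation is easy to demonstrate with a quick simulation (my own sketch, not from any of the studies above): generate two variables correlated at 0.65, then recompute the correlation after keeping only the top of the distribution on one of them:

```python
import numpy as np

rng = np.random.default_rng(0)
r_true, n = 0.65, 100_000

# two standard normal variables correlated at r_true
x = rng.standard_normal(n)
y = r_true * x + np.sqrt(1 - r_true**2) * rng.standard_normal(n)

print(round(np.corrcoef(x, y)[0, 1], 2))  # ~0.65 in the full range

# restrict the range: keep only the top third or so of x
# (analogous to law students almost all having IQs above 105)
keep = x > 0.33
print(round(np.corrcoef(x[keep], y[keep])[0, 1], 2))  # ~0.4, noticeably lower
```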

Similarly, there could be a range restriction issue in adoption studies. Since adoptive parents tend to be high-quality people who are altruistic enough to raise unrelated kids and wealthy enough to afford them, bad homes are underrepresented in adoption studies, and there might simply not be enough range of environments for IQ to correlate with; hence adoption studies might suggest spuriously low correlations between IQ and shared environment. Fortunately, there are formulas to correct for range restriction (see formula 1 in this paper).

If we assume that the environments of adoptive homes have only one fifth the variance of the general population (which is probably a huge exaggeration of the difference), then the environmental standard deviation would be only 45% as large. Armed with this figure, I applied formula 1, and the correlation between IQ and shared environment more than doubled, from 0.12 to 0.26. This means the expected IQ difference between kids randomly adopted into the worst and best home in America would more than double. Instead of the difference being 20 points (IQ 90 vs IQ 110), the difference is now 44 points (IQ 78 vs IQ 122).
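I can't reproduce the linked paper's notation here, but the standard univariate correction for direct range restriction (Thorndike's case 2) gives the same numbers, so here's a sketch under the assumption that formula 1 is that correction:

```python
import math

r_restricted = 0.12  # shared environment x IQ correlation in adoption studies
u = math.sqrt(5)     # ratio of general-population SD to adoptive-home SD:
                     # 1/5 the variance -> ~45% the SD -> ratio of ~2.24

# Thorndike case 2 correction for direct range restriction
r_corrected = (r_restricted * u) / math.sqrt(1 + r_restricted**2 * (u**2 - 1))
print(round(r_corrected, 2))  # ~0.26

z_home, MEAN, SD = 5.67, 100, 15
shift = r_corrected * z_home * SD  # ~22 IQ points each way
print(round(MEAN - shift), round(MEAN + shift))  # 78 122
```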

So assuming the HBD deniers are right about adoption studies suffering greatly from range restriction (which I doubt), then shared environment can produce huge differences in IQ, but only when the differences in shared environment are unbelievably extreme. Simply going from a typical black home to a typical Ashkenazi Jewish home is unlikely to raise an adopted child's IQ by more than 5 points in adulthood, and even then, the difference would be largely spurious. In other words, the social environment, when extremely good, can greatly improve culturally specific skills and attitudes that help one do well on an IQ test, but it probably doesn't make one truly smarter. There's a subtle difference between a good test taker and a genuinely intelligent person.
