So back in 2020, Pew Research asked both Jewish Americans and all Americans about their income. This is interesting because in the famous NLSY database used to write the book The Bell Curve, the Jewish sample scored 0.87 standard deviations (SD) higher (about 13 IQ points) than Americans as a whole on the AFQT. Given a correlation of about 0.39 between IQ and household income (in a given year; see table 3), we'd expect Jews to be 0.39 × 0.87 = +0.34 SD in income.
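As a rough sketch of that regression prediction (taking the 0.39 correlation and 0.87 SD IQ gap from the text, and assuming a simple linear relationship), the expected income advantage is just the product of the two:

```python
# Predicted income advantage from the IQ gap, assuming a simple
# regression-to-the-mean model: z_income = r * z_iq.
r_iq_income = 0.39   # correlation between IQ and household income (table 3)
z_iq_jewish = 0.87   # Jewish AFQT advantage in SD (NLSY / Bell Curve sample)

z_income_expected = r_iq_income * z_iq_jewish
print(f"Expected Jewish income advantage: {z_income_expected:+.2f} SD")  # +0.34 SD
```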

Source: https://www.researchgate.net/publication/229357056_Occupation_and_income_related_to_psychometric_g

The normalized distribution of Jewish household income

How close did my prediction come? It's hard to say, because household income is not normally distributed, so applying linear regression to it is problematic.

One theoretical solution is to force income to a bell curve. I did this by comparing the highest and lowest income groups in the poll and finding their respective percentiles with respect to both Jewish Americans and all Americans. Each percentile was then assigned its corresponding Z score on a theoretical normal curve, as sketched below.
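Here is a minimal sketch of that mechanics. The bracket shares below are hypothetical placeholders, not the actual Pew figures (which aren't reproduced here); the point is how two percentiles at the same income cutoffs can be converted to Z scores and solved for the Jewish mean and SD on the all-American normalized scale.

```python
from scipy.stats import norm

# Hypothetical share of households falling below the low cutoff and below
# the high cutoff, for each group (placeholders, NOT the Pew numbers).
pct_all    = {"low_cutoff": 0.20, "high_cutoff": 0.90}   # all Americans
pct_jewish = {"low_cutoff": 0.10, "high_cutoff": 0.70}   # Jewish Americans

# Z score of each cutoff on the all-American normalized income scale
z_all_low  = norm.ppf(pct_all["low_cutoff"])
z_all_high = norm.ppf(pct_all["high_cutoff"])

# Z score of the same cutoffs within the Jewish distribution
z_jew_low  = norm.ppf(pct_jewish["low_cutoff"])
z_jew_high = norm.ppf(pct_jewish["high_cutoff"])

# If the Jewish distribution is also normal with mean m and SD s on the
# all-American scale, then z_all = m + s * z_jew at both cutoffs.
# Two equations, two unknowns:
s = (z_all_high - z_all_low) / (z_jew_high - z_jew_low)
m = z_all_low - s * z_jew_low

print(f"Jewish mean on all-American scale: {m:+.2f} SD, SD: {s:.2f}")
```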

Thus, if U.S. household income is forced to fit a normalized Z score curve, which by definition has a mean and SD of 0 and 1 respectively, the Jewish mean and SD come out to 0.85 and 1.2 respectively, about half an SD higher than the +0.34 expected from the IQ-income correlation.

Of course, this is all very rough: when two different distributions are FORCED to fit bell curves, one can't assume the two bell curves will fit each other. But this is a very subtle point that few of you have the IQ to worry about.
