In the past, America was an overwhelmingly white country, so the IQ distribution of American whites was virtually the same as the IQ distribution of all Americans (mean = 100, standard deviation = 15).  But as demographics continue to change, on IQ tests where the mean and standard deviation of all Americans are set at 100 and 15 respectively, pre-generation X white Americans average about 102 (SD = 14.5) and post-generation X white Americans average about 103 (SD = 14.5).  The reason post-generation X whites score higher than pre-generation X whites is not the Flynn effect (IQ tests are normed so that the mean of every birth cohort is 100), but that as the country becomes less white, the position of the average white moves further right on the bell curve.
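To make that mechanism concrete, here's a minimal sketch in Python of how a subgroup whose raw scores never change still drifts rightward on a scale re-normed to the total population as that subgroup's share shrinks.  The group means, SDs, and population shares below are made-up illustrative numbers, not real survey data:

```python
import math

def subgroup_means_on_total_norm(groups, weights):
    """groups: list of (mean, sd) in raw-score units; weights: population shares.
    Returns each group's mean re-expressed on a scale where the total
    population has mean 100 and SD 15."""
    total_mean = sum(w * m for (m, s), w in zip(groups, weights))
    # Variance of a mixture = weighted within-group variance + between-group variance
    total_var = sum(w * (s**2 + (m - total_mean)**2) for (m, s), w in zip(groups, weights))
    total_sd = math.sqrt(total_var)
    return [100 + 15 * (m - total_mean) / total_sd for (m, s), w in zip(groups, weights)]

# Group A fixed at raw mean 100, SD 15; group B at raw mean 90, SD 15 (hypothetical numbers)
for share_a in (0.95, 0.80, 0.60):
    iqs = subgroup_means_on_total_norm([(100, 15), (90, 15)], [share_a, 1 - share_a])
    print(f"Group A share {share_a:.0%}: group A mean on total norm = {iqs[0]:.1f}")
```

With these toy numbers, the fixed subgroup's mean reads as roughly 100.5, 101.9, and 103.8 as its share of the population falls from 95% to 80% to 60%, even though nothing about the subgroup itself has changed.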

One way to avoid this source of confusion is for all IQ tests to be normed with reference to American or British whites, as opposed to Americans in general.  That way, regardless of whether America is 100% white or 1% white, an IQ of 100 will always reflect the intelligence of the average white American of your generation.  And even though white Americans perform 2 standard deviations better on IQ tests today than they did 100 years ago, assuming the white gene-pool has remained static (some say it's declined), and assuming IQ is highly genetic (some say it's not), an IQ of 100 will always reflect roughly the same genetic level of intelligence (at least in theory).

So in order to convert IQs from a scale where the American mean and SD are 100 and 15 respectively onto a scale where the white American mean and SD are 100 and 15 respectively, apply these formulas:

Pre-generation X:

White American IQ = [(American IQ – 102)/14.5] × 15 + 100

Post-generation X:

White American IQ = [(American IQ – 103)/14.5] × 15 + 100
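For anyone who prefers code to formulas, here's the same conversion as a small Python function.  The means (102 or 103) and SD (14.5) are simply the figures quoted above:

```python
def to_white_american_norm(american_iq, pre_generation_x=True):
    """Convert an IQ normed on all Americans (mean 100, SD 15) to a scale
    where white Americans of the relevant generation have mean 100, SD 15."""
    white_mean = 102 if pre_generation_x else 103
    white_sd = 14.5
    return (american_iq - white_mean) / white_sd * 15 + 100

print(to_white_american_norm(130, pre_generation_x=True))   # ~128.97
print(to_white_american_norm(130, pre_generation_x=False))  # ~127.93
```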

For whatever reason, this has come to be known in peer-reviewed scholarly journals as the "Greenwich IQ", though usually it's calculated simply by subtracting 2 points from American IQ scores (which is kind of sloppy because it ignores the fact that whites not only have a higher mean than Americans as a whole, but a narrower standard deviation).
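For example, take an American IQ of 145 on the pre-generation X scale: the full formula gives (145 – 102)/14.5 × 15 + 100 ≈ 144.5, while simply subtracting 2 points gives 143.  At an American IQ of 70, the full formula gives about 66.9 while the shortcut gives 68.  The two methods only agree near the middle of the distribution; the further out you go, the more the narrower white standard deviation matters.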

I believe scholar Richard Lynn invented the concept, at least in the modern era.  In his 2006 book Race Differences in Intelligence, he writes:

The metric employed for the measurement of the intelligence of the races has been to adopt an IQ of 100 (with a standard deviation of 15) for Europeans in Britain, the United States, Australia, and New Zealand as the standard in terms of which the IQs of other races can be calculated. The mean IQs of Europeans in these four countries are virtually identical, as shown in Chapter 3 (Table 3.1), so tests constructed and standardized on Europeans in these countries provide equivalent instruments for racial comparisons.

In Britain, Australia, and New Zealand, the intelligence tests have been standardized on Europeans, and this was also the case in the United States in the first half of the twentieth century. In the second half of the twentieth century American tests were normally standardized on the total population that included significant numbers of blacks and Hispanics. In these standardization samples the mean IQ of the total population is set at 100; the mean IQ of Europeans is approximately 102, while that of blacks is 87 and of Hispanics about 92 (see, e.g., Jensen and Reynolds, 1982). This means that when the IQs of other races are assessed with an American test standardized with an IQ of 100 for the total American population, 2 IQ points have to be deducted to obtain an IQ in relation to 100 for American Europeans. This problem does not arise with the only British test used in cross-cultural studies of intelligence. This is the Progressive Matrices, which has been standardized on British Europeans.