Many years ago there was a study called the Milwaukee Project, in which poor children born to very low-IQ mothers received the most intensive intellectual stimulation imaginable from birth to age six. The study found that the intervention raised the IQs of the children in the treatment group by dozens of points relative to the control group. The strange thing, however, was that these IQ-enhanced kids did not behave as you would expect given their high test scores. In fact, they performed just as badly at learning math as their low-IQ peers in the control group. It seems six years of the most intense intervention imaginable raised only their test scores, not their actual intelligence.

This suggests something deeply flawed about IQ tests: it is possible to raise the measurement without raising the thing being measured. Where else in science do we see this happen? Perhaps in election polling, but I can't think of many other places.

One reason this may happen is that IQ tests, in order to be relevant to the widest possible population, must express questions and problems in very generic terms. There is only a finite number of such generic problems, so anyone who has had a good intervention or education is likely to have been trained on many of them. In real life, by contrast, problems are not generic but context-dependent, and the number of specific contexts is effectively infinite. For this reason intelligence perhaps cannot be taught, even though IQ often can (depending on the test).

In the same way, an AI chatbot like ChatGPT, which has been trained on the entire internet, can score quite high on a verbal IQ test and even write original poems, stories, and news articles. But if, on the basis of its performance on these generic tasks, you hired it to do something highly contextual, like write season 3 of HBO's The White Lotus, you would quickly discover that it dramatically underperforms humans with the same verbal IQ.

Or, to put it in Jensen-speak, its score is hollow with respect to g. The lights are on, but nobody's home.