When Alfred Binet invented the first intelligence test circa 1904, he needed a unit of measurement. After all, a stadiometer has inches; a spring scale has pounds; but what could be the unit of measurement for a trait as abstract as intelligence?
He decided on mental age. That is, if you performed as well on his test as an average white six-year-old, you had a mental age of six, regardless of your chronological age. If you performed as well as an average white ten-year-old, you had a mental age of ten, and so on.
Sometime later, the psychologist William Stern suggested taking the ratio of mental age to chronological age and multiplying by 100 to remove the decimal point, and the concept of IQ was born.
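Stern's ratio reduces to a one-line calculation. A minimal sketch in Python (the example child below is hypothetical, chosen only to show the arithmetic):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's ratio IQ: (mental age / chronological age) * 100."""
    return mental_age / chronological_age * 100

# A hypothetical child who performs like an average ten-year-old
# but is chronologically eight years old:
print(ratio_iq(10, 8))  # 125.0

# Performing exactly at one's own age level yields the average of 100:
print(ratio_iq(6, 6))  # 100.0
```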
But from the beginning, there were problems with IQ. For one thing, intelligence increases as a function of age only until about age 16 or so, and as we get much older our intelligence declines.
Secondly, even in childhood, the increase in intelligence as a function of age is not always linear.
Third, the scale lacked a true zero point. Someone who scored at the level of a newborn baby would have a mental age of zero, but a baby does not have zero intelligence. Its brain has been developing in the womb for nine months prior to birth.
Today we no longer calculate IQs the way William Stern suggested, but we still make sure the average is around 100 and the standard deviation is around 15 or 16, since those were the statistics the old age-ratio scales yielded. But I long for the IQ scales of old, because despite their many flaws, at least they represented something tangible.
To once again anchor IQ tests in something tangible, I propose scores be converted not into mental age, as Binet proposed, but mental brain size. If you perform as well as the average white young adult male with a brain volume of 1400 cc, you have a mental brain volume of 1400, regardless of your actual brain volume. If you perform as well as the average white young adult male with a 1500 cc brain volume, you have a mental brain volume of 1500, and so on.
This would anchor IQ scores in concrete physical reality and would be a true ratio scale with a true zero point.
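One way the conversion could work in practice is linear interpolation between calibration points: reference groups whose average test score and average brain volume are both known. The sketch below is purely illustrative; the anchor values are hypothetical, not real norms, and the function name is my own:

```python
def mental_brain_volume(score, anchors):
    """Map a test score to a 'mental brain volume' (cc) by linear
    interpolation between calibration anchors.

    anchors: list of (avg_test_score, avg_brain_volume_cc) pairs;
    at least two are required. Scores outside the anchor range are
    extrapolated linearly.
    """
    anchors = sorted(anchors)
    for (s0, v0), (s1, v1) in zip(anchors, anchors[1:]):
        if score <= s1 or (s1, v1) == anchors[-1]:
            return v0 + (score - s0) * (v1 - v0) / (s1 - s0)

# Hypothetical calibration: a group averaging a score of 95 has an
# average brain volume of 1400 cc; a group averaging 105 has 1500 cc.
anchors = [(95, 1400), (105, 1500)]
a = mental_brain_volume(110, anchors)  # 1550.0 (extrapolated)
b = mental_brain_volume(100, anchors)  # 1450.0
# A ratio-scale comparison the rank-only IQ scale cannot support:
print(round(a / b - 1, 3))  # 0.069, i.e. about 6.9% "more"
```

The payoff of the zero point is the last line: with a true ratio scale, saying one score is 6.9% higher than another becomes meaningful.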
Update, January 8, 2017:
Allow me to clarify the above post with an analogy:
Suppose we had a scale to measure weight, but it could not give us a meaningful number; it could only tell us that person A weighed more than person B. We didn't know if person A was 500% heavier than B or just 1% heavier than B; all we knew was that he was heavier than B. All the scale could do was rank people.
Well, one way we could make the weight scale more useful is to notice that taller people weigh more than shorter people on average (with lots of exceptions). We could then assign each person who stepped on the weight scale a height, and say person A has the weight of an average 70 cm man, while person B has the weight of an average 60 cm man. Assuming the correlation between height and weight is linear, this would allow us to say person A is 17% heavier than person B, something we could not say when we only had a rank.
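The 17% figure follows directly from the height-anchored units, under the stated assumption that the height-weight relation is linear:

```python
# Height-anchored weights from the analogy above: person A weighs
# what an average 70 cm man weighs, person B what an average 60 cm
# man weighs, so on this anchored scale their ratio is 70:60.
weight_a = 70
weight_b = 60
percent_heavier = (weight_a / weight_b - 1) * 100
print(round(percent_heavier))  # 17, the figure quoted in the text
```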
Of course this would not imply that height is the only cause of weight, or even causal at all; nor would it imply that height and weight are strongly correlated. What it would do is provide a concrete, easy-to-understand true ratio scale with a real zero point to which weight could be anchored.
Currently IQ tests can tell us who scored higher than whom, but they can't tell us by what percentage person A exceeds person B on the actual ability in question. So the purpose of this post was merely to suggest that they too need to be anchored in a concrete scale with a true zero point, and brain size is one such anchor out of many that might be used.