Deviation IQ is defined as one’s Z score (relative to the American or white population) on an intelligence test, multiplied by (usually) 15 and added to 100. Z score = (raw score – average raw score)/(standard deviation of raw scores).
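To make the formula concrete, here is a minimal sketch in Python. The norming sample and the test-taker’s raw score are both hypothetical, invented purely for illustration:

```python
# Deviation IQ from a raw score: Z = (raw - mean) / SD, then IQ = 100 + 15*Z.
from statistics import mean, stdev

raw_scores = [12, 15, 18, 20, 22, 25, 28, 30, 33, 37]  # hypothetical norming sample
raw = 28  # hypothetical test-taker's raw score

z = (raw - mean(raw_scores)) / stdev(raw_scores)  # Z score from the formula above
iq = 100 + 15 * z  # deviation IQ with SD set to 15

print(round(iq, 1))  # → 107.4
```

With this sample the mean is 24 and the standard deviation about 8.06, so a raw score of 28 is roughly half a standard deviation above average, landing near IQ 107.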

Occasionally Z scores will be normalized: instead of being calculated from the above formula, they’re assigned based on percentiles. So if the top 1% of the population scored X, and the normal curve says the top 1% begins at a Z score of +2.33, X will be assigned a Z score of +2.33 regardless of how many actual standard deviations it is above average.

However, not all tests need to be normalized; some should in theory form a very normal curve on their own. Does anyone know which two of the following tests need to be normalized?

TEST A

A general knowledge test where the author thinks up general knowledge questions ranging in difficulty from very easy to very hard.

TEST B

A general knowledge test where you have to name the faces of the 100 most important people in history as chosen by Michael Hart’s famous book *The 100*.

TEST C

A spatial test where the author buys a bunch of jigsaw puzzles from the toy store, and your raw score is the number of puzzles you can put together in under 1 minute each.

TEST D

A spatial test where the author buys one jigsaw puzzle from the toy store, and your score is simply how many seconds it takes you to solve it (the lower the better).

TEST E

A spatial test where the author buys one jigsaw puzzle from the toy store, and your score is the number of pieces you can fit together in one minute.

TEST F

A memory test where your score is the greatest number of syllables in a sentence you can repeat after one hearing.