Ethnic differences in IQ

Here’s some data from the WAIS-IV IQ tests on U.S. ethnic differences in IQ:

[Table: WAIS-IV index scores and full-scale IQs by U.S. ethnic group]

Contrary to popular stereotypes, Asian Americans are not lopsided intellects who outscore whites only in mathematical or spatial ability; they actually outscore them on every index, including verbal.  Of course, Asian Americans are not a representative sample of all Asians.

And contrary to the stereotype that whites are the most cognitively variable ethnic group, their full-scale IQs in this data-set actually show the smallest standard deviation of any U.S. ethnic group, though this could be because Asian Americans are not a single race.

The scores above use U.S. norms, meaning 100 and 15 are scaled to be the mean and standard deviation for all Americans.  If we instead use white norms (i.e. set the white mean and SD at 100 and 15 respectively), we find that African Americans average a full-scale IQ of 83 (SD = 15.4), Hispanics 87 (SD = 15.5), and Asian Americans 103 (SD = 16.6).
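Switching between norms is just a linear rescaling.  Here’s a minimal sketch in Python; the input score and the white-group mean and SD below are hypothetical, since the actual U.S.-norm values come from the table above:

```python
def rescale(score, ref_mean, ref_sd, new_mean=100.0, new_sd=15.0):
    """Re-express a score so the reference group has mean new_mean and SD new_sd."""
    return new_mean + new_sd * (score - ref_mean) / ref_sd

# Hypothetical example: if whites averaged 103 (SD 14) on U.S. norms,
# a U.S.-norm score of 85 works out to about 80.7 on white norms.
print(round(rescale(85, ref_mean=103, ref_sd=14), 1))
```

The same function with `ref_mean=100, ref_sd=15` just returns the score unchanged, which is a handy sanity check.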


Gossip Girl (2007 to 2012)

Pumpkin Person rating: 7.5/10

So after the absurd rumor that this show was about me, I decided to watch it.

The only thing this show has in common with me is that it’s narrated by a popular sassy blogger, but while I blog about evolutionary psychology and horror, Gossip Girl blogs about teenagers attending an exclusive prep school on Manhattan’s Upper East Side.

This is the perfect show for Lion of the Blogosphere to blog about because it’s all about New York’s super upper class and the importance they place on getting into an Ivy League school, and not just any Ivy League school but the “holy trinity”:  Harvard, Princeton, and Yale.  Lion could simply stream the whole series on Netflix.

The show’s female villain, the manipulative, scheming, spoiled Blair Waldorf, is obsessed with getting into Yale and scores a 2200 out of 2400 on her SATs, which by my formula equates to a Richard Nixon IQ of 145 (U.S. norms) or 143 (white norms).

The show’s male villain, Chuck Bass (the son of a corrupt billionaire), simply hires a bookish boy to pretend to be him and take the SAT with a fake ID in Chuck’s name (even though, being black, the bookish boy looks nothing like Chuck).

Chuck does the same for his stepsister, the show’s star Serena van der Woodsen, played by the extremely charismatic Blake Lively.

[Photo: Blake Lively as Serena van der Woodsen]

Serena is too low in psychopathy to use the SAT score Chuck bought for her, so she decides to take the big bad test herself at a later date.  We’re never told how she scores, but Yale does send her a handwritten letter asking her to visit the campus, though this has more to do with her celebrity status as a New York socialite than with her academic record.

We never see Gossip Girl, but her mischievous voice narrates every episode with one-liners such as “Lordy, Lordy, look who’s 40” when one of the teenage boys is dating a much older woman.

The show seems to be aimed primarily at teenagers, especially teenage girls, but since teenage girls sometimes watch TV with their moms, there are some subplots about the romances of the parents of the show’s teenagers.

“And who am I?” teases Gossip Girl at the start of every show.  “That’s one secret I’ll never tell.”  Maybe the secret was revealed in the show’s final season, but please place a spoiler alert if you’ve watched that far.

Each show ends with Gossip Girl saying “you know you love me, XOXO.”

The autism paradox

Here’s a must-skim paper by Bernard Crespi about a paradox I myself noticed a long time ago.  A lot of autistic people have all the signs of high IQ (intellectual interests, high-IQ genetic variants, incredible talents, high social class, big heads, hyperlexia, photographic memory, calculating ability), yet are below average in IQ.

In an attempt to resolve the paradox, Crespi argues that autism is something like unbalanced high intelligence.  But unbalanced high IQ usually leads to a lot of subtest scatter around an overall high composite score; it doesn’t lead to the below-average or even mentally retarded IQs often found in autism.

Of course, as commenter Swank noted, autistics are often good at what they practice.  So if you have the genetic potential for an IQ of 170 but spend all your time thinking about toothpicks, you might end up with a toothpick IQ of 250 and an IQ of 60 in everything else, and since toothpicks aren’t on the IQ test, your overall IQ will appear to be impaired, even though it would actually be in the genius range if a high-ceiling toothpick subtest could be added.

So perhaps autism isn’t so much a cognitive disability as an interest disability.  Autistics are born not being interested in people or their culture, and thus don’t develop the parts of the brain that are valued by society.  This assumes, of course, the neuro-plasticity model in which cognitive abilities can be exercised like muscles, which is true up to a point (though IQ tests try to include abilities no one’s had a chance to exercise because they’re so novel).

But why should autism be more common in high-IQ or at least upper-class families, as some (though not all) studies suggest?  Perhaps because in our high-tech society, it helps to have enough non-social interests and abilities to acquire a lucrative high-tech job, where you meet others with equally non-social interests.  But when two non-socials mate, they risk having a baby that is pathologically non-social, to the point where it doesn’t even care to learn basic social skills and language, and thus is considered autistic.

And of course, if he doesn’t learn the language, an autistic person will fail even the IQ test items he’s brilliant at, because he won’t understand the instructions.

Sex differences in IQ part II

I found a nice paper by Richard Lynn and Satoshi Kanazawa discussing sex differences in IQ.  The point of the paper is that females mature earlier than males so females are smarter in childhood, but after puberty males are smarter.  This fits Lion of the Blogosphere’s theory that puberty stunts certain parts of intelligence.  As to why females mature earlier, the authors speculate that perhaps females had to compete for mates during evolution, but males not so much.

Here are the scores of the males and females in a large UK sample:

[Table: male and female scores in the large UK sample]

Here are the tests they took with the respective g loadings:

[Table: tests taken at each age with their respective g loadings]

Interesting that reading and math skills should each have such high g loadings.  Maybe the SAT’s more g loaded than I think.

I’m sure the sex differences (after puberty) were much smaller than Lynn would have liked, since he virtually pioneered the men-are-smarter-than-women theory, overturning a near-century consensus that the sexes were equally intelligent.

Lynn might argue that the lack of spatial tests at age 16 underestimates the male advantage.  I would counter that the lack of social cognition tests overestimates the male advantage.  Perhaps just testing reading and math is a good compromise, since these were probably selected because they’re the most valued cognitive skills in modern society and not because they favor or don’t favor one sex or another.

Sex differences in IQ

The following shows how men and women compare on the latest edition of the Wechsler Adult Intelligence scales (Note: the index scores and IQ are expressed on a scale where the sex combined mean and standard deviation (SD) of Americans are 100 and 15 respectively, while the subtests use a scale where the mean and SD are 10 and 3 respectively):

[Table: WAIS-IV subtest, index, and full-scale IQ scores by sex]

If we take this at face value, the best measure of intelligence is full-scale IQ and here we see men are a little smarter than women (mean IQ 101.2 vs 98.9) and a little more variable (SD = 15.3 vs SD = 14.6).

Unfortunately we can’t take this at face value because at least in earlier versions of the WAIS, there were attempts to eliminate sex differences by removing (or at least counterbalancing) items or even subtests that favored one sex over the other.

So how then can we find the true sex difference?

One partial solution is to look at content-free subtests like digit span and block design. Unlike a general knowledge subtest, where you can arbitrarily select items that favor men (knowledge of sports) or women (knowledge of fashion), you cannot select a series of digits or an abstract visual diagram that favors (or fails to favor) one sex or the other.  Thus content-free subtests are not amenable to arbitrary attempts to eliminate or create sex differences.

With the exception of the three subtests that make up the Verbal Comprehension index (Vocabulary, Similarities, Information), all the WAIS-IV subtests are content-free.   Thus by averaging the remaining three indexes that make up the Full-Scale IQ (Perceptual Reasoning, Working Memory, Processing Speed), we might get a less biased measure of the sexes.

When we do this, men average 100.4 (SD = 15.2) and women average 99.6 (SD = 14.5).  A trivial difference.  Indeed rounded to the nearest whole numbers, both sexes are 100 with an SD of 15.
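The averaging step is simple enough to sketch in a few lines of Python.  The per-index values below are hypothetical, chosen only so the composites come out to the figures quoted above; the real index means are in the WAIS-IV table:

```python
from statistics import mean

# Hypothetical per-index means (Perceptual Reasoning, Working Memory,
# Processing Speed) for each sex -- the actual values are in the table.
male_indexes = [101.0, 100.5, 99.7]
female_indexes = [99.0, 99.5, 100.3]

# Averaging the three content-free indexes gives each sex's composite.
print(round(mean(male_indexes), 1), round(mean(female_indexes), 1))  # → 100.4 99.6
```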

Ironically by looking only at subtests where they couldn’t remove sex differences, I found even fewer sex differences!

Does this mean sex differences in IQ are virtually non-existent?  Not necessarily, since the subtests themselves may have been selected to have small or counterbalancing sex differences, even when sex differences on specific items within said subtests cannot be removed.

Neanderthal brains & autism

Fascinating article by Jon Cohen at sciencemag.org:

Alysson Muotri, a geneticist at the University of California, San Diego (UCSD) School of Medicine, described his group’s Neanderthal organoids for the first time this month at a UCSD conference called Imagination and Human Evolution. His team has coaxed stem cells endowed with Neanderthal DNA into pea-size masses that mimic the cortex, the outer layer of real brains. Compared with cortical minibrains made with typical human cells, the Neanderthal organoids have a different shape and differences in their neuronal networks, including some that may have influenced the species’s ability to socialize. “We’re trying to recreate Neanderthal minds,” Muotri says….

Muotri focused on one of approximately 200 protein-coding genes that differ between Neanderthals and modern humans. Known as NOVA1, it plays a role in early brain development in modern humans and also is linked to autism and schizophrenia. Because it controls splicing of RNA from other genes, it likely helped produce more than 100 novel brain proteins in Neanderthals. Conveniently, just one DNA base pair differs between the Neanderthal gene and the modern human one

Muotri and his co-workers start with skin cells from a “neurotypical person”—someone without any known genetic defects linked to neurological disorders—and manipulate their genomes to turn them into pluripotent stem cells. Using CRISPR, the team then targets NOVA1 and swaps in the Neanderthal base pair to replace the modern human one. To avoid being misled by the “off-target” DNA changes made by CRISPR as well as genetic errors that can occur from producing the stem cells, they sequence the resulting cells and discard any that have unintended mutations.

It takes several months to grow the Neanderthal DNA–containing stem cells into organoids—”We call them Neanderoids,” Muotri says. Comparing them with modern human brain organoids made under identical conditions, his team found that the neuronal cells with the Neanderthalized NOVA1 migrate more quickly within an organoid as they form structures. “We think it’s related to the shape of the organoid, but we have no idea what it means,” says Muotri, noting that the Neanderoids have a “popcorn” shape, whereas modern human cortical organoids are spherical. The Neanderoid neurons also make fewer synaptic connections, creating what resembles an abnormal neuronal network.

Several of these differences mirror what Muotri has found studying neuronal development in the brains of children with autism. “I don’t want families to conclude that I’m comparing autistic kids to Neanderthals, but it’s an important observation,” says Muotri, who has a stepson with autism. “In modern humans, these types of changes are linked to defects in brain development that are needed for socialization. If we believe that’s one of our advantages over Neanderthals, it’s relevant.”

Muotri has developed the modern human brain organoids to the stage where his team can detect oscillating electrical signals within the balls of tissue. They are now wiring the organoids to robots that resemble crabs, hoping the organoids will learn to control the robots’ movements. Ultimately, Muotri wants to pit them against robots run by brain Neanderoids.

So it seems like a single base-pair change in the NOVA1 gene means the difference between popcorn brain and sphere brain.

[Image: popcorn-shaped Neanderthal organoid vs. spherical modern human organoid]

Could this be the single mutation Richard Klein claimed caused behavioral modernity to fully blossom some 50,000 years ago?  Perhaps before 50,000 years ago, both modern humans and Neanderthals had popcorn brain but after 50,000 years ago, modern humans suddenly mutated sphere brain, which allowed them to conquer Neanderthals and create representational art?  Seems too simple to be true.

I guess the way to test this would be to see how long ago the mutation occurred and whether there was a selective sweep for the human variant around 50,000 years ago.

However, since the NOVA1 gene seems related to both autism and schizophrenia, my guess is that it has something to do with executive functioning, the most elusive of cognitive abilities and the impairment shared by both autistics and schizophrenics.

Richard Klein noted that before 50,000 years ago, humans were incapable of organizing their camp sites.  They would eat, sleep, cook and defecate everywhere.  After 50,000 years ago, they suddenly started organizing campsites into cooking areas, eating areas, sleeping areas, etc.  Perhaps this was the dawn of executive functioning?

Btw, this is not the first time Neanderthal DNA has been linked to autism.   Steve Hsu quoted scientists making the following claim:

Of particular interest is the modern human-specific duplication on 16p11.2 which encompasses the BOLA2 gene. This locus is the breakpoint of the 16p11.2 micro-deletion, which results in developmental delay, intellectual disability, and autism5,6. We genotyped the BOLA2 gene in 675 diverse human individuals sequenced to low coverage as part of the 1000 Genome Project Phase I7 to assess the population distribution of copy numbers in homo-sapiens (Figure S8.3). While both the Altai Neandertal and Denisova individual exhibit the ancestral diploid copy number as seen in all the non-human great apes, only a single human individual exhibits this diploid copy number state.

Open thread: Happy Friday the 13th!

[Please place off-topic comments in open threads like this one, and not in the main articles]

One of the ironies of the Friday the 13th movies is Jason is supposed to be mentally retarded, and yet he has the most creative kills of any slasher in the genre.  Indeed comparing the way Jason kills people to the way any pre-1980s slasher kills people is like comparing humans before and after 50 kya (the date when some scientists think behavioral modernity blossomed).

Before 50 kya, humans could generally make artifacts only out of stone.  After 50 kya they suddenly started making them out of bone, antler, ivory and shell.

Similarly, before 1980, U.S. slashers could only kill with knives, saws, and their hands.  After 1980, they started killing with axes, pitchforks, machetes, corkscrews.  You name it!

Of course, the one killing weapon no self-respecting slasher will use is a gun.  When I watched Halloween IV with a large group of people, there was one scene where it looked like Michael Myers was about to shoot someone with a rifle.  The crowd was booing, moaning and hissing.  “C’mon man,” complained one black guy.

But when the crowd realized Michael had other plans for the gun besides shooting it, the crowd went wild with approval.

The question is: why don’t we want our slashers using guns?  Here’s my just-so story:  guns have only existed for a short period of time; by contrast, primates have been stabbed, slashed and hacked for millions of years, so there’s been far more time to evolve an innate fear of these methods of violence, and so they better fit the scary atmosphere horror fans seek.

At the 47-minute mark in the below video (hat-tip to commenter Bruno), Richard Dawkins and Steven Pinker talk about how what we fear has more to do with what was threatening in our prehistoric past than with what threatens us today, implying these fears are not rational but hardwired instincts.  I think most of us fear even harmless snakes because our ancestors spent millions of years being eaten by snakes on the African savannah:

[Video: Richard Dawkins and Steven Pinker in conversation]

Preserving your brain might kill you, but could it help you live forever?

The talk about brain preservation in the comment section reminded me of this excellent discussion on CBC radio about preserving your brain long after your body is dead and the progress scientists are making to solve this problem.

Apparently, the only way to salvage your brain for posterity is for it to be preserved while you’re still alive, thus killing you.  Waiting until after you’re dead will cause the chemical interactions that are your thoughts and memories to decay.

These scientists reject commenter RR’s belief that our minds cannot be reduced to our brains.

Neurological variables correlate 0.63 with IQ?

I’m starting to feel a bit sorry for HBD deniers.  Their world is crumbling.

Brain scans from people sitting doing nothing explain 20% of the variation in IQ (hat-tip to Steve Hsu).

Press release:

In a new study, researchers from Caltech, Cedars-Sinai Medical Center, and the University of Salerno show that their new computing tool can predict a person’s intelligence from functional magnetic resonance imaging (fMRI) scans of their resting state brain activity. Functional MRI develops a map of brain activity by detecting changes in blood flow to specific brain regions. In other words, an individual’s intelligence can be gleaned from patterns of activity in their brain when they’re not doing or thinking anything in particular—no math problems, no vocabulary quizzes, no puzzles.

“We found if we just have people lie in the scanner and do nothing while we measure the pattern of activity in their brain, we can use the data to predict their intelligence,” says Ralph Adolphs (PhD ’92), Bren Professor of Psychology, Neuroscience, and Biology, and director and Allen V. C. Davis and Lenabelle Davis Leadership Chair of the Caltech Brain Imaging Center.

To train their algorithm on the complex patterns of activity in the human brain, Adolphs and his team used data collected by the Human Connectome Project (HCP), a scientific endeavor funded by the National Institutes of Health (NIH) that seeks to improve understanding of the many connections in the human brain. Adolphs and his colleagues downloaded the brain scans and intelligence scores from almost 900 individuals who had participated in the HCP, fed these into their algorithm, and set it to work.

After processing the data, the team’s algorithm was able to predict intelligence at statistically significant levels across these 900 subjects, says Julien Dubois (PhD ’13), a postdoctoral fellow at Cedars-Sinai Medical Center. But there is a lot of room for improvement, he adds. The scans are coarse and noisy measures of what is actually happening in the brain, and a lot of potentially useful information is still being discarded.

“The information that we derive from the brain measurements can be used to account for about 20 percent of the variance in intelligence we observed in our subjects,” Dubois says. “We are doing very well, but we are still quite far from being able to match the results of hour-long intelligence tests, like the Wechsler Adult Intelligence Scale,”

Dubois also points out a sort of philosophical conundrum inherent in the work. “Since the algorithm is trained on intelligence scores to begin with, how do we know that the intelligence scores are correct?” The researchers addressed this issue by extracting a more precise estimate of intelligence across 10 different cognitive tasks that the subjects had taken, not only from an IQ test. …

Paper:

A distributed brain network predicts general intelligence from resting-state human neuroimaging data

Individual people differ in their ability to reason, solve problems, think abstractly, plan and learn. A reliable measure of this general ability, also known as intelligence, can be derived from scores across a diverse set of cognitive tasks. There is great interest in understanding the neural underpinnings of individual differences in intelligence, since it is the single best predictor of long-term life success, and since individual differences in a similar broad ability are found across animal species. The most replicated neural correlate of human intelligence to date is total brain volume. However, this coarse morphometric correlate gives no insights into mechanisms; it says little about function. Here we ask whether measurements of the activity of the resting brain (resting-state fMRI) might also carry information about intelligence. We used the final release of the Young Adult Human Connectome Project dataset (N=884 subjects after exclusions), providing a full hour of resting-state fMRI per subject; controlled for gender, age, and brain volume; and derived a reliable estimate of general intelligence from scores on multiple cognitive tasks. Using a cross-validated predictive framework, we predicted 20% of the variance in general intelligence in the sampled population from their resting-state fMRI data. Interestingly, no single anatomical structure or network was responsible or necessary for this prediction, which instead relied on redundant information distributed across the brain.


But what makes this all the more remarkable is that the study controlled for brain size.

As I’ve blogged about before, brain size itself is known to explain 16% of the variation in IQ (perhaps 20% when you control for gender, which many studies don’t), and because brain size was controlled, any IQ variation explained by brain size is independent of the 20% explained by brain activity.

So does this mean that by scanning both brain size and brain activity, we could explain perhaps 40% of IQ variation (20% + 20%)?  If so, a composite neurological score consisting of both brain size and brain activity would correlate 0.63 with IQ (the square root of 0.40, the combined fraction of variance explained).
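The back-of-envelope arithmetic here assumes the two predictors are uncorrelated (plausible, since the study controlled for brain volume), so their explained variances simply add:

```python
import math

var_size = 0.20      # share of IQ variance explained by brain size
var_activity = 0.20  # share explained by resting-state brain activity

# For uncorrelated predictors, explained variances add, and the
# multiple correlation is the square root of the total R-squared.
r = math.sqrt(var_size + var_activity)
print(round(r, 2))  # → 0.63
```

If the two measures were partly correlated, the combined R-squared would be less than 0.40 and the composite correlation correspondingly lower than 0.63.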

Of course none of this proves IQ is genetic, but what it may prove is that IQ is largely biological.

Or does it?

Here we get into the philosophically tricky distinction between culture and biology.  Arthur Jensen has stated that g (general intelligence) appears to be a wholly biological variable, not amenable to psychological manipulation.

But how do we interpret Jensen’s claim when all psychology is ultimately biological?  Even if one asserts that IQ tests measure only middle-class knowledge, that in itself must leave a neurobiological imprint, as all learning does.  If so, machine learning should eventually be able to scan your brain to determine whether you took French class in high school or read Hamlet, should it not?

So if culture itself affects brain physiology, what does it even mean for g to not be amenable to psychological manipulation?  I think it means variation in g must be caused by biological variation (genes, nutrition, etc.) and not by cultural variation.  Overall brain size is probably not much influenced by culture (except in extreme pathological cases), but I don’t know about brain activity.