With Fall now upon us, I wanted to post a clip from one of the greatest moments in horror history. The closet scene in John Carpenter’s classic 1978 film Halloween:
The original Halloween was an incredibly classy film. My hero, the late Roger Ebert, generally hated slasher films, but he thought Halloween was excellent and named it one of the 10 best films of 1978. Much of the film’s appeal comes from the stylish performance of Jamie Lee Curtis, who played the quiet, nerdy, all-American suburban babysitter Laurie Strode: the quintessential girl next door, who manages to keep her head together under pressure.
Laurie got much better grades than her slutty best friends who were killed off that Halloween night, but the real test of her intelligence was her ability to adapt. Despite the killer having every advantage (bigger, stronger, taller, a butcher knife), Laurie turns the situation around to her advantage, literally turning a clothes hanger into a weapon.
But for those who like newer movies, I recently saw Into the Forest. It’s not really a horror film at all (though there were some horrific scenes), but it was classified as horror by my cable company. I LOVE post-apocalyptic films like Cormac McCarthy’s The Road, and this was another of that ilk. There’s just something so incredibly cozy about the End of the World, especially when a few loved ones are forced to stick together and survive in a house in the woods as the rest of society crumbles. And of course I adore actress Ellen Page, who gained fame in Juno. She just has a certain quality about her.
A bunch of us at work are extremely excited about the new Edward Snowden movie since it relates so closely to what we do every day. Oliver Stone deserves great credit for telling this man’s story. As usual, O’Reilly doesn’t get it:
One of the biggest mysteries in psychology is the Flynn effect: the fact that over the 20th century, people performed better and better on IQ tests. Of course, the average IQ in Western countries is by definition always about 100, but because people keep scoring higher every decade, the tests routinely have to be made more difficult and the norms must be regularly updated to keep the mean IQ from rising far above 100.
Now if this only happened on culturally loaded tests like General Knowledge and Vocabulary, we could simply conclude that the tests are just culturally biased against past generations who had less access to schooling and media. But some of the biggest gains have been found on tests like the Raven which were explicitly designed to be culture fair.
A sample item from the Raven Matrices. One must complete the pattern above using one of the missing pieces below. For decades this test was considered the gold standard of culture reduced testing.
How big is the Raven adult Flynn effect? About 30 IQ points per century in the Anglosphere.
Although some studies show incredibly large Raven gains, such as 7 points per decade, these have only been documented over relatively short intervals (around 30 years) and tend to come from countries with massive changes in nutrition (Holland). Looking at the Anglosphere, which has been the most important part of the world for the last few centuries, the gains appear to have been about 3 points per decade over the 20th century.
In one study (see figure 2) the top 10% of British people born in 1877 (by definition those with IQs above 120 for their era) performed the same on the Raven as the bottom 5% of British people born in 1967 (by definition those with IQs below 75 for their era). In other words, performance on the Raven had increased by the equivalent of 45 points in less than a century! Of course it wasn’t a level playing field, because those born in 1877 took the test when they were a somewhat elderly 65, while those born in 1967 took it as sharp young 25-year-olds. However, Flynn cites longitudinal studies showing that Raven-type reasoning declines by no more than 10 points by age 65. That still leaves us with 35 points to explain.
Another source of inaccuracy was that although the test was untimed for both groups, those born in 1877 took it supervised while those born in 1967 got to take it home. This could make a large difference; not necessarily because the unsupervised group would cheat, but because they would probably take more breaks in the comfort of their homes, returning to challenging items after they had time to relax and see them from a fresh perspective, while those tested supervised in some strange room were probably more likely to rush through the tasks so they could go home. I would estimate that being allowed to take a test home improves test performance by about 5 IQ points on average, though this is just a guess.
But that still leaves a huge difference of 30 IQ points between 25-year-olds born in 1877 and 25-year-olds born in 1967. That gap is the Flynn effect.
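The adjustment arithmetic above can be made explicit in a short sketch (the 10-point aging correction and the 5-point take-home advantage are the hedged figures from the text, not measured quantities):

```python
# Decomposing the raw 1877-vs-1967 Raven gap into the adjusted Flynn effect.
raw_gap = 45          # equivalent IQ points separating the two birth cohorts
aging_penalty = 10    # max decline in Raven-type reasoning by age 65 (per Flynn)
take_home_bonus = 5   # rough guess for the advantage of unsupervised at-home testing

flynn_effect = raw_gap - aging_penalty - take_home_bonus
print(flynn_effect)   # 30
```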
Of course, 25-year-olds born in 1967 are 49 years old today. Have more recent cohorts of 25-year-olds continued to improve on matrix reasoning problems? To answer this question I turn to the WAIS-IV, normed in 2006. On the Matrix Reasoning norms of the WAIS-IV, there’s virtually no difference in performance between those born in 1967 and those born in 1981, despite the fact that the former were older when tested. This suggests the Raven Flynn effect may have slowed for post-1967 birth cohorts.
Altogether, it looks like Raven scores in the Anglosphere have increased by the equivalent of 30 IQ points in just over a century (birth cohorts 1877 to 1981).
How much of these gains can be explained by nutrition? About 10 IQ points.
The preponderance of evidence suggests that adult brain size has recently increased by about 1.3 SD per century in Europe and North America. A study of identical twins, where one is born malnourished and the other born healthy, suggests that the malnourished twin’s verbal IQ is unscathed, but her Performance IQ and brain size are both equally stunted. Thus, if Victorians were 1.3 SD stunted in brain size, we might expect them to be 1.3 SD stunted in Performance IQ, but not at all stunted in verbal IQ. I hypothesize this is probably because the brain evolved to prioritize verbal IQ when nutrients are limited, since humans can survive if they have the verbal IQ to access the group’s cultural knowledge; they don’t need the spatial IQ to reinvent the wheel.
Traditionally, Performance IQ tests measured visual and spatial talent and the ability to manipulate objects (jigsaw puzzles, blocks). By contrast the Raven, despite using the visual medium, emphasizes analogical thinking, not visual-motor abilities and spatial synthesis. Indeed, Richard Lynn notes that although malnutrition stunts Raven ability more than it stunts verbal and memory abilities, Raven ability is preserved compared with more traditional Performance IQ tests like Block Design.
So if the Victorians were 1.3 SD stunted in brain size, and if nutritional damage to brain size perfectly matches nutritional damage to Performance IQ, while verbal IQ is completely preserved, with Raven IQ in between both extremes, we might guess that malnutrition stunted Victorian Raven scores by 0.65 SD (half of 1.3 SD, or the equivalent of about 10 IQ points).
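The conversion from SD units to IQ points follows from the IQ scale’s standard deviation of 15; a minimal sketch of the estimate above:

```python
# Converting the hypothesized Raven stunting from SD units to IQ points.
IQ_SD = 15                                     # IQ scale standard deviation
brain_size_deficit_sd = 1.3                    # estimated Victorian brain-size stunting
raven_deficit_sd = brain_size_deficit_sd / 2   # midpoint between verbal IQ (0 SD) and
                                               # Performance IQ (1.3 SD) stunting
iq_points = raven_deficit_sd * IQ_SD
print(raven_deficit_sd, round(iq_points))      # 0.65 10
```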
How do we explain the remaining 20 IQ points?
To explain the remaining 20 IQ points of the Flynn effect, in the past I have invoked schooling and socio-economic changes. For example, it’s well known that attending high school, which most Victorians did not do, adds 10 points to your IQ score. And even Jensen admitted that being raised in a higher socio-economic environment (which Victorians also lacked) adds 10 points to IQ (the well-known adoption effect). Of course these cultural effects were thought to vanish by adulthood as genes become more important, but as the Dickens-Flynn model explained, that’s only true within generations, when genes and environment correlate, because genetically advantaged people create cultural advantages, causing cultural effects to become a mere extension of genetic effects.
Within generations, the boats that are naturally tallest sit on the highest waves, so their extrinsic advantage (wave height) simply multiplies their intrinsic advantage (natural height), causing the latter to seem omnipotent. However, between generations, cultural advances are like a rising tide that lifts all boats, as even genetic dullards today enjoy far more schooling and socio-economic advantages than many geniuses of past centuries.
But why do cultural advantages improve IQ scores, which are supposed to measure innate ability?
Merely saying that schooling and socio-economic advances improve IQ scores doesn’t get us anywhere. The question is why. In his book Does Your Family Make You Smarter?, scholar James Flynn seems to hint at two explanations: (1) the brain is like a muscle, and modern culture allows us to exercise it; (2) modern culture causes us to apply logic to the hypothetical. Although Flynn (if I understood him correctly) seems to lump these two explanations together, I think they are better understood as separate hypotheses.
How much of the gains can be explained by exercising the brain as if it were a muscle? Zero points.
The human brain is like a muscle. Our physical muscles develop in response to the demands made on them; compare a weightlifter’s muscles with those of a swimmer. By 1940, most Americans were driving cars and this made new demands on their mapping skills. These would be reflected in a larger hippocampus, the part of the brain that is the seat of map reading (for example, London taxi drivers have very enlarged hippocampuses). Today we are getting automatic guidance systems and these skills will decline. This has nothing to do with better or worse genes but reflects whatever cognitive skills our society asks us to do.
This is an attractive argument, but there are three major problems with it.
1) The effects of cognitive training seem to show very little transfer. So practice navigating a car might, if you’re lucky, make you better at navigating on foot, but it’s unlikely to do much, if anything, for your other spatial abilities, like solving a jigsaw puzzle on the WAIS IQ test. See Why knowledge & education can NOT make you smarter.
2) If the brain really were like a muscle, and the Flynn effect were largely caused by people getting more mental exercise, then we’d expect between-generation brain-size gains to increase from infancy to adulthood, just as, if people today lifted more weights than Victorians did, we’d expect the between-generation muscle-size gains to increase from infancy (when people don’t lift weights) to adulthood (when years of weightlifting accumulate).
Instead it’s just the opposite. Scholar Richard Lynn reports huge gains in head circumference in British one-year-olds and British seven-year-olds (1.5 cm and 2 cm in 50 years, respectively). Given that the SD for head circumference among whites at both ages is 1.5 cm (see table 1 of this paper), that implies head circumference increases of 2 SD and 2.67 SD per century, respectively. However, by adulthood there’s no evidence that head size or brain size has increased by more than 1.3 SD per century in the Western world, and even that might be an overestimate.
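The per-century SD figures above follow from doubling the 50-year gains and dividing by the 1.5 cm standard deviation; a quick check:

```python
# Head-circumference gains per 50 years converted to SD units per century.
SD_CM = 1.5  # SD of head circumference among whites at both ages (cm)

for age_group, gain_cm_per_50y in [("one-year-olds", 1.5), ("seven-year-olds", 2.0)]:
    per_century_sd = gain_cm_per_50y * 2 / SD_CM  # double the 50-year gain, scale by SD
    print(age_group, round(per_century_sd, 2))
# one-year-olds 2.0
# seven-year-olds 2.67
```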
3) Lastly, if a large chunk of the Flynn effect really were analogous to newer generations building more muscle, then the gains made from developing cognitive skills would have real-world consequences, just as building real muscle has real consequences for strength performance. If staying in school longer were really causing us to exercise our brain’s “abstract reasoning muscles” as measured by tests like the Raven, then shouldn’t we expect even more scientific breakthroughs in abstract fields like math, science, and philosophy? Indeed, James Flynn was perhaps the first to state that if the Flynn effect reflected mostly real gains in intelligence, we’d expect “a cultural renaissance too great to be overlooked”. So why does Flynn now compare these mysterious IQ gains to the very real strength gains weightlifters experience? Perhaps scholar Steven Pinker convinced him we are experiencing such a renaissance?
However, according to scholar Charles Murray, human accomplishment actually declined from 1850 to 1950, and declined even more post-1950. And in the book The Genius Famine, scholars Edward Dutton and Bruce Charlton also argue that “genius”-level achievements are declining. But to the extent that they have not declined (as Pinker argues), this can be more than explained by 1) a 10-point increase in real intelligence caused by nutrition, 2) an increase in mass education, 3) a greater population, meaning more talent to draw from, and 4) building on the accomplishments of our ancestors. There is simply not enough modern accomplishment left over to explain once you factor in these four other factors.
How much can be explained by hypothetical thinking? Perhaps 10 points.
Luria looked at people just before they entered the scientific age, and he found that these people were resistant to classifying the concrete world. They wanted to break it up into little bits that they could use. He found that they were resistant to deducing the hypothetical, to speculating about what might be, and he found finally that they didn’t deal well with abstractions or using logic on those abstractions.
Now let me give you a sample of some of his interviews. He talked to the head man of a person in rural Russia. They’d only had, as people had in 1900, about four years of schooling. And he asked that particular person, what do crows and fish have in common? And the fellow said, “Absolutely nothing. You know, I can eat a fish. I can’t eat a crow. A crow can peck at a fish. A fish can’t do anything to a crow.” And Luria said, “But aren’t they both animals?” And he said, “Of course not. One’s a fish. The other is a bird.” And he was interested, effectively, in what he could do with those concrete objects.
And then Luria went to another person, and he said to them, “There are no camels in Germany. Hamburg is a city in Germany. Are there camels in Hamburg?” And the fellow said, “Well, if it’s large enough, there ought to be camels there.” And Luria said, “But what do my words imply?” And he said, “Well, maybe it’s a small village, and there’s no room for camels.” In other words, he was unwilling to treat this as anything but a concrete problem, and he was used to camels being in villages, and he was quite unable to use the hypothetical, to ask himself what if there were no camels in Germany.
It seems that pre-modern people simply didn’t understand the basic rules of taking tests: you must assume that whatever information the tester gives you is true, and you must be willing to take it seriously.
They lived in a world where life depended on solving actual problems, so they couldn’t relate to tests that required them to solve imaginary problems just to prove they had problem-solving ability. But those of us who have been socialized by decades of schooling and educated parents are quite used to imaginary problems and are quite willing to take them seriously.
But I would call this mere test sophistication. I would not say that training people to solve hypothetical problems has increased real intelligence, because real intelligence, by definition, is the ability to solve real problems. Problems that are not real are technically not even problems.
Of course to measure one’s ability to solve all types of problems, test makers must create hypothetical problems, but if a test-taker can’t interpret hypothetical situations as actual problems, then he is not necessarily lacking in intelligence, but rather is untestable via hypothetical questions. Such a person could only be tested if we made those hypothetical problems real, like we do when we test animals. We don’t ask a monkey how he would use the bamboo sticks to get the banana, we deny him the banana until he figures out how to get it. We make the hypothetical real, since it’s the only way he’ll take the test.
Scholars mired in the dogma that “real” intelligence cannot increase over time dismiss them as mere skill gains acquired by better education. This is self-defeating. The genetic limitations of our brains were supposed to tell us who was capable of profiting from education. Nothing was more evident to the elite of 1900 than that the masses could never be trained to assume the demanding cognitive roles the elite monopolised at that time. Well, the entire modern world has proved them wrong.
Intellectual progress has brought moral progress. Among school-demanded skills is applying logic to generalised statements and taking the hypothetical seriously. People of the Victorian era saw moral maxims as concrete things, no more subject to logic than any other concrete thing. Unlike us – people of the late 20th and early 21st century educated within an analytic scientific tradition – they would not see hypotheticals as universal criteria to be generalised.
Flynn seems unwilling to make a distinction between a mental skill and intelligence. I think a narrow mental ability is just a skill or a talent. A broader one is intelligence, or at least a major part of intelligence. Clearly the ability to cope with the hypothetical transfers to several different kinds of cognitive tests, so it may at first glance appear to have broad transfer, and thus great adaptive value.
But then we must remember that tests, by definition, are hypothetical problems, so of course an ability to adapt to hypotheticals will enhance hypothetical problem solving, but that tells us nothing about its value to real problem solving.
Perhaps it has made us more intelligent in the cocktail party sense of having deep philosophical or moral views, but in terms of solving actual novel problems, I doubt it’s done much. Why the skepticism? Because we already know nutrition raised real intelligence by 10 points since the Victorian era, and our real-world accomplishments are not impressive enough to add any more points to our real intelligence, given all the other advantages of modernity (a larger population of talent and the ability to learn from the past).
But I do agree with Flynn, that hypothetical problem solving is a major cause of the Flynn effect, perhaps equivalent to nutrition (10 points). But I would consider it a learned skill, or trick of the test taking trade, rather than a raw ability that was improved through mental exercise.
Another quote from James Flynn’s TED talk:
My father was born in 1885, and he was mildly racially biased. As an Irishman, he hated the English so much he didn’t have much emotion for anyone else. (Laughter) But he did have a sense that black people were inferior. And when we said to our parents and grandparents, “How would you feel if tomorrow morning you woke up black?” they said that is the dumbest thing you’ve ever said. Who have you ever known who woke up in the morning — (Laughter) — that turned black?
In other words, they were fixed in the concrete mores and attitudes they had inherited. They would not take the hypothetical seriously, and without the hypothetical, it’s very difficult to get moral argument off the ground. You have to say, imagine you were in Iran, and imagine that your relatives all suffered from collateral damage even though they had done no wrong. How would you feel about that? And if someone of the older generation says, well, our government takes care of us, and it’s up to their government to take care of them, they’re just not willing to take the hypothetical seriously.

Or take an Islamic father whose daughter has been raped, and he feels he’s honor-bound to kill her. Well, he’s treating his mores as if they were sticks and stones and rocks that he had inherited, and they’re unmovable in any way by logic. They’re just inherited mores. Today we would say something like, well, imagine you were knocked unconscious and sodomized. Would you deserve to be killed? And he would say, well that’s not in the Koran. That’s not one of the principles I’ve got.

Well you, today, universalize your principles. You state them as abstractions and you use logic on them. If you have a principle such as, people shouldn’t suffer unless they’re guilty of something, then to exclude black people you’ve got to make exceptions, don’t you? You have to say, well, blackness of skin, you couldn’t suffer just for that. It must be that blacks are somehow tainted. And then we can bring empirical evidence to bear, can’t we, and say, well how can you consider all blacks tainted when St. Augustine was black and Thomas Sowell is black. And you can get moral argument off the ground, then, because you’re not treating moral principles as concrete entities. You’re treating them as universals, to be rendered consistent by logic.
Flynn correctly cites the racism of past generations as evidence of poor reasoning, and yet, as I noted above, he also claimed past generations struggled with generalizing and categorizing (“What do crows and fish have in common?”) and hypothetical syllogisms (“There are no camels in Germany. Hamburg is a city in Germany. Are there camels in Hamburg?”). But what is racism if not the tendency to generalize, categorize and use syllogisms?
To be a racist, you must be good at recognizing who is black, which requires an ability to see common facial, colour and hair traits. It also requires the ability to think syllogistically: “Our new neighbor is black. I don’t like blacks. Therefore, I don’t like our new neighbor.”
So clearly, Victorians had the ability to think in these ways, but they could not, or would not, apply that thinking to the hypothetical problems posed on tests or in abstract discussions.
This makes perfect sense. Intelligence evolved to enable us to adapt, to take whatever situation we’re in, and turn it around to our advantage. Thus we are genetically predisposed to use our intelligence to solve practical problems; problems that are actually problems, not the make-believe problems of the Raven.
It is a testament to the decadence of modernity that we have few real problems to solve, so we’re motivated to solve imaginary problems, unlike our ancestors who “would not take the hypothetical seriously” in Flynn’s words.
Note that even Flynn himself says “would not”, not “could not”. This raises the question: is hypothetical thinking even a skill, as opposed to merely a motivation? I’ll tentatively assume the former, and consider motivation effects separately.
How much of the gains can be explained by motivation? About 10 points.
On tests like the Raven Progressive Matrices, where focus, persistence and concentration are required, it always seemed like common sense to me that motivation was a major factor.
Particularly in samples where education and socio-economic status are low (as was the case with Victorians), tests like the SPM (Standard Progressive Matrices) and CPM (Colored Progressive Matrices) can be very annoying indeed. Scholar J. P. Rushton and colleagues reported on giving these tests to the Roma:
Most Roma found the tasks very difficult; some complained of getting a “headache.” They typically asked to stop the test before 30 min. After completing and analyzing 231 sets of scores on the SPM, it was decided to switch to the CPM. The remaining 92 subjects were administered the CPM. Although test-takers seemed to enjoy this version more, they continued to report the task was difficult and gave them a headache.
Some tests require too much focus, causing folks to get a headache or become frustrated.
Motivation is a very likely explanation for the Flynn effect because Victorians were used to the outdoors, chopping wood and riding horses in the fresh air. Sitting at a desk in an office concentrating on Raven puzzles for an hour must have been most painful indeed. By contrast, modern people have typically spent 13 years in school and work in white-collar jobs. We’re used to sitting still and concentrating, and we’re intrinsically motivated to prove we’re smart on standardized tests. We’re also more likely to take tests seriously and find them interesting.
The effects of motivation on IQ scores are substantial. A “meta-analysis of random-assignment experiments testing the effects of material incentives on intelligence-test performance” on a collective 2,008 participants found that “incentives increased IQ scores by an average of 0.64 SD, with larger effects for individuals with lower baseline IQ scores.” 0.64 SD equates to about 10 IQ points. Further, large incentives produced IQ gains of 1.63 SD (24 IQ points!).
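Converting those standardized effect sizes to IQ points is just a matter of multiplying by the IQ scale’s SD of 15; a quick sketch:

```python
# Effect sizes from the incentives meta-analysis, converted to IQ points.
def sd_to_iq_points(d, iq_sd=15):
    """Convert a standardized effect size d into IQ points (1 SD = 15 points)."""
    return d * iq_sd

print(round(sd_to_iq_points(0.64)))  # 10 -> average incentive effect
print(round(sd_to_iq_points(1.63)))  # 24 -> large incentives
```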
Of course, it’s not as though any extrinsic reward has made modern people more motivated on IQ tests than Victorians were, but growing up with more schooling and socio-economic advantage likely produced a culture where people are more intrinsically motivated to do well on mental tests. This could easily explain 10 points of the Flynn effect, particularly on tests like the Raven that require focused effort.
In the Anglosphere, Raven IQ has increased by the equivalent of 30 IQ points since the 19th century. I believe there are three major causes of this increase: (1) prenatal and perinatal nutrition (including disease reduction), which has also substantially increased brain size; (2) the ability and/or willingness to take hypothetical problems seriously; and (3) the motivation to sit still, focus, persist, and concentrate on boring tests. Each of these factors likely explains about a third of the Raven Flynn effect, though in my view, only the first third (nutrition) should be considered an increase in real intelligence (the mental ability to solve any problem).
While James Flynn correctly asserts that the brain is like a muscle and can get bigger in response to cognitive exercise, most cognitive exercise has extremely narrow effects, and the fact that 20th-century brain-size gains were largest in early childhood suggests they are immutable early-life nutritional gains, not the result of decades of mental stimulation.
The real lesson of the Flynn effect is that the Raven Progressive Matrices is NOT a culture reduced test. If culture reduced testing is to continue in the future, we’ll need tests that don’t require hypothetical abstractions and are also fun and engaging enough not to require persistent motivation. I recommend tests like Digit Span (which shows virtually zero Flynn effect) and Block Design (whose Flynn effect in adults can be 100% explained by the effects of prenatal nutrition on Performance IQ). A properly weighted composite of both tests could have a g loading of 0.8+. Identifying a culture reduced measure of verbal ability remains an interesting challenge.
Trump is not shy about his intellectual prowess. As he tweeted in 2013: “Sorry losers and haters, but my I.Q. is one of the highest -and you all know it! Please don’t feel so stupid or insecure, it’s not your fault.”
Of course, “smart” is a bit subjective. There’s book smarts as well as street smarts. Many would say Trump has run a pretty smart campaign. But clearly he’s saying that his brain is very sharp — as he puts it, “super-genius stuff.” At one point, Trump rebutted criticism from columnist George Will and GOP consultant Karl Rove by saying: “I’m much smarter than them. I think I have a much higher IQ. I think I went to a better college — better everything.”
Trump’s college background, in fact, is often his key piece of evidence for his intellectual superiority. But there’s less here than meets the eye. Trump did graduate from the Wharton School of Business at the University of Pennsylvania, an Ivy League college. But Trump did not get an MBA from Wharton; he has a much less prestigious undergraduate degree. He was a transfer student who arrived at Wharton after two years at Fordham University, which U.S. News & World Report currently ranks 66th among national universities. (Besides, simply going to an Ivy League school doesn’t prove you’re a genius.)
Gwenda Blair, in her 2001 book “The Trumps,” said that Trump’s grades at Fordham were just “respectable” and that he got into Wharton mainly because he had an interview with an admissions officer who had been a high school classmate of his older brother. And Wharton’s admissions team surely knew that Trump was from one of New York’s wealthiest families.
The average SAT score at Fordham University (reading + math) is 1260 (post-1995 scale). This equates to an IQ of about 123 (white norms) and there’s no reason to think it’d be much different in Trump’s day. But since Fordham students are selected by SAT scores, we’d expect them to regress to the U.S. mean (about 98 in Trump’s day) on an IQ test for which they were not selected.
I estimate that in the general U.S. population, the SAT correlates between 0.53 and 0.74 with the WAIS IQ test, so Fordham students who were 25 points above the U.S. mean of 98, would regress to anywhere from 53% to 74% of 25 points above 98, so either a mean IQ of 113 to 117.
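The regression-toward-the-mean step above can be sketched with the textbook formula, using the means and correlation range given in the text (exact rounding conventions can shift the endpoints by a point or two):

```python
# Expected mean on a test the group was NOT selected for:
# regressed = pop_mean + r * (group_mean - pop_mean)
def regressed_mean(group_mean, pop_mean, r):
    """Regress a selected group's mean toward the population mean by correlation r."""
    return pop_mean + r * (group_mean - pop_mean)

low = regressed_mean(123, 98, 0.53)   # low end of the SAT-WAIS correlation range
high = regressed_mean(123, 98, 0.74)  # high end of the range
print(low, high)                      # about 111.25 and 116.5
```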
The fact that Trump had “respectable” grades suggests he was at least as smart as the average Fordham student, but perhaps not much smarter, so an IQ of 113 to 117.
But if Trump can hold his own in the debate against Hillary that starts in less than an hour, then I may have gravely underestimated him. For National Merit finalist Hillary has an IQ of at least 140 as measured by the PSAT (though probably no higher than 140, since she flunked the bar exam).
If Trump’s IQ is indeed only 94, this will be the greatest IQ mismatch in the history of presidential debates, and Trump is at risk for massive humiliation. However if his IQ is 113 to 117, as his grades suggest, then he might do fine. JFK (IQ 119) more than held his own against Nixon (IQ 143), and Reagan more than held his own against Carter despite perhaps an even bigger IQ gap.
I watched the recent PBS documentary on The Mamas and the Papas and they must be right up there with The Beatles as one of the greatest bands of all time. What an incredible collection of music for a group that was only together for a couple of years.
It’s amazing how young they looked in the 1960s. I had a professor who argued people didn’t get old with age, it just appears that way because older cohorts were always wrinkled and grey.
The genius of the group was probably John Phillips (more evidence that height and IQ are correlated) for writing those catchy tunes, but watching the documentary, it was Mama Cass who stole my heart.
As a fat Jewish girl during a more anti-Semitic era, she must have wanted so desperately to fit in with these cool white kids, and I’m so glad they accepted her, as did America. Cass’s incredible voice added real value to the culture, as did her creative body language. I love when she introduces the song Creeque Alley and says “cue the tape.” Watch how she opens her mouth wide and starts bouncing up and down in excitement for the song about to be played (Very Oprah!).
One of the things we learned in the documentary is that originally, Cass did not even want to appear on stage with the slim, blond, blue-eyed Michelle Phillips and would only sing from the back of the room. What an ego boost that must have been for Michelle: “you’re so beautiful, I feel ugly being seen with you.” Did Cass really have such low self-esteem, or was she a master manipulator who knew exactly how to make others like her?
After The Mamas and the Papas broke up, Mama Cass started a solo career, and probably wondered: who’s going to turn out to see a fat Jewish girl sing solo? The show sold out, two nights in a row! She was so excited she phoned Michelle Phillips to share the happy news. That night she died of a heart attack in her sleep at the age of only 32.
“I do know one thing” said Michelle Phillips. “Cass Elliot died a very happy woman”.
At that point I was so overwhelmed with emotion I had to turn the documentary off.
I haven’t forgotten about reader requests to estimate the IQs of Eminem, Ben Shapiro, the Founding Fathers or even Clark Ashton Smith, but with Halloween only weeks away, I wanted to write a quick post about GG Allin while he’s still fresh in my memory (hat-tip to commenter jeanbedelbokassa for suggesting this person).
Identified with serial killers
In studying GG Allin over the past week or so, I was struck by his claim that if he hadn’t been an entertainer, he would have been a serial killer, and by reports that he was friends with John Wayne Gacy. Of course I should stress that I’m in no way equating an artist like GG with an actual killer, but there are aspects of GG’s childhood and adult persona that remind me of some of the most grotesque murderers in recent history.
When heavy-metal musician turned director and GG Allin fan Rob Zombie remade John Carpenter’s classic Halloween, he angered fans of the original by turning villain Michael Myers from a quiet suburban boy into a white-trash, sexually ambiguous, bullied heavy-metal fan, but Zombie based the character on real-life cases.
In an interview with Vanity Fair, Zombie explained what he knew about serial killers:
I love ’em all. Not, you know, as people or anything, but they all make for great stories. I think Henry Lee Lucas is probably one of my favorites.
He and his buddy Ottis Toole were just a couple of deranged rednecks. But given his upbringing, y’know, it’s just not that surprising. Some of these guys, you think, “What would make a person do something like this?” And then you read about their upbringing and you’re like, “Oh, okay, well I guess that might do it.”
He also said:
I’ve read so many books about these guys, I start confusing their backstories. But with Henry and Ottis, I remember it was pretty horrible. Stripper moms, alcoholic dads, I think they were both forced to dress up like girls at some point. Henry killed his mom and raped her corpse, and Ottis had a thing for arson and cannibalism. They were into some really perverted stuff, like having sex with dead animals and that kinda thing.
Compare these descriptions with GG Allin, who was raised in the woods without running water or electricity, with a father so mentally ill he reportedly named his son Jesus because he had a vision of Jesus asking him to (the name got changed to GG). In school GG was bullied by other kids, and by high school he would start dressing like a girl (shocking for the time period).
Like Lucas and Toole, GG would engage in the most sexually perverse behavior, such as self-reportedly raping people on stage (never confirmed), defecating on stage and throwing it at the audience, and shoving bananas into the most disgusting parts of his body and then eating them. In preparing for this post, I watched a documentary about him, and it was so disturbing I couldn’t even look at the screen and felt physically nauseous for hours afterwards.
In other words, GG, who said he would have been a serial killer, is very much of that ilk, both behaviorally and biographically, and the most depraved serial killers seem to average IQ 80. Note, I’m not talking about all serial killers, just the really perverted ones, since the same prenatal brain damage that causes paraphilias also tends to lower IQ.
And yet GG did not become a killer, but rather an extremely successful man who, at his peak, claimed to have a million followers. Such fame and adulation surely made him more powerful than many city mayors, though not quite as powerful as a U.S. congressman, governor, or national media personality.
Demographic prediction: Statistically expected IQ of a supremely powerful would-be serial killer: IQ 104
Let’s say GG was three standard deviations more powerful than the average white American on a normalized curve, which would make him about 4 SD more powerful than the average perverted would-be serial killer, since (would-be) serial killers are of low-middle socio-economic status.
If there were a perfect correlation between IQ and power, we’d expect GG’s IQ to be 60 points above the would-be depraved serial killer mean of 80, but since the correlation between IQ and power only seems to be about 0.4, his IQ would likely be only 40% as extreme. For example, U.S. presidents are only about 40% as far above the mean in IQ as they are in power.
So instead of GG’s IQ being 60 points above 80, he’d likely be about 24 points above 80, or IQ 104. This is consistent with a psychologist who interviewed him in prison and found his intelligence to be at least average (hat-tip to the omniscient ruhkukah).
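The arithmetic behind this demographic prediction can be laid out explicitly. A minimal sketch, using only the figures assumed above (a 4 SD power advantage over a group with mean IQ 80, IQ SD 15, and an IQ-power correlation of about 0.4):

```python
# Demographic prediction of GG's IQ from his "power" percentile.
# Assumptions from the text: ~4 SD above the mean of depraved would-be
# serial killers in power, group mean IQ 80, IQ SD 15, and an
# IQ-power correlation of roughly 0.4.
power_sd_above_group = 4
group_mean_iq = 80
iq_sd = 15
iq_power_r = 0.4

# Regression toward the mean: the expected IQ deviation is only r times
# as extreme (in SD units) as the power deviation.
expected_iq = group_mean_iq + iq_power_r * power_sd_above_group * iq_sd
print(round(expected_iq))  # 104
```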
Historiometric scholastic IQ
Because the demographic prediction made above has such a large standard error, it must be substantiated by historiometric evidence to be at all credible. Although historiometric IQs have been ridiculed, when competently done, they are perhaps one of the most important concepts in social science.
When I first read that GG had repeated the third grade, I didn’t give it much weight because judging by his adult persona, I assume he was a rebellious kid who constantly misbehaved. When interviewed, his mother confirmed that young GG simply didn’t care about school, but what she also said was that it was her decision to have him repeat the third grade because he was getting so far behind the other kids.
This implies that had GG taken a scholastic achievement test at age nine, he probably would have scored like an average white eight-year-old (since he was a year behind), thus implying a ratio IQ equivalent of 89 (8/9 = 0.89). This is considerably lower than the crude demographic prediction of 104 (white norms).
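The ratio-IQ conversion used here is simply mental age over chronological age, times 100:

```python
# Ratio IQ: (mental age / chronological age) * 100.
# At age nine, GG was performing like an average eight-year-old.
mental_age = 8
chronological_age = 9
ratio_iq = mental_age / chronological_age * 100
print(round(ratio_iq))  # 89
```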
Historiometric Draw-A-Man IQ
The website ggallin.com claims to have original art work drawn by GG. This was an excellent opportunity for me to score GG on Florence Goodenough’s legendary Draw-A-Man IQ test, which was revised by Dale Harris in 1963. The test aimed not to measure artistic talent, but basic knowledge of the human body, which serves as a rough and ready measure of intelligence.
Original drawing by GG Allin
Applying the painstaking scoring procedure of the Goodenough-Harris Drawing Test to the above picture, I found that of the 73 items in the 1963 manual, GG failed 13 (items #6,7,8,12,13,16,21,25,37,49,60,61,69), for a raw score of 60/73.
The smoothed mean and standard deviation for American fifteen-year-olds (considered roughly adult level on tests like this) was 45.2 and 9.83 respectively, making GG’s score 1.51 SD above the mean (IQ 123, U.S. norms; about 122, white norms). However because the norms were from fifteen-year-olds circa 1963, they belonged to a birth cohort eight years older than GG’s.
According to scholar Richard Lynn, scores on the Draw-A-Man test become inflated at a rate of 3 points per decade (see The Flynn effect), which means GG would have scored 2.4 points lower if compared to his actual birth cohort, so we must reduce his IQ on this test to about 120 (white norms).
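The two steps above (raw score to deviation IQ, then the Flynn correction) can be sketched as follows, ignoring the small U.S.-vs-white-norms adjustment:

```python
# Convert GG's Draw-A-Man raw score to a deviation IQ using the 1963
# norms quoted above, then apply the Flynn-effect correction.
raw_score = 60      # 73 items, 13 failed
norm_mean = 45.2    # smoothed mean, American fifteen-year-olds, 1963
norm_sd = 9.83

z = (raw_score - norm_mean) / norm_sd   # about 1.51 SD above the mean
iq = 100 + z * 15                        # about 123 (U.S. norms)

# The norms come from a birth cohort ~8 years older than GG's, and
# Draw-A-Man scores inflate about 3 points per decade (Lynn).
flynn_correction = 3 * (8 / 10)          # 2.4 points
adjusted_iq = iq - flynn_correction
print(round(iq), round(adjusted_iq))     # 123 120
```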
Historiometric Composite IQ: 105
Historiometric evidence suggests GG would have scored an IQ equivalent of 89 on a childhood scholastic achievement test, and when his drawing was scored on The Draw-A-Man test, he clocked in at IQ 120.
The Draw-A-Man test correlates between 0.48 and 0.72 with the Stanford-Binet (Harris, 1963, pg 96). Assuming it also correlates about 0.6 with measures of scholastic achievement, which are essentially IQ tests in populations with similar schooling, then someone with a scholastic IQ of 89 and a Draw-A-Man IQ of 120 would have a composite IQ of 105 (white norms). This is consistent with both the demographic prediction and with the opinion of the psychologist who interviewed him; however, given the speculative nature of historiometrics, combined with the mediocre validity of the Draw-A-Man test, this should only be considered a crude estimate.
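The composite is presumably computed the standard way: sum the two standardized scores and divide by the standard deviation of that sum, which for two tests correlating r is sqrt(2 + 2r). A sketch, using the scholastic IQ of 89 and Draw-A-Man IQ of 120 from above with the assumed 0.6 intercorrelation:

```python
import math

# Composite IQ from two correlated tests: standardize each score,
# sum the z-scores, and divide by the SD of that sum. For two tests
# correlating r, Var(z1 + z2) = 2 + 2r, so the SD is sqrt(2 + 2r).
def composite_iq(iq1, iq2, r, mean=100.0, sd=15.0):
    z1 = (iq1 - mean) / sd
    z2 = (iq2 - mean) / sd
    z_composite = (z1 + z2) / math.sqrt(2 + 2 * r)
    return mean + z_composite * sd

print(round(composite_iq(89, 120, 0.6)))  # 105
```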
So I turned on Comedy Central’s Rob Lowe roast, expecting to see Lowe get roasted, but instead all of the jokes were aimed at Ann Coulter, who for some strange reason was attending.
I was watching with a woman who hated Ann Coulter, but by the end of the evening she felt sorry for her.
Coulter seemed to be making the same mistake her hero Trump made when he was roasted by President Obama: She didn’t laugh!
By laughing, you make yourself in on the joke, but by just sitting there smiling and staring nervously, you are merely the butt of the joke. At one point one of the comics mocking Ann turned to her and said something like “don’t stare at me with that freaky bitch face,” causing the audience to explode into more cruel laughter at Ann.
When Peyton Manning was mercilessly mocked for his freakishly huge head, Manning’s brain size gave him the intelligence to adapt, slapping his knees laughing, but Ann just sat there all night with that creepy smile.
Knowing Ann’s obsession with Ivy League elitism, she was probably thinking the whole thing was beneath her. Who cares if these low SAT score state college losers mock me, she probably thought.
When it was finally Ann’s turn to take the stage, you expected her to turn the tables on her tormenters, but sadly her one liners seemed to bomb, and she was booed by the audience and heckled by the comics.
Because of Coulter’s anti-immigration views, many of the loudest laughs at Coulter’s expense came from Hispanics in the audience. This is yet another example of ethnic genetic interests.
Part of the problem was probably Coulter’s arrogance. The comedy writers had given her material, but she probably arrogantly thought, “I don’t need low SAT score state school losers to write jokes for me. I will write my own jokes.”
Although she did have one great line: “Welcome to the Ann Coulter Roast with Rob Lowe.”
But one of the advantages of being an Ivy League lawyer, is no matter how badly you screw up, people assume it must have been part of your brilliant master plan. Some are suggesting that she deliberately bombed as a way of getting more publicity.
In honor of Labor Day, I wanted to write a quick post on Marxism. I’m not anti-Marxist; in fact I’ve endorsed ONLY Marxists for President of the United States on this blog (Bernie Sanders and Jill Stein), and given the U.S. Supreme Court’s ridiculous Citizens United ruling, the U.S. needs Marxists now more than ever. But even though Marxists probably tend to be quite smart, given the correlation between IQ and liberalism and the difficulty of reading Marx, there are two ways in which Marxists seem clueless.
Marxists assume that enormous economic inequality is, in and of itself, proof that the market is rigged. This ignores the fact that there is enormous inequality in human productivity. For example, a member of Prometheus brilliantly noted that because the human mind operates in parallel, complex learning and problem solving speed doubles every 5 or 10 IQ points. What that means is that in complex jobs, we should expect an IQ 170 to be up to 15,625 times more productive than an IQ 100. Further, if the IQ 170 is ten times more motivated than the IQ 100, he becomes perhaps 156,250 times more productive than the IQ 100.
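A minimal sketch of that compounding arithmetic, assuming exactly one doubling per 5 IQ points (a slightly slower doubling rate, around one per 5.03 points, gives the 15,625 figure quoted above):

```python
# Productivity compounding with IQ, assuming it doubles every 5 points
# (the faster end of the "every 5 or 10 points" claim above).
def productivity_multiplier(iq_gap, points_per_doubling=5):
    return 2 ** (iq_gap / points_per_doubling)

gap = 170 - 100                      # IQ 170 vs IQ 100
mult = productivity_multiplier(gap)  # 2**14 = 16,384x
print(mult)
print(mult * 10)                     # times a 10x motivation difference
```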
The other factor that Marxists don’t seem to get is the role of technology in creating enormous inequality. In the distant past, a writer would take years to write only one book, which would severely diminish his productivity, but with the advent of the printing press, writers can produce MILLIONS of copies of their books. So you have this huge divide between those whose work can be multiplied a million fold by technology, and those whose work can only be done once per unit of effort. This divide seems most unfair when we compare dumb athletes making millions entertaining sports fans to brilliant doctors who make only six figures saving lives.
But what people don’t get is that a brilliant doctor will save maybe five lives a year, while thanks to television, the dumb athlete is entertaining TENS OF MILLIONS of people a year: a trivial service multiplied by tens of millions is indeed worth more than a valuable service for only five people. So in a very objective sense, the dumb athlete deserves more money than the brilliant doctor.
When you combine the fact that complex problem solving speed doubles every 5-10 IQ points, and then gets multiplied by differences in motivation and the use of technologies like the printing press, we should expect unbelievably large differences in wealth and income between the rich and the poor, even if everyone were playing fair (which they’re not). Yes, the system is rigged, but mere inequality doesn’t prove anything; a truly fair system might result in even more inequality!
But at the same time, the athlete did not invent the television and the writer did not invent the printing press, nor does he enforce the arbitrary intellectual property laws that allow him to monopolize all the profits from reprints of his work. All success is the product of both the individual and the society in which he lives, which is why I don’t object to a 50% tax rate for all who can afford it. In theory I would even support a 50% tax rate on investment income, but that’s stupid because the government actually collects more tax dollars when they keep the capital gains tax low because more rich people then invest.
I also support a 50% inheritance tax. Some object to this because they’ve already been taxed 50% on their income, so everything they have left at death should be tax free. However I don’t see the inheritance tax as a tax on the dead, I see it as a tax on the person who inherits the money. If the tax rate on earning a million dollars in 2016 is for example 50%, why should the person who didn’t even earn his million in 2016, but was given it because his father died, be spared that 50% tax rate?
What should be done with all those tax dollars? Above all, I support Charles Murray’s idea of a negative tax for the relatively poor that would replace the welfare state and income transfer programs like Social Security, Medicare, Medicaid, Obamacare, etc. (though a certain percent would have to be earmarked for health costs, as Murray was reluctantly persuaded).
I do NOT support a minimum wage. If a consenting adult is willing to work for less than a penny an hour, the government has no right to prevent it. A minimum wage unfairly places the burden of helping the poor on job creators, rather than distributing it equally among all tax payers, and with a negative tax for the relatively poor, it becomes redundant.
It also destroys jobs. Cashiers are being replaced by automated checkout machines, and McDonald’s has introduced automated ordering machines, though they say the new gourmet burgers you can order on them create new jobs for chefs.
Many non-scientists have a great interest in heritability but lack the science education and/or cognitive ability to understand modern techniques like Genome-wide Complex Trait Analysis (GCTA), so this post is a quick attempt to explain it. Full disclosure: I have virtually no formal science training beyond high school, so this is just an oversimplified explanation.
GCTA gives a measure of the squared correlation between additive genotype and phenotype. The reason it’s so confusing is that you can’t directly correlate a phenotype with a genotype if you haven’t found the genes that code for that phenotype, and thus you can’t determine if someone is genetically high on a given trait.
So for example, you can’t determine if someone’s genetic IQ matches their actual IQ if you don’t know whether they have the genes for IQ. Since a correlation, by definition, measures how closely the rank orders of two variables (i.e. genetic IQ and actual IQ) agree, it can’t be directly calculated if one of said variables (i.e. genetic IQ) can’t be ranked. It would be like trying to calculate the correlation between height and weight when all the weights were reported in a language you didn’t speak.
To sidestep this problem, GCTA was invented by a scientist of East Asian heritage. In GCTA, instead of ranking everyone in your sample from highest to lowest on each trait, you simply randomly assign people to pairs, and for each pair, calculate the genetic distance and the phenotype distance. So for example, if the people who differ by 100 single nucleotide polymorphisms (SNPs), on average, differ by one standard deviation in IQ, and if people who differ by one standard deviation in IQ differ, on average, by 39 SNPs, then perhaps it can be inferred that (in this sample) the correlation between genetic IQ and actual IQ is whatever number when squared and multiplied by 100, equals 39.
That number is 0.62.
This is because in a bivariate normal distribution, the slope of the standardized regression line equals the correlation between two variables, so if a genetic difference of 100 SNPs regresses to a one standard deviation difference in IQ, then one standard deviation must be only 62% as extreme as 100 SNPs and if a one standard deviation difference in IQ regresses to a 39 SNP difference, then 39 must be only 62% as extreme as one standard deviation.
Once we have the correlation of, say, 0.62 between additive genotype and phenotype, we square it to get the amount of variation explained, which in this example would be 0.39 (the real number is probably much higher, and even higher still for broad-sense heritability).
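The toy inference above can be written out in a few lines (the 100-SNP and 39-SNP figures are the illustrative numbers used in this example, not real data):

```python
import math

# Toy version of the GCTA logic above: going from a 100-SNP difference
# to its expected IQ difference, and from that IQ difference back to an
# expected SNP difference, regresses toward the mean by r each way.
# So 100 SNPs -> ... -> 39 SNPs implies r * r = 39/100.
snp_diff = 100        # pairs differing by 100 SNPs differ by ~1 SD of IQ
snp_diff_back = 39    # pairs differing by 1 SD of IQ differ by ~39 SNPs

r_squared = snp_diff_back / snp_diff   # variance explained: 0.39
r = math.sqrt(r_squared)               # genotype-phenotype correlation

print(round(r, 2))          # 0.62
print(round(r_squared, 2))  # 0.39
```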
Of course what very few people realize is that heritability is technically NOT the percentage of the phenotypic variation explained by genes, it’s the percentage explained by genes when environment is held constant or allowed to vary randomly.
Recently commenter Trumpocalypse (aka Mugabe), also commented on GCTA:
if it could be shown that the joint distribution, f, of some measure of genetic distance between individuals, d, and phenotypic difference, p, has a linear conditional mean (that is, f(d|p) is a straight line), then it would be possible to avoid all the problems of twin studies and the shared womb of twins…
the conditional mean could simply be extrapolated to d = 0. and from the mean difference, p, at d = 0, the heritability could be calculated easily.
furthermore the perfect-ness of the measure of genetic distance would be irrelevant. the joint probability density function would “include” this error. so with a large enough sample the extrapolation would have very little error.
this is essentially what GCTA is.
but whether the joint distribution is bivariate normal or some other distribution with linear conditional mean…i don’t know.
Here is some genetic distance data on nine major human populations:
According to Richard Lynn’s controversial meta-analysis of these nine genetic clusters, Africans average IQ 67, Non-European Caucasoids average 84, European Caucasoids average 99, Northeast Asians average 105, Arctic Northeast Asians average 91, Amerindians average 86, Southeast Asians average 87, Pacific Islanders average 85, and New Guineans and Australians average 62.
Although this is group data, not individual data, it would be interesting to compare group genetic distance to group IQ difference. If, for example, I knew both the average IQ difference between two random humans from anywhere in the World, and the Fst distance between two random individuals from anywhere in the World, I think I could probably use my crude understanding of GCTA to estimate the individual-level IQ phenotype-genotype correlation of the entire human species from Sforza’s and Lynn’s group-level data. Squaring this correlation might be a good proxy for IQ’s Worldwide heritability.
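A sketch of how that exercise might look: treat each pair of groups like a GCTA pair, and correlate pairwise genetic (Fst) distance with pairwise IQ difference. The numbers below are MADE UP for illustration; they are not Cavalli-Sforza’s actual Fst matrix or Lynn’s real pairwise IQ gaps.

```python
import math

# Hypothetical illustration: regress pairwise group IQ differences on
# pairwise Fst distances and read off the correlation. Placeholder
# data only -- NOT Cavalli-Sforza's or Lynn's actual figures.
pairs = [
    (0.03, 6),    # (Fst distance, absolute IQ difference) per group pair
    (0.10, 15),
    (0.17, 21),
    (0.20, 32),
]

def standardize(xs):
    m = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return [(x - m) / sd for x in xs]

zf = standardize([p[0] for p in pairs])
zq = standardize([p[1] for p in pairs])
r = sum(a * b for a, b in zip(zf, zq)) / len(zf)  # Pearson correlation

print(round(r, 2), round(r ** 2, 2))  # correlation and its square
```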
On the other hand, the fact that Sforza intentionally tried to use non-selected genes to calculate genetic distance (since these mutate at a regular rate creating a reliable molecular clock for splitting off dates) might make the exercise pointless.