Please post all off-topic comments for the week in this thread. They will not be posted in the main articles.
I once saw a clip from a very old episode of the soap opera The Young and the Restless that was so powerful I never forgot it.
A greedy landlord planned to bulldoze some low-income apartments to make room for more expensive ones. As a result, a bunch of poor senior citizens were being forced to move out.
One of them came to his office.
“You think you got it all figured out,” she said to him. “Well, one of these days, someone smarter than you are is gonna come along and give you a taste of your own medicine, and skin you right down to the bone! Then you’ll be poor like the rest of us. Only, you’ll be all alone. At least we have each other!”
The rich landlord looked slightly frightened by the old lady’s prophecy.
“Because if there’s two things you learn growing old,” she said as she walked out of his office, “one is nobody ever has anything all figured out, and the other is…”
She grabbed the cardboard cut-out of the fancy buildings he was planning to build and weakly broke it in half over her knee.
[Please post all off-topic comments in the most recent open thread. They will not be posted here]
One reason people deny HBD is that they don’t believe intelligence has evolved much, or at all, since modern humans left Africa tens of thousands of years ago and diverged into the races we know today. Earlier today, commenter Swank defended this view:
…there are GOOD REASONS to believe intelligence hasn’t ‘evolved’ since then…
with pleiotropic and polygenic traits, the mutation rate is SLOW
in fact, the mutations in humanity’s forebears that changed intelligence took several hundred thousand/millions of years to occur at a time. and it just seems like they were mutations that quickly went to fixation, altering brain size/cranial capacity/whatever.
homo sapiens have not been around long.
so, with regard to skin color, eye color, nose shape, whatever…..
these things are likely controlled by not so many genes and subject to new mutations adding genetic variance….and we see diversity here.
other traits? not so much.
There have been scientists who’ve agreed with Swank about intelligence evolving slowly. For example, geneticist Spencer Wells says in his TED talk (see the 14-minute mark of the video below) that from 1 million years ago to about 65,000 years ago, there was a long period of cultural stasis during which stone tools and other artifacts showed virtually no improvement. Then, suddenly, after 65,000 years ago, the archeological record shows radical improvement. Wells believes this was because fully complex language began to appear around that time.
Scholar Richard Klein has been making a similar argument for decades, but Klein believes this major genomic change occurred 40 or 50 thousand years ago.
Both Wells and Klein argued that the change occurred in Africa and caused humans to expand beyond Africa, but they apparently disagree on when humans left Africa, which is understandable given the uncertainties in dating such ancient events. (Increasingly, scientists believe that behavioral modernity emerged much more gradually than Wells or Klein implied, but there does seem to be a consensus that technology was largely static from over 1 million years ago until less than 300 kya.)
Like Swank, Klein seems to think it was the last major cognitive change, stating:
What happened 40,000 or 50,000 years ago was the last major change in the genotype. At least the last major biological change. Evolution continues, but the evolution that’s involved in making us capable of wielding this vast variety of cultures–that probably stopped around 40,000 or 50,000 years ago and there’s been no essential change since.
Forget about the construction of the first cities or the introduction of the internal combustion engine. The revolution that made the biggest difference occurred on the savanna of East Africa roughly 45,000 years ago, Klein and others maintain.
There has been no biological change in humans in 40,000 or 50,000 years. Everything we call culture and civilization we’ve built with the same body and brain.
However, scholar Greg Cochran is having none of it, stating:
I can’t think of any genuine reasons for thinking that human evolution had stopped. Some people seem to have thought that 40,000 years was small potatoes compared to the time since the chimp-human split (five or six million years), so that there wouldn’t have been much change over that time period. Of course this ignores the massive ecological changes that humans experienced over the last 40 millennia, and the resulting selective pressures. Others seem to have thought that newly clever humans instantly came up with a technological fix for any problem that arose, which would have removed the selective pressure associated with the problem. Face it, we’re not that smart. People suffered from malaria for thousands of years before figuring out that it was transmitted by mosquitoes (in 1897, by Ronald Ross) — and we haven’t knocked it out yet.
And often when we did solve problems, they didn’t stay solved. For example, whenever we came up with better methods of food production, population increased until people were hungry again. At that point you see selection for metabolic efficiency, for the ability to digest newly available foods such as milk, etc…
Speculating on why so many scientists believe there’s been little important recent evolution, Cochran states:
…Certainly some were (are) heavily invested in a vision of human sameness. I’m not sure how much of that is driven by practical payoffs: ethnic and racial differences continue to exist whether people “believe” in them or not…
…On the other hand, some certainly worry about political fallout of possible discoveries, about the impact on their NIH funding, etc….
… I think that some are genuinely confused. This is easier than you might think since very few biologists or human-science types know much about genetics and natural selection. Others simply don’t know much about human variation, while others are probably just spreading ink.
Cranial capacity has shrunk 10% in 15,000 years: that’s the fastest rate of change ever seen in the human fossil record, by far. Consider the number of genetic differences between humans and chimpanzees: they occurred over about six million years, from which you can determine the average rate of change. The number of genes that are apparently being replaced by new versions is much larger than you would expect from that long-term rate — something like 100 times larger.
Of course Cochran is assuming the decline in cranial capacity is genetic. Richard Lynn argued that, like the Holocene decline in height, it was caused by malnutrition/disease, and that its (full?) recovery in the 20th century partly caused the Flynn effect.
[Please post all off-topic comments in the most recent open thread. They will not be posted here]
In my previous article, I described two studies showing Wechsler stability coefficients of 0.89 and 0.9 in young and older people tested 14 and 20 years apart, respectively.
In the interest of full disclosure, I found a study of Wechsler IQ that did not show such sky-high long-term stability: 24 men and 24 women aged 39 to 44 were given the WAIS circa 1969 after having already taken it in 1956. The correlation was 0.73.
Of course the sample suffered from range restriction, with an IQ variability much smaller than the general U.S. population’s. This probably explains why the stability was only high instead of sky-high.
There was also a study, cited by Ian Deary et al., that found a 0.78 correlation between Wechsler IQ measured at age 2 and at age 15 (incredible considering IQ is thought to be unstable before age 10 and especially before age 6), though oddly, only a 0.47 correlation between Wechsler IQ at age 9 and at age 15.
So for all studies I could find, comparing Wechsler IQ in people tested at least 13 years apart, the correlations are 0.73, 0.78, 0.88, and 0.9 with a median of 0.83, which is probably an underestimate given that most studies suffer from range restriction.
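To make the arithmetic explicit, here is a minimal sketch computing that median and illustrating how much a plausible amount of range restriction can depress a correlation. The restricted standard deviation of 10 is an assumed value for illustration only, not a figure reported by any of these studies.

```python
from statistics import median
from math import sqrt

# The four long-term Wechsler stability coefficients mentioned above
correlations = [0.73, 0.78, 0.88, 0.9]
print(round(median(correlations), 2))  # 0.83

def correct_range_restriction(r, sd_restricted, sd_population=15.0):
    """Standard (Thorndike Case 2) correction for direct range restriction."""
    k = sd_population / sd_restricted
    return r * k / sqrt(1 - r**2 + (r * k)**2)

# If the 0.73 sample had an IQ standard deviation of only 10 instead of the
# population's 15, the corrected correlation would be roughly 0.85:
print(round(correct_range_restriction(0.73, 10), 2))
```

So even a modest amount of range restriction is enough to turn a merely high stability coefficient into a sky-high one.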
If anyone knows of any other studies of long-term Wechsler IQ stability, please let me know in the comments.
[Please post all off-topic comments in the most recent open thread. They will not be posted here]
For all the talk we hear about neuroplasticity, it seems IQ, at least as measured by the Wechsler, is incredibly stable. According to a study by Erik Lykke Mortensen and his colleagues, there was an astonishing 0.89 correlation between WISC full-scale IQ measured at age 9.5 and WAIS full-scale IQ measured at age 23.5 in a sample of 26 low birth-weight kids. That’s absolutely colossal. To put that in perspective, when a sample of 16-year-olds (n = 80) took the WAIS and WISC within an interval of one to six weeks, the correlation was 0.88 (WAIS-R manual, p. 48).
In other words, WISC IQ measured at age 9.5 predicts a young adult’s current WAIS IQ about as well as his WISC IQ measured a few weeks ago!
Wechsler IQ appears to be much more stable than even height! For example, the correlation between adult height and height at age 13 was 0.7 in a sample of Copenhagen men.
And given the moderate to high correlation IQ has with everything from lifetime income to occupational status, its long-term stability is even more compelling. A psychologist can give a 9-year-old an hour’s worth of silly games involving cartoon pictures, jigsaw puzzles, blocks, and funny riddles, and from that predict the trajectory of his life better than his teachers and parents can. They are modern-day prophets. The cartoon drawings of black children on the WISC-R are their tarot cards. Indeed, the South Asian woman who gave me the WISC-R at age 12 even dressed like a fortune teller.
And yet they’re also scientists. Intelligence researchers were the ones who invented correlation, factor analysis, and other techniques scientists in all fields depend on, and they invented IQ tests, one of the most stable, predictive, and fascinating measures science has ever seen.
Of course you can dismiss the study I cited above because the sample was not large or representative enough, but Steve Hsu independently reported similar data from other tests:
From fig 4.7 in Eysenck‘s Structure and Measurement of Intelligence. This is using data in which the IQ was tested *three times* over the interval listed and the results averaged. A single measurement at age 5 would probably do worse than what is listed below. Unfortunately there are only 61 kids in the study.
The results do suggest that g is fixed pretty early and the challenge is actually in the measuring of it as opposed to secular changes that occur as the child grows up. That is consistent with the Fagan et al. paper cited above. But it doesn’t remove the uncertainty that a parent has over the eventual IQ of their kid when he/she is only 5 years old.
Another study of 141 adults found a near perfect 0.9 correlation between WAIS full-scale IQ measured at age 50 and 70 (a 20-year interval!).
Of course none of this conclusively proves IQ is as immutable as height or as solid as a rock. It could be that society is forcing stability on the brain by giving mental stimulation only to those who show promise young.
[Please place all off-topic comments for the week in this thread. They will not be posted in the main article]
Here’s another video about the Ukrainian girl raised by dogs:
An extreme and unethical heritability study would be if dozens of twin pairs were separated at birth and one twin randomly assigned to be raised by dogs and the other twin raised by Ivy League billionaires. Then at age 30, the Ivy League billionaires would be tested on the WAIS-IV and the dog raised twins could be tested on a dog intelligence test (see video below).
The IQ gap between each twin and her co-twin would be colossal but it would be fascinating if a positive correlation was nonetheless observed. It would also be interesting to see if a human raised by dogs would score substantially higher than dogs on a dog intelligence test. Obviously they would if they were rescued and socialized, but what about before then?
Speaking of IQ, here’s Jordan Peterson discussing it again:
[Please place all off-topic comments in the most recent open thread. They will not be published here]
Thanks to Louis Lello and his colleagues, we can now predict a person’s height just from his DNA, and these height predictors correlate about 0.64 with actual within-sex height.
Of course this correlation is based on a UK sample, and as Mug of Pee has long argued, when the environment is that narrow you can’t be sure the genome is actually causing the height, or whether some genomes just grow tall in particular countries for local reasons but would not have a height advantage elsewhere (see reaction norms vs. independent genetic effects).
Thus I was heartened to learn that this genomic predictor was tested in environments as diverse as South Asia, China and Africa. One of the study’s authors, Steve Hsu, writes:
Note, despite the reduction in power our predictor still captures more height variance than any other existing model for S. Asians, Chinese, Africans, etc.
So the predictive power falls below 0.64 as we move far from the country the predictor was created in, but “still captures more height variance than any other existing model”.
So how well does any other existing model do? In their paper they write:
Recent studies using data from the interim release of the UKBB reported prediction correlations of about 0.5 for human height using roughly 100K individuals in the training
So this one genomic predictor correlates at least 0.5 with within-sex height all over the world, suggesting a truly causal relationship. Squaring the correlation tells us that within-sex height heritability (in the most meaningful causal sense of the term) is at least 0.25.
But why only 0.25, when twin studies suggest heritabilities of roughly triple that?
One possibility is all the flaws in twin studies, but another possibility is that common additive genetic variants only account for a small fraction of the heritability of height and other complex polygenic traits like IQ. To find rare genetic variants you must look at the entire genome, but considering how expensive that is and how rare the rare variants are, few are willing to spend the money. In addition, there may be non-additive gene-gene interactions, and if these are sufficiently complex, they may never be found. However, research on cattle suggests that common variants capture most of the genetic variance:
Common sequence variants captured 83%, 77%, 76% and 84% of the total genetic variance for fat, milk, and protein yields and fertility, respectively
If human height is anything like these cattle phenotypes, then maybe we’ve already found 80% of it, suggesting genomic predictors could go from explaining 25% of the within-sex height variance to about 31%, implying a predictive correlation of 0.56.
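Spelling out that arithmetic (the 80% figure is simply borrowed from the cattle results and only assumed to carry over to human height):

```python
from math import sqrt

r_worldwide = 0.5                 # predictive correlation that seems to hold across populations
h2_lower_bound = r_worldwide ** 2
print(h2_lower_bound)             # 0.25 of within-sex height variance

fraction_found = 0.8              # assumed share of the additive variance already captured
h2_if_complete = h2_lower_bound / fraction_found
print(round(h2_if_complete, 2))          # ~0.31
print(round(sqrt(h2_if_complete), 2))    # ~0.56 implied predictive correlation
```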
And if genomic predictions can achieve that much precision for within-sex height, they can likely do the same for IQ once they genotype a sufficiently large sample (one million people) taking a sufficiently valid test (the WAIS).
[Please post all off-topic comments in the most recent open thread. They will not be posted here]
The famous Bouchard twin study found a potent 0.75 IQ correlation for MZ twins reared apart. Note that the phenotype correlation for MZ twins reared apart is a direct estimate of broad-sense heritability (H²).
But even MZ twins raised apart may spend their early years together, grow up in similar homes, have contact in later life, and be self-selected for similarity. For this reason, back in August 2014, commenter Mug of Pee cited a little-known critic of twin studies named Susan Farber, claiming her work implies a heritability of only 20%, and linked to a New York Times article about her research.
Clicking on the above NY Times article, the source for the 20% figure seems to be this paragraph (emphasis mine):
Defining ”reared apart” poses another great difficulty for researchers of twins. Different studies have used different criteria – such as age of separation, frequency of encounters between the twins or knowledge of the other twin’s existence. Dr. Farber, in her original and synthesizing role, has turned this confusion into an advantage. She devised a mathematical index with which she could measure the degree of separateness and used this information to correct the correlations found between the I.Q. test scores of twins reared separately. So corrected, the calculated correlation between twins’ I.Q. scores fell from a modest degree of within-pair similarity (accounting for about one-half of the variance) to a much lower degree of similarity (accounting for one-fifth of the variance). In other words, on the average, the more separately the twins were reared, the greater the difference between their I.Q. scores.
Presumably, the statement “one-fifth of the variance” is where Mug of Pee got his 20% heritability statistic, but the 20% actually seems to be the square of the correlation between MZ twins reared apart (i.e., the percentage of variance explained). Taking the square root of 20% suggests that the corrected IQ correlation for MZ twins reared apart is about 0.45.
This is much smaller than the 0.75 heritability found in the Bouchard study, but it’s still pretty high when you consider that heritability itself is the square of the genotype-phenotype correlation. Thus taking the square root of 0.45 implies a 0.67 correlation between genotype and IQ (among people raised in random homes).
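For clarity, the two square roots above work out as follows:

```python
from math import sqrt

variance_explained = 0.20            # Farber's corrected figure ("one-fifth of the variance")
r_mz_apart = sqrt(variance_explained)
print(round(r_mz_apart, 2))          # ~0.45 corrected MZ-apart IQ correlation

# Treating that correlation as a broad-sense heritability estimate, the implied
# genotype-phenotype correlation is its square root:
print(round(sqrt(r_mz_apart), 2))    # ~0.67
```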
So even after one of the biggest critics of twin studies corrects the data in a very biased way (according to her critics), genotype still predicts IQ about as accurately as SAT scores do (at least in countries like the U.S.).
[Please post all off-topic comments in the most recent open thread. They will not be posted here]
One of the most extreme cases from the annals of IQ research is Isabelle:
Isabelle was discovered living in a darkened room with her deaf-mute mother as her only contact.
When Isabelle was discovered she was almost seven years old and had no sense of language.
She had been deprived of learning how to speak because of her mother being both deaf and mute.
As a result, when authorities found her they believed that she was also deaf and mute like her mother, because she could only make noises.
This was proven wrong when she started to speak after receiving intense training.
When Isabelle was initially tested, at almost seven years old, her mental age was found to be about 19 months.
Within two months of being trained, Isabelle was putting together logical sentences.
Within a year she was already learning how to read.
While her IQ score was extremely low when she was found, by almost nine years old she had completely caught up with her peers and had a normal IQ.
The case is fascinating because when Isabelle was tested in 1938, being almost seven, she likely had a far bigger brain than the average 19-month-old, yet scored the same as the average 19-month-old.
Commenter Race Realist argues that IQ tests measure exposure to the culture, and he’s partly right because Isabelle’s lack of culture caused her development to be extremely delayed.
But what’s interesting is that it takes the average baby 19 months of culture to acquire the same level of skill that Isabelle acquired almost instantly. This shows that IQ tests are not merely measuring cultural exposure, but the brain’s physical development, and being almost seven, Isabelle likely had a much bigger and more complex brain than the average baby. Having the physical brain of a seven-year-old, Isabelle was able to learn in a few months what the smaller-brained average baby learns in 19 months, and in just two years she acquired roughly 7.5 years of childhood intellectual development.
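Here is a rough ratio-IQ sketch of that trajectory. The ages are approximations taken from the account above, and the old mental-age/chronological-age formula is used purely for illustration, not because it was the test actually administered:

```python
# Ratio IQ = mental age / chronological age * 100 (the old Stanford-Binet convention)
mental_age_at_discovery = 19 / 12        # ~1.6 years
chronological_age_at_discovery = 6.8     # "almost seven"
print(round(100 * mental_age_at_discovery / chronological_age_at_discovery))  # ~23

# Roughly two years later she had caught up with her peers:
mental_age_at_nine = chronological_age_at_nine = 9.0
print(round(100 * mental_age_at_nine / chronological_age_at_nine))  # 100

gain = mental_age_at_nine - mental_age_at_discovery
print(round(gain, 1))  # ~7.4 years of mental-age growth in roughly two years
```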
Once she had caught up to her chronological peers at age 9, her progress became completely average because her neurological development no longer exceeded her intellectual development.
She graduated from high school an average student.
The question is: was Isabelle ever less intelligent than her chronological peers, or was the test simply culturally biased against a girl who had no language or culture for the first 6.5 years of life? One wonders whether, on a more culture-fair test, like the one the crow below is taking, she would have had an average IQ from the moment she was discovered.
Of course it’s worth noting that Isabelle was discovered young. Not all cases of extreme deprivation end so well:
[Please post all off-topic comments in the most recent open thread. They will not be posted here]
I was listening to an autism lecture by the great Simon Baron-Cohen, and around the 49-minute mark someone in the audience asks what Simon thinks of the research showing autism is a slow life history strategy while schizophrenia is a fast life history strategy. Simon is unfamiliar with the research but agrees it sounds plausible.
I’ve come across some fascinating research showing that autism is more common in higher social classes and schizophrenia is more common in lower social classes. In my opinion, this is because the higher social classes tend to be more nerdy (K-selected) and the lower social classes tend to be more cool (r-selected). The higher classes are nerdy in that they are more educated, more monogamous, more scrawny, and less sexually active. By contrast, the lower classes are “cool” because they are more blue collar, more muscular, more likely to get arrested, and more into sex, drugs and rock ’n’ roll.
Of course I’m not the first to speculate that autism might be linked to slow life history. There was a 2001 web page arguing that autism may have been inherited from Neanderthals:
Under harsh conditions it’s advantageous to mature and grow slower. This means individuals can survive on fewer resources. A consequence of slower maturing is longer life. Jack Cuozzo shows that Neanderthals matured slower than us, and probably got older. Autistic children often develop according to another slower scheme than other children, and may continue to develop into their 30s. It is also believed that a key factor in ADHD might be slower mental maturation. Similar findings exist for schizophrenia.
However, this article did not make my point (made by Simon’s questioner) that schizophrenics have fast life histories; instead it argued they have slow life histories. And I’ve never believed that autism was inherited from Neanderthals, though I have speculated it might partly be an evolutionary adaptation to extreme cold. It may also be an adaptation to civilization, as Philosopher has argued.
Commenters like Race Realist are constantly arguing that IQ tests are pseudoscience, but as Jordan Peterson cleverly noted, if you reject IQ, then you have to reject all of psychology, because IQ is the best-validated construct psychologists have ever come up with.
Race Realist argues that IQ tests are based on circular logic because tests are constructed so that people considered smart score well. While that’s partly true, we’re now at the point where IQ tests can be constructed by wholly objective criteria such as the degree to which test items correlate with the general intelligence factor (g) derived from a factor analysis of a large battery of tests.
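As a toy illustration of that criterion (all numbers below are simulated, not real test data), one can generate a battery of correlated subtests, take the first principal component as a stand-in for g, and rank items by how strongly they correlate with it:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 1000, 8
true_loadings = np.linspace(0.3, 0.8, n_tests)   # assumed true g-loadings of the subtests

g = rng.standard_normal(n_people)
noise = rng.standard_normal((n_people, n_tests)) * np.sqrt(1 - true_loadings**2)
scores = g[:, None] * true_loadings + noise

# First principal component of the subtest correlation matrix as a proxy for g
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(z, rowvar=False))
g_hat = z @ eigvecs[:, -1]                        # component with the largest eigenvalue

g_loadings = [abs(np.corrcoef(g_hat, z[:, j])[0, 1]) for j in range(n_tests)]
print(np.round(g_loadings, 2))  # items with the highest loadings would be retained
```

A real factor analysis of a large battery works the same way in spirit: items are kept or weighted according to their g-loadings rather than anyone’s prior opinion about who is smart.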
If Race Realist thinks IQ tests are circular and pseudoscientific, what does he think of AQ questionnaires (Autism Quotient measures)? These tests have questions like “do you like numbers?” and then report that math majors are more autistic. This seems much more circular to me than IQ tests.
I think the best way to study autism is to avoid the questionnaires and instead just look at the most extreme cases (like Rain Man) that everyone in every culture could immediately agree are autistic. If autism is truly linked to STEM talent (as Simon Baron-Cohen argues) or slow life history (as I’ve argued), it should be evident in the non-autistic siblings and parents of people like Rain Man, who would regress to a milder (sub-clinical) variant of the condition. On the other hand, if Rain Man’s relatives are just as likely to be bartenders as engineers, then autism is simply a disorder, and not a pathological extreme of normal (adaptive) variation.
Please post all off-topic comments for the week in this thread. They will not be posted in the main articles.
A few random thoughts:
Where’s the bartender?
The other night I walked into a bar and there was no bartender in sight to serve me. Then a slightly overweight young blonde woman stood up to go to the ladies’ room.
“I thought you were the bartender,” I told her, “’cause you stood up just as I entered. I thought you were gonna get me a drink.”
“And I’m blonde too” she added with a laugh.
At the time I thought she meant blondes are sexy and thus more likely to get hired as bartenders, so she was flattered that I thought she was the bartender.
Then the actual bartender appeared: she was a 300-lb blonde woman…
Friday the 13th
So Friday was Friday the 13th, so naturally I went out to celebrate. But sadly a lot of men seem to get off on being very abusive to women. I saw a drunk white guy being abusive towards an Arctic woman. He had told her she had no choice but to stay at his apartment because no one else wanted her, and then he took her iPhone and threw it in the snow.
A bunch of us were watching from the window inside the bar, debating what to do, when finally I said “I’ll go talk to him.”
“No, don’t do that,” warned a co-worker. “He’s psycho.”
I smiled, and like Mrs. Voorhees from the original Friday the 13th, replied, “I’m not afraid.”
When I went out there, I asked the Arctic woman, “Is he bothering you?”
When she said “yes” I decided to pull the Dr. Huxtable father-figure shtick, even though he was only about five years younger than me (that’s how drunk I was).
“Son, I think you have a problem,” I told him gently, not wanting to anger someone so aggressive. “And you need to talk to someone about it. Maybe talk to a friend or counsellor of some kind. And if none of that works, you can always come here,” I said, referring to the bar.
Another Jordan Peterson video
Talks about how there are no jobs for people below 85 IQ and claims even a lot of lawyers will be out of work soon.
The most interesting thing he says is that people below IQ 80 take tens of hours to learn how to do a job most of his psychology students could learn in 10 minutes. This is consistent with the theory once proposed by a member of Prometheus that complex learning/problem-solving speed doubles every 5 or 10 IQ points.
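A quick back-of-the-envelope check of how that doubling theory lines up with Peterson’s example (the IQ of ~110 assumed for his psychology students is my guess, not anything he states):

```python
# If learning speed doubles every 5 or 10 IQ points, how much faster would a
# psychology student (assumed IQ ~110) learn than someone at IQ 80?
for points_per_doubling in (5, 10):
    ratio = 2 ** ((110 - 80) / points_per_doubling)
    print(points_per_doubling, round(ratio))  # 5 -> 64x, 10 -> 8x

# "Tens of hours" vs. 10 minutes implies a ratio somewhere around 60-300x,
# which fits the faster end (doubling every ~5 points) better than the slower end.
```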
On the other hand, why does research show that total vocabulary is normally distributed? High verbal IQ people don’t have vocabs orders of magnitude greater than low verbal IQ people. Maybe there just aren’t enough common words for such a pattern to emerge?