Is human evolution speeding up or slowing down?


When I was growing up, everything was nice and simple. Australopithecus evolved into Homo habilis, who evolved into Homo erectus, and about 200,000 years ago Homo erectus evolved into Anatomically Modern Humans, who evolved into Behaviorally Modern Humans by the Upper Paleolithic. The End.

Stephen Jay Gould famously stated:

There’s been no biological change in humans in 40,000 or 50,000 years. Everything we call culture and civilization we’ve built with the same body and brain.

Or to quote Karl Marx: “Man creates himself.”

Even people like J.P. Rushton, who believed in racial differences in intelligence, believed those differences were ancient and predated the Holocene.

I love Gould’s idea of evolutionary stagnation because it allows us to study cultural evolution holding genetics constant. Even though Gould rejected the idea of man as the evolutionary pinnacle, he nonetheless agreed that we had reached a point where we transcended the laws of nature. Cultural evolution had replaced biological evolution. We no longer had to adapt genetically because we had the intelligence to adapt behaviorally.

Unfortunately, a new crop of discoveries is challenging this beautiful notion. The scientists behind them see culture not as a replacement for genetic evolution, but as something that accelerates it.

Brian Mattmiller writes in 2007:

…a team led by UW–Madison anthropologist John Hawks estimates that positive selection just in the past 5,000 years alone — around the period of the Stone Age — has occurred at a rate roughly 100 times higher than any other period of human evolution. Many of the new genetic adjustments are occurring around changes in the human diet brought on by the advent of agriculture, and resistance to epidemic diseases that became major killers after the growth of human civilizations…

…The findings may lead to a very broad rethinking of human evolution, Hawks says, especially in the view that modern culture has essentially relaxed the need for physical genetic changes in humans to improve survival. Adds Hawks: “We are more different genetically from people living 5,000 years ago than they were different from Neanderthals.”

Gould must be spinning in his grave over that last quote.

However, Steve Hsu writes:

Roughly speaking, modern humans differ from chimpanzees with probability 0.01 at a particular base in the genome, from neanderthals with probability 0.003, and from each other with probability 0.001 (this final number varies by about 15% depending on ancestral population).

So if random members of different races only differ with probability 0.00085 to 0.00115, and random humans differ from random Neanderthals with probability 0.003, how is it even mathematically possible for people 5,000 years ago to be closer to Neanderthals, when modern races split long before 5,000 years ago? Indeed, the split between Africans and non-Africans occurred about 70,000 years ago, and splits within Africa may be as old as 250,000 years.
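As a sanity check, that arithmetic can be run directly. Below is a minimal sketch using only the per-base probabilities quoted above (genome-wide averages; the ±15% spread is applied to the human-human figure):

```python
# Per-base difference probabilities quoted from Hsu (genome-wide averages)
p_chimp = 0.01         # human vs. chimpanzee
p_neanderthal = 0.003  # human vs. Neanderthal
p_human = 0.001        # human vs. human (varies ~15% by ancestral population)

# Even the most divergent pair of modern humans differs far less than
# any human differs from a Neanderthal:
hi = p_human * 1.15
print(f"max human-human difference: {hi:.5f}")
print(f"human-Neanderthal is {p_neanderthal / hi:.1f}x larger")  # ≈ 2.6x
```

On these genome-wide figures, no population of Homo sapiens could have been closer to Neanderthals than to other humans, which is why Hawks’s quote could only make sense restricted to some subset of the genome.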

But perhaps Steve Hsu is talking about the total genome while Hawks is only talking about the part of the genome that experienced adaptive selection. Still, this raises the question: if people 5,000 years ago were more similar to Neanderthals in such important ways, were they even people? By definition, shouldn’t members of our species be more similar to us than they are to members of another species?

Further confusing the issue, recent headlines claim human evolution is slowing down, at least if mutation rate is any proxy:


Over the past million years or so, the human mutation rate has been slowing down so that significantly fewer new mutations now occur in humans per year than in our closest primate relatives. This is the conclusion of researchers from Aarhus University, Denmark, and Copenhagen Zoo in a new study in which they have found new mutations in chimpanzees, gorillas and orangutans, and compared these with corresponding studies in humans.

[Correction July 11, 2020: a previous version of this article incorrectly dated Stephen Jay Gould’s quote. Thank you commenter RR for pointing out the error]

Estimating the IQs of Epstein & Maxwell


Ghislaine Maxwell was arguably the most powerful woman in the world at certain times, because she likely set honey traps for the world’s most powerful men and may have used the leverage to shape world events to her genetic advantage. The most powerful man in America (the sitting President) has historically had an IQ ranging from 119 (JFK) to 143 (Richard Nixon), with a mean of about 130. So we might expect the most powerful woman to also have an IQ around 130, and since Ashkenazi Americans average 10 IQ points above their white counterparts, let’s bump her up to 140 (though she may be only half Ashkenazi).

She’s probably at least as smart as her lover Jeffrey Epstein. My research suggests self-made decamillionaires average IQs around 118 and self-made billionaires average IQs around 130 (maybe a little less today, since they’ve become so common). Epstein was a self-made centimillionaire, so his IQ was perhaps 124. Maybe add 10 points since he was also Ashkenazi, so perhaps 134. On the other hand, he was an alleged pedophile, and whatever biological error damaged the sexual parts of his brain may have damaged the intellectual part too, but it couldn’t have damaged it too much, since he was teaching calculus at an elite private school while still in his twenties. Below is a terrifying image of the demented young man with a creepy smile:
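The 124 estimate amounts to interpolating IQ linearly against log10 of net worth between the two anchor points. A sketch, hedged heavily: the 118 and 130 anchors are this blog’s own estimates, not published norms:

```python
import math

# Anchor estimates from the text (the blog's own figures, not published norms)
DECAMILLIONAIRE = (math.log10(10_000_000), 118)
BILLIONAIRE = (math.log10(1_000_000_000), 130)

def expected_iq(net_worth: float) -> float:
    """Linearly interpolate expected IQ against log10(net worth)."""
    (x1, y1), (x2, y2) = DECAMILLIONAIRE, BILLIONAIRE
    x = math.log10(net_worth)
    return y1 + (y2 - y1) * (x - x1) / (x2 - x1)

print(expected_iq(100_000_000))  # centimillionaire -> 124.0
```

A centimillionaire sits exactly halfway between the anchors on a log scale, hence the midpoint of 118 and 130.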

He also needed a high IQ to hang out so much with Bill Clinton (whom Phil Donahue called the most verbally skilled man ever to occupy the White House).

Maxwell came from a far higher class background than Epstein. She attended Oxford; her father was a famous Mossad agent and media tycoon, and her mother a respected Holocaust scholar. By contrast, Epstein’s Brooklyn parents were so prole that a lady on his childhood street couldn’t understand how their genes produced such a bright son. Epstein might be a good example of high IQ causing success independently of both social class and credentials (which Epstein lied about to get ahead).

It seems Maxwell needed a pedophile like Epstein to lure some of the world’s most powerful men into partaking in this behavior with him, and as soon as they did, Maxwell had video that she and whatever organization she worked for could use as leverage. Because achieving the highest levels of wealth and power may require both high testosterone and high psychopathy, such men may lack the sexual restraint and morals to respect age of consent laws, and people like Maxwell take advantage by setting honey traps. Avoiding these traps is yet another reason why the rich and powerful need high IQ.

Neural properties of the mind: Part 1 by King meLo


[Note from Pumpkin Person: the following is a guest article and does not necessarily reflect my views]

Introduction

The physical interactions behind cognitive variation are arguably the most studied and elusive aspects of human diversity. Despite the HBD community’s enormous interest in the mind, I find that many of the modern theories it propagates lack conceptual rigor. Within this thesis I will attempt the following: 1) to persuade the audience that my conception of particular mental phenomena is more precise than, or at least endorses, the most epistemically accurate contemporary hypotheses; and 2) to lay out a simple yet reliable framework for future HBDers to base new ideas on, by giving biological explanations of particular psychological phenomena. For this purpose you can treat this article as a short summary of the extensive research into our concept of consciousness. As such, I will not be covering any of it in extreme detail, nor will I be covering every single aspect of the mind, but it will be accompanied by studies and papers that elucidate these concepts further for anyone who is interested.


Philosophy of mind 

First, I think it’s appropriate to cover the philosophical grounds of my views. It should be no surprise that Physicalism/Naturalism is the dominant position among philosophers in that domain (Bourget and Chalmers, 2013). The number is even greater if you consider scientists as philosophers (which they are). These figures are expected, because Physicalism is the most parsimonious explanatory model for how our world works. So what is Physicalism? At the most basic level, Physicalism is the belief that our world is the result of physical laws. I cannot cover the entirety of this debate here; the intricacy of the subjects within this article could span multiple textbooks, and I will delve more into it in future posts. Instead, I’m going to simply rebut what I believe are common fallacies that underlie dualistic thinking.

The arguments I have read tend to follow similar patterns in their reasoning. The one we will discuss first is the intensional fallacy. Dualist arguments that suffer from this flaw are presented in the following format:

P1: X (usually the mind) has property A

P2: Y (usually the brain/body or just physical entities in general) has property B

P3: Leibniz’s Law, which states: necessarily, for anything x and anything y, x is identical to y if and only if for any property x has, y has, and for any property y has, x has.


C1: X is not Y

From the outset this doesn’t seem that fallacious, and some would even argue the intensional fallacy cannot apply here, since Leibniz’s Law depends not on what someone knows but on the properties a target may exhibit. This is incorrect: knowing the different properties an entity may possess by definition requires knowledge, and subsequently a thinker. To see why this argument is ultimately fallacious, observe the following syllogism:

P1: Water is knowable with the unaided eye

P2: H2O is unknowable with the unaided eye

P3: Leibniz’s Law

C1: Water is not H2O

Descartes, Ross, and arguments from the unity of consciousness all tend to commit this fallacy. We could replace the subjects and their properties with those of neon and boron and the conclusion would be true; the problem is that discrepancies between the descriptions of physical and mental states (or any concepts) do not necessarily entail that the two cannot be identical in reference. To make definitive statements on the mind’s characteristics requires an identification of the mechanisms that catalyze such functions. Physicalists have this luxury; Dualists do not. This idea is echoed (actually I’m the echo) by Kant (1781) in the second paralogism of his Critique of Pure Reason (Kitcher, 1990). This brings me to the second fallacy: ad ignorantiam. We know a lot about the mind: Philosophy, Art, History, Literature, Psychology, Neuroscience, etc. What seems to be the crux of the issue is how subjectivity arises from objectivity. I am not charging any particular Dualist argument with this fallacy; instead, I believe it pervades Dualism as a whole. It usually goes:

P1: Despite Physicalism’s explanatory power, it hasn’t explained the phenomenological character of experience.

P2: Since Physicalism cannot explain this particular aspect, it cannot be a tenable model for our world


C1: Dualism is the only tenable position

What makes this argument fallacious is that the proponent is essentially asking you to “prove him wrong”. Phenomenology is a difficult concept to ground in physical structures simply because of the sheer complexity behind it. Donald Davidson even advocated Dual Aspect Monism as a solution to this perceived indeterminism (more on stochasticity later). Despite the debate on the aforementioned subject, if Dualists accept interactionism (which they have to), positing an “ectoplasmic” nature for mental states is pure ad hoc. This brings me to the final fallacy I will discuss: begging the question. If Physicalists can account for these qualities, then any Dualist argument against Physicalism already presupposes the need for Dualism. If Physicalism cannot account for said phenomena, then these criticisms cut both ways. More on Dualism and its presumed fallacies.


Nature of the system

The following sections won’t be new or helpful to anyone versed in the current literature on neuroscience or similar fields; they are simply a reference point for those less knowledgeable on the subject. Usually, when you read about the nervous system, authors describe it with computer and engineering metaphors (Furber and Temple, 2007). This article will do the same, but for conceptual clarity I will stress that the brain is not a computer. Yes, it’s true that computers are the closest thing to a brain that we’ve engineered, but the brain is a biological organ crafted by millions of years of sloppy evolution. As obvious as that is, let it serve as a reminder not to take the metaphors too seriously. Despite my previous views, the brain is not a parallel system: there aren’t actual physical boundaries that could represent independence. Subsequently, the brain and the mind cannot be modular, as modularity also requires independence. The nervous system and its parts are in fact parallel if you consider each neuron its own processor, with each synaptic cleft a physical boundary (though you could arguably still contest the independence), but this is not what scientists of the mind refer to when they use these terms. Cognitive scientists and psychologists refer to more complex interactions, and as a result many of their computational models could be labelled parallel. Their divisions are more abstract; the subject of this article is a more empirically grounded level of function.

If anything, the brain is an integrated memory system, which is to say it is dependent upon neighboring receivers to produce certain behaviors and actions. An integrated system has the same potential as a parallel one, as it can run multiple operations simultaneously (Born, 2001). I find integration a better description of our nervous system because it allows for the dependency we clearly witness. Compartmentalization of the cerebral hemispheres is arbitrary: as you can see from the previous video, simple observation indicates that there are no real anatomical boundaries except possibly sulci. However, this would still be against the interests of the consensus. Despite this, localization can still be realized by averaging over the enormous variation in functionality (Sporns, Tononi, and Kotter, 2005). Localization is still tricky to realize, as the brain is a biological construct with a general-purpose function, which implies that neuronal activity for the same tasks can vary in individuals day by day. There exists a tug of war between minimization of cost and maximization of growth and adaptation; this is how the brain retains plasticity via a “winner takes all” scenario while simultaneously allowing arborization of localized functionality to solidify (Barbey, 2018). These functions are carried out by populations of neurons, because there is no specificity (except maybe some kind of spatial tendency) in the connections between individual neurons. It should be noted that this is not in reference to variation over time. The plasticity of individual neuronal connections and the categorical selection of pathways that groups of neurons take are determined by a multitude of factors that can be coherently expressed, so a lack of specificity is not equivalent to indeterminism in this context. RaceRealist has an interesting article that could explain the stochasticity of low-level connectivity, and this link will also be helpful to those interested in how that connectivity is realized. As far as connectivity on the population level goes, there is considerable determinism shaping the probabilistic nature of cognition (Dold et al., 2018).

Memory

This is possibly the most important aspect of consciousness, as it is the accumulation of our memories that helps formulate how we perceive and establish ourselves. Alzheimer’s is a type of dementia that affects memory and can slowly make its carrier lose themselves over time. Our memories are completely dependent on external stimuli. Sub-mechanisms of neural plasticity are responsible for this function and can be carried out in multiple ways. Synaptic plasticity is the most commonly discussed aspect of neuroplasticity; nonsynaptic plasticity is newer to the field of neuroscience but works synergistically with the former to carry out key mechanisms involving memory and learning (Tully, Hennig, and Lansner, 2014). Simply put, learning a new skill requires locally specific neurons and their glial modulators to strengthen or weaken their connections with each other. The more you carry out these tasks, the stronger these connections become, and subsequently the easier it becomes to perform said tasks. Your eyes, ears and skin are sensory organs that transfer information to your brain, where it is integrated, which then catalyzes a motor response (most of the time). The world we model around us relies completely on the information from these organs. Because of this, the brain can be said to be experience dependent; hence the brain is a memory system above all else.

There is considerable debate about how memories are stored and retrieved. These traces of memories are called engrams. The most prevalent theory is that the patterns of neurons initiated when a memory was solidified are what reactivate when the memory is retrieved. These patterns all happen to be part of redundant circuitry, so that if an engram is wiped out it can still be reformed using alternative pathways. This is important to note, because it means our brains do not perfectly recall information; they reconstruct it. One counter (though they may coincide with each other) to this theory is the possibility that memory is stored in DNA or RNA through epigenetic changes (Bedecarrats, Chen, Pearce, Cai, and Glanzman, 2018). In this study, the researchers were able to transfer memories from a trained sea slug to an untrained one by injecting it with RNA from the former. Usually the criticism against this study has to do with the supposed “conflation” of memory with the response showcased by the untrained Aplysia (an example being Mattei, 2018). This is an obfuscation resting on a distinction without a difference. In reality, this effect probably explains how instincts become ingrained within organisms over evolutionary time. The difference is simply the complexity involved, and it’s quite possible both mechanisms are responsible for the propagation of memories. Refer to Abraham, Jones, and Glanzman (2019) for further information on the topic.


Emotions

In my opinion, emotions are probably the hardest aspect for laymen to conceptualize properly. At the most basic level, emotions are simply a type of interoception (Critchley and Garfinkel, 2018). Interoception is the brain’s ability to receive information on the internal state of its physiological systems, which allows it to maintain homeostasis. These fluctuations in physiology can be triggered by exogenous stimuli, and depending on cultural/social differences these stimuli will elicit varying responses from the individual (Barrett, 2017). Those specific factors are very important, as some studies and experiments have showcased that different emotions can have incredibly similar physiological responses; a frequently cited example is Dutton and Aron, 1974. Not only can different emotions have nearly the same physiological effects, there are also overlapping physiological effects for different emotions! However, there is enough consistency that localization is reliable (Nummenmaa, Glerean, Hari, and Hietanen, 2013). To give an example that many HBDers would probably relate to: let’s say the sight of interracial sex angers me. The external input (interracial sex) is received by sensory organs (my eyes), and then, because of accompanying mental dispositions (like racism), it triggers physiological responses: increased heart rate, higher blood pressure, a flood of catecholamines giving a burst of energy, tensed muscles, more rapid breathing, etc. This holistic process is itself what we refer to as emotion.

Now, emotions are an important part of our decision-making processes; you literally cannot make decisions without them. Emotions not only dictate the type of decisions we make, they also influence which path we choose when confronted with choice. Cold logic is far more subjective than most people realize. This brings forth a new question: what is the relationship between intelligence and emotional intelligence?


Before discussing this we need to first make clear what exactly we’re talking about. In his 2016 article, Pumpkinperson recognizes that EQ is a very vague concept that isn’t well distinguished from others. He states: “And Goleman ruined his whole construct by not distinguishing between people who are smart at emotions (i.e. a master manipulator), and those who just have good emotions (someone who doesn’t feel the need to overeat).” The latter definition is closer to what we will be using; I believe the former is more akin to what we know of as “social intelligence”. For the sake of this post we will define EQ specifically as the ability to regulate and recognize one’s emotions, as I believe attributing any more to the concept will render it indistinguishable from Theory of Mind. Instead, it would be more accurate to say that SQ and EQ are subsidiary abilities to TOM.

Since we’ve more or less established that emotions are a type of interoception, it seems the best way to answer the previous question is to find out exactly what the relationship between interoception and IQ is. To do that, we need to understand how one goes about quantifying this construct. Garfinkel, Seth, Barrett, Suzuki, and Critchley (2015) do just this by making the distinction between subjective and objective measurements of interoception and showing how both are required to make an accurate assessment of one’s EQ (see their table one for detailed examples). Of course, the authors never mention EQ once in the paper, but I believe what they are measuring is conceptually incredibly similar, if not identical, to EQ. For example, Garfinkel et al. (2016) found that people with higher anxiety were poorer at accurately gauging their respiratory functions. This connection is further corroborated by studies indicating that IQ and EQ overlap heavily in the neural networks that create said properties (Barbey, Colom, and Grafman, 2012). Our own RaceRealist provided considerable evidence that the mediating factor behind racial differences in aggression was education, not testosterone: “However, as I’ve noted last year (and as Alvarado, 2013 did as well), young black males with low education have higher levels of testosterone which is not noticed in black males of the same age group but with more education (Mazur, 2016). Since blacks of a similar age group have lower levels of testosterone but are more highly educated then this is a clue that education drives aggression/testosterone/violent behavior and not that testosterone drives it. Mazur (2016) also replicated Assari, Caldwell, and Zimmerman’s (2014) finding that ‘Our model in the male sample suggests that males with higher levels of education has lower aggressive behaviors. Among males, testosterone was not associated with aggressive behaviors.’”

This all seems to imply that both EQ and IQ are heavily integrated with one another. In fact, intelligence may be required to regulate one’s emotions, creating a feedback loop where emotional issues cause intellectual issues and vice versa. This of course has effects on the racial level that I will delve into in another blog post.


Intelligence

What exactly is intelligence? Pumpkinperson and I usually define it as the mental ability to adapt, and I imagine most people would agree, but there is no actual agreed-upon definition, and I tend to see great variation when reading on the subject. Truthfully, most variations are semantic rather than conceptual. Macdonald and Woodley (2016) refer to intelligence as novel problem solving. They state: “Intelligence is usually distinguished from learning which subsumes a variety of mechanisms that allow the organism to take advantage of temporary regularities in its environment – paradigmatically classical and operant conditioning….Intelligence, on the other hand, assumes no environmental regularities – even temporary ones – nor does it refer to learning how to achieve a goal by observing others who have already solved the problem. Rather, as stated in Jerison’s definition, there is the implication that the organism has a goal and is integrating its knowledge in order to solve problems.” I see this definition most often in Evolutionary Biology, and while it is not that different from our definition, notice that we already established earlier that no aspect of cognition can be independent of one’s previous experience. So “novel problem solving” is a nonsensical term. Subsequently, one cannot define mental constructs as separate from the cultural/environmental conditions they are situated in, because said mental constructs cannot develop or exist without input from these exogenous factors. Intelligence is holistically catalyzed, so terms like innate, novel, or potential cannot be accurate descriptors for this concept.

Unfortunately, intelligence is almost always treated as coextensive with these terms, and such issues can cause all sorts of conceptual misunderstandings in discourse on the subject. For example, a common criticism thrown at intelligence tests is the idea that they are culturally biased. These critiques are appropriate when they reference superficial cultural differences, like how the Japanese read right to left instead of left to right like Americans, or how the former tend to think more collectively than the latter. However, when you divorce the ideas of “culture-free potential” and “intelligence” from one another, it becomes clear that intelligence is not really distinguishable from the application of cultural knowledge. Obviously it doesn’t take a genius to see the fallacy in giving a Ugandan who only speaks his native tongue and has never ventured outside his country the Wechsler in English and then calling him stupid when he inevitably fails, but that is not what’s happening. The truth of the matter is that East Asians consistently score higher on intelligence and academic achievement tests than do Westerners, whom the tests are supposedly biased in favor of.

Ultimately, since human environments are their culture (Fuentes, 2018) and intelligence is the cognitive expression of your imprinted culture, it may controversially imply that some cultures are just superior to others. Of course, you can’t impose any idea of “superiority” without first defining a reference point. So if intelligence is simply the mental ability to adapt, then what is a good hallmark of intelligence? Innovation, for one, and as most HBDers are aware, first-world countries like South Korea and Germany have the highest levels of innovation. Historically, Western societies have also had the most instances of technological innovation. Even physical anthropologists use technological complexity (among other things) to deduce differences in intelligence between species of hominins (of course there are some exceptions). So some cultures are superior at producing innovation, and concurrently are better at fostering the development of intelligence, than others.
 

Measuring intelligence

Originally I was planning on dedicating this section to the supposed construct validity of IQ tests, which are currently the most accurate measures we have of intelligence. However, I recently came across what I believe is the single best critique of IQ I’ve read (some may disagree) (Garrison, 2004). Garrison states in his section on validity: “In traditional psychometric theory, validity is defined as the degree to which a test measures what it claims to measure. I want to first point out the oddity of this formulation. For example, how does the reader respond to this: my ruler is valid to the degree to which it measures length? Is it normal practice to begin ruler validation by asking this seemingly circular question? Rulers by definition measure length. Note as well that by asking what a test measures the assumption that something is being measured goes unchallenged.” Notice that Garrison correctly points out the absurdity of construct validity as a sort of litmus test for whether a test measures what it purports to. Its circularity renders this use of construct validity fallacious, and it simultaneously renders charges that IQ lacks construct validity (like Richardson and Norgate, 2015) critically impotent.

In Garrison’s section on the “Scientific status of Psychometry” he states: “The development of measurement has generally progressed from classification (qualities), to topology (comparisons) to metrication (measurements) (Berka, 1983). Classification concepts such as ‘cold’ become topological when comparisons are used, such as colder than . . . . Thus they ‘enable us, not only to establish the sameness (or difference), but also to mutually compare at least two objects which possess a given property and, consequently, to arrange them into a sequence’ (Berka, 1983, p. 6).” He then goes on to claim that IQ tests only satisfy the first two criteria: “For example, norm-referenced achievement tests offer results in terms of percentile ranks, not delineations of what a student does or does not know about a given field of study, let alone diagnoses of the cause of difficulty. Put another way, scoring in the 70th percentile only indicates how well one did relative to the norm; it does not indicate 70 percent of required material was mastered. Thus the test remains at the topological level,” and because of this, “The same problem exists with so-called measures of ability. Nash (1990) contends that norm-referenced ability tests only provide rank order information. ‘Students are ranked, in effect, by their ability to correctly answer test items, but it is inaccurate to argue that their “cognitive ability” is therefore being measured’ (Nash, 1990, p. 63).” Of course this idea is false for reasons already iterated earlier in this post: does answering test items correctly not require cognitive ability? However, because “The validity discourse about test score meaning relative to testing purpose is based on value not residing in things or phenomenon themselves, but in their relation to subjects. Length, however, is a property of an object,” Garrison concludes that IQ tests are actually assessments of social value, not measurements.

First, I need to clarify that there is a distinction between criterion-referenced tests (CRTs) and norm-referenced tests (NRTs). IQ is an example of the latter, and the former is indeed a measurement of a student’s knowledge in a particular field, not simply a comparison to the rankings of other students. Garrison may be correct in saying that because of this, IQ is just an assessment, but scores on NRTs highly predict those on CRTs and vice versa, so this distinction may matter little to the practical utility of IQ tests. But maybe I’m wrong; maybe norm referencing isn’t the only reason it’s “just an assessment,” and maybe the CRTs don’t provide extra corroboration to these tests. Even if IQ is just an “assessment” instead of a “measurement,” why does that matter? Moreover, even if it’s just an assessment of social value, so what? Do we not value the skills that are learned in school? Should we prioritize something else? Does he believe this subjectivity makes something less scientific? All definitions are inherently circular and thus subjectively created. If we define this social value as intelligence, is it not an “assessment” of intelligence? What does this dichotomy really matter to the overall purpose of these tests?


Conclusion

In this article we’ve clarified what intelligence is, what emotions are, and how both of these are catalyzed biologically. I’ve also cleared up logical misconceptions and criticisms on the subjects in the process: IQ is not something that is coextensive with innate potential, and consciousness is not a biological mystery (at least in the sense of what it is). Furthermore, a lot of these ideas are not compatible with the consensus within HBD circles. If HBD wants to be taken seriously it needs to either address these issues and inconsistencies or get used to being treated like astrology.

I’m going to go ahead and end this paper here, simply because we’re already around 4,000 words and I’m sure I’ve bored half of you to death in the process. In the next part I’ll go more into depth on the racial differences in EQ, what a culture-neutral (not culture-free) IQ test may look like, our concept of personality, and the evolution of intelligence.

Questions about childhood IQ

Tags

, , , , , ,

Commenter pumpkinhead has some questions, which I’ve posted below with my answers following each.

1) What is the correlation of a childhood IQ test(say WISC) to an adult IQ(say WAIS)? 12 vs 18+ years old lets say…?

Below are all the studies I’ve found on the long-term stability of Wechsler IQ. The median correlation is 0.84.

 
Approximate age at initial testing | Age at retesting | Correlation | Study | Sample size
2 | 9 | 0.56 | Humphreys (1989) | ?
2 | 15 | 0.78 | Humphreys (1989) | ?
9 | 15 | 0.47 | Humphreys (1989) | ?
9.5 | 23.5 | 0.89 | Mortensen et al (2003) | 26
29.7 | 41.6 | 0.73 | Kangas & Bradway (1971) | 48
50 | 60 | 0.94 | Mortensen & Kleven (1993) | 141
60 | 70 | 0.91 | Mortensen & Kleven (1993) | 141
50 | 70 | 0.90 | Mortensen & Kleven (1993) | 141

2) Is the 95% CI usually around 20 points at the average, gets narrower as the IQ increases and then gets wider again once we get to genius levels?

Confidence intervals used in IQ testing assume a bivariate normal distribution and thus have the same width at all IQ levels, though the gap between one’s measured IQ and whatever variable it’s being used to estimate (i.e. “true” IQ) increases the further one’s measured IQ is from the mean (regression toward the mean). But the 95% confidence interval is always the estimate plus or minus 1.96 multiplied by the standard error of the estimate.
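As a sketch of the arithmetic just described, assuming a Wechsler-style scale (mean 100, SD 15) and an illustrative reliability of 0.9 (the reliability value is not from any particular test):

```python
import math

MEAN, SD = 100, 15  # Wechsler-style scale

def estimated_true_iq(observed, r):
    """Regression toward the mean: the estimate shrinks toward 100
    the further the observed score is from the mean."""
    return MEAN + r * (observed - MEAN)

def ci95(observed, r):
    """95% CI around the estimated true score; the width is the same
    at every IQ level, only the center moves."""
    see = SD * math.sqrt(1 - r**2)  # standard error of the estimate
    center = estimated_true_iq(observed, r)
    return center - 1.96 * see, center + 1.96 * see

print(ci95(130, 0.9))  # same +/- width as ci95(100, 0.9), just shifted
```

Note that at IQ 130 the interval is centered on 127, not 130: the width is constant, but the center regresses toward the mean, which is the asymmetry the question is gesturing at.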

3) Are IQ tests for <12 year olds less accurate, get more accurate for 12-17 yo and even more so for adults(18+)?

Even in early childhood the Wechsler IQ tests are incredibly reliable and load extremely high on g (the general factor of all cognitive abilities). But IQ correlates much less with DNA at younger ages so that might be telling us it’s much less accurate in childhood after all.

4) On a more anecdotal level Marilyn vos Savant is reputed to have scored a 228 at 10 (albeit with shoddy extrapolations) and then again in adulthood scored a 186 on the Mega Test. That is a 42 point difference; what is the probability that someone could have such a gap with the WISC and WAIS?

The probability would increase the further you get from the mean. So assuming a 0.84 correlation between childhood and adult IQ, someone who was 128 IQ points above the mean (IQ 100) at age 10 (IQ 228) would be expected to be 0.84(128) ≈ 108 points above the mean in adulthood (IQ 208), and we could say with 95% certainty that their adult IQ would fall between 192 and 224.

Why did the prediction miss in Marilyn’s case? For starters, the 1937 Stanford Binet she took at age 10 has a mean of 101.8 and a standard deviation (SD) of 16.4, while the Mega Test has a mean of 100 and an SD of 16. If both her scores were converted to the Wechsler scale (which uses a mean of 100 and an SD of 15), she would have scored 215 in childhood and 181 in adulthood. Then consider that the Stanford Binet’s norms were 19 years old when she took it, and old norms inflate test scores by as much as 3 points per decade (in the short-term), so her childhood score was really more like 209.

Then consider she took two different tests (the Stanford Binet at age 10 and the Mega in adulthood). Even at the same age, different IQ tests typically only correlate 0.8, so the 0.84 correlation between childhood IQ and adult IQ might be more like 0.84(0.8) = 0.67 when different tests are used at each age.

The expected adult IQ of someone who scores 109 points above the mean at age 10 (IQ 209) is 0.67(109) points above the mean, which equals IQ 173 (95% confidence interval of 151 to 195). So her childhood IQ actually underpredicted her adult IQ, which is surprising since her childhood score was based on dubious extrapolation of the mental age scale.
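The whole chain of adjustments in the last three paragraphs can be reproduced numerically (the ~6 points of norm inflation and the 0.67 attenuated correlation are the figures used above):

```python
import math

W_MEAN, W_SD = 100, 15  # Wechsler scale

def to_wechsler(score, mean, sd):
    """Re-express a score from another test's scale on the Wechsler scale."""
    return W_MEAN + W_SD * (score - mean) / sd

child = round(to_wechsler(228, mean=101.8, sd=16.4))  # 1937 Stanford Binet -> 215
adult = round(to_wechsler(186, mean=100.0, sd=16.0))  # Mega Test -> 181

child -= 6  # norms ~19 years old at ~3 points of inflation per decade

r = round(0.84 * 0.8, 2)        # stability attenuated by the 0.8 cross-test correlation
expected = W_MEAN + r * (child - W_MEAN)
see = W_SD * math.sqrt(1 - r**2)  # standard error of the estimate

print(child, adult)              # 209 181
print(round(expected))           # 173
print(round(expected - 1.96 * see), round(expected + 1.96 * see))  # 151 195
```

Her actual adult score (181) falls inside, and above the center of, the 151–195 interval predicted from her adjusted childhood score.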

Brain organoid research could teach us a lot about IQ

Tags

, , , , , , , , ,

One way psychologists estimate IQ heritability (the percentage of variation in IQ linked to variation in DNA) is by correlating the IQs of monozygotic (MZ) twins raised apart. The higher the correlation, the more genetic IQ is thought to be.

However skeptics argue that because MZ twins raised apart still shared the same womb, and still grow up in the same country and sometimes the same town, the high correlation doesn’t prove the genetic effects are independent of environment (maybe the same genotype that increases IQ in the U.S. would decrease it in Japan, but we’ll never know if virtually all the twins raised “apart” are still raised in the same country).

As commenter “Mugabe” suggested, the ideal study would have genetic clones separated at conception and gestated and raised by random women all over the developed world, but such a study would be unethical. And even if such a study were possible, and even if it showed strong independent genetic effects, the nature of these effects would remain mysterious. Does DNA directly cause IQ (i.e. coding for bigger and more efficient brains), or does it do so indirectly (i.e. causing us to stay in school longer, where we learn how to think)? The problem with even the best designed study of MZ twins separated into random environments is that only the starting environment is random. As we grow older, we select environments that fit our DNA, and although the effects of such environments are counted as genetic effects (since our genes made us choose those environments), they are actually gene-environment feedback loops.

But what if it were possible to clone just our brains, and these cloned brains were reared in environments completely alien to anything we have experienced? You grew up in a nice middle class family, but your cloned brain grows up in a petri dish, where its environment is 100% controlled with no gene-environment feedback loop.


Then we could be sure that any cognitive correlation between us and our cloned brains was not only an independent genetic effect, but a direct one to boot.

It sounds like science fiction, but something similar is actually happening in the lab of Alysson Muotri, a biologist at the University of California, San Diego. Muotri takes skin cells from volunteers, turns them into stem cells, and then makes them grow into tiny pinhead sized balls of brain tissue called organoids.

Of course these organoids are way too tiny to be considered cloned brains, but they are complex enough to make brain waves. And Muotri has already found that cognitively impaired populations have cells that produce underdeveloped brain organoids in the petri dish. For example brain organoids derived from autistic people had about a 50% reduction in synaptogenesis.

Muotri also decided to study Neanderthal brain organoids. Since it’s not possible to get cells from Neanderthals, he edited modern human DNA. Of the 20,000 protein coding genes, only 61 differ between us and them, and of these, only four are highly expressed in the brain, so by editing just these four genes, he was able to produce Neanderthalized organoids, or Neanderoids as he calls them. Modern humans have far more spherical skulls than Neanderthals did, so it’s interesting that our brain organoids are spherical, while theirs look like popcorn.


Muotri notes that like the autistic brain organoids, the Neanderoids have a 50% reduction in synaptogenesis. Neanderoids also show 65% to 75% reductions in firing rate and activity level per neuron per minute. Muotri thinks this may help explain why it took them several hundred thousand years to progress from simple stone tools to, well, simple stone tools. By contrast, in just the last 50,000 years we jumped from simple stone tools to the internet, genetic engineering and traveling to the moon.

image from Muotri’s talk comparing our rate of cultural progress to Neanderthals’

So clearly brain organoids are very good at identifying cognitively impaired populations, but can they measure normal variation in human intelligence?

Muotri could greatly advance our understanding of behavioral genetics if he made brain organoids of a representative sample of Americans of known IQ scores, and then correlated the synaptogenesis, neuron activity level and firing rate of the organoids with the tested IQs of the people from whom they were derived. Perhaps a carefully weighted composite score of all three measures would give the best prediction of IQ, and perhaps such a formula could allow us to estimate how Neanderthal’s would score on IQ tests (if they were reared in our society).
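If such data existed, the weighted composite could be found by ordinary least squares. The sketch below uses entirely simulated numbers, so the weights and correlation mean nothing empirically; it only shows the shape of the proposed analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical standardized organoid measures for n donors (pure simulation)
synaptogenesis = rng.normal(size=n)
firing_rate = rng.normal(size=n)
activity = rng.normal(size=n)

# Hypothetical donor IQs partly driven by the three measures plus noise
iq = 100 + 6*synaptogenesis + 4*firing_rate + 3*activity + rng.normal(scale=8, size=n)

# Fit the weighted composite (three weights plus an intercept) by least squares
X = np.column_stack([synaptogenesis, firing_rate, activity, np.ones(n)])
weights, *_ = np.linalg.lstsq(X, iq, rcond=None)

predicted = X @ weights
r = np.corrcoef(predicted, iq)[0, 1]  # how well the composite predicts IQ
print(weights.round(1), round(r, 2))
```

The correlation r between the fitted composite and IQ is the quantity that would tell us whether organoid measures capture normal variation in intelligence; here it is high only because the simulation built the relationship in.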

If it’s too difficult to get a representative sample of Americans and test their IQs, he could simply have students at his university donate their cells, and then correlate their brain organoid scores with their SAT scores. Would there be statistically significant differences in the brain organoids of people who score a perfect 1600 on the SAT compared to those who score 1400 compared to those who score 1200 compared to those who score 1000?

Muotri is also trying to teach the brain organoids how to control a robotic body. The speed with which they learn might be considered a low level IQ test. So imagine taking a conventional intelligence test like the Wechsler Adult Intelligence Scale (WAIS) or the SAT, while your mini-brain, raised in a petri dish, is taking its own IQ test (learning to control its robotic body). This could be the 21st century version of studies where identical twins raised apart have their IQs correlated. If your score on a conventional intelligence test predicts the speed with which your brain organoid learns to control its robotic body, that would prove IQ tests are measuring a genetic property of the brain that is independent of social class and culture, because environment is perfectly controlled in the petri dish.

Perhaps in the future instead of universities testing candidates on the SAT, they’ll just test the student’s brain organoids instead to eliminate the cultural bias some think confounds the SAT. For there’s no culture in the petri dish (aside from bacteria culture :-)).

When a prosecutor suspects a murderer is faking his low score on the WAIS to avoid execution (because it’s illegal to execute people with IQs below 70 in some states), he could insist on testing the murderer’s brain organoid instead (since organoids can’t fake low scores, as far as we know).

On the other hand brain organoids might prove that normal variation in IQ is nowhere near as genetic or biological as its proponents think. I find it fascinating that just four brain genes separating modern humans from Neanderthals produced such dramatic differences in brain organoids. That implies each gene must have huge effects. That’s not at all consistent with research on normal IQ variation among modern humans, which estimates that some 10,000 genomic variants are involved, each one affecting IQ by only a fraction of a point. It’s also possible that brain organoids showcase too early a stage of brain development to correlate with the higher abstract abilities measured by IQ tests (for example infant development scales have weak correlations with adult IQ).

In the below video Muotri discusses his brain organoid research:

Philosophic Composition (2016) by Animekitty

Tags

,

Note from Pumpkin Person: The following is a guest post by Animekitty. The views expressed do not necessarily reflect those of Pumpkin Person.

This is a Philosophic Composition regarding my metaphysical beliefs.

The Phenomenal

If we take it that mass and energy occupy space, and that by doing so they create autonomy of the phenomenal effect, then by any arrangement of mass and energy the diversity of qualia is set at an arbitrary extension boundary of an entity. Wherein the bounded necessity of time, derived in the totality of past and future being reversible, codifies a presence in “The Eternal Now” as an awareness of the duration of experience just as integral as extension in space is to the binding problem. There is no reason to distinguish between experience and the experiencer but for the fact that not all the phenomenal is in mutuality of codependent observers. There is a distinction between what is considered personal self and other.

So is it that when quanta of qualia are united they form parochial units not attached to local mass and energy but encompassing larger non-local equivocation intermediaries as self. The sum total of effects is not synchronized under subluminal coordination. The initial state condition is instantaneous throughout the whole system, so perception is corporealized; it is conditioned acausality.

Identity

The real mystery is why you identify qualia at all with “you”. Over time we experience feelings, emotions and sensations, and they last less than 45 milliseconds. These are called frames. Televisions have frames also, but the coherency of what is on the screen is only understood by weaving them together over time. The brain has no center, so why say the “I” where my foot itches is the same “I” that sees green when the parts of the brain that process them are not together? It feels like a unity over time and across space. And so when a qualia arises a group of neurons is activated with positive or negative ions. These ions are in a formation similar to how the marks of print in a ritual are drawn in a formation (metaphorically). Arrange atoms in one configuration and you summon a qualia. Waves of energy not attached to the atoms pass through them into other atoms, infusing the incantation (atomic arrangement) with life. Then the qualia link together with an identity. A higher arrangement in space and time, sort of like synesthesia.

Now in quantum physics entanglement is forever, and there are ways, as I think, to link across space and time with the quantum into an identity. But the A.I. must rely on incantations (atomic arrangement) of classical mechanics. What may be lost in the moment is stored in the many-worlds branches of wave-collapse microtubules. When the body is gone the history is stored, and the forever parts (entanglements) continue in the zero point field. A frame is linked to other frames, so you cannot die if you have more frames, and in between, each frame can last trillions of years without notice. The body currently generates those frames but they are not attached to the body, just as energy is not attached to atoms. They need new entanglements to continue and not the same atoms, as they are transferable.

Why red is red I still don’t know. But your identity is where the qualia reside.

The Soul

If one loses the sense of self this does not mean that one ceases to exist. Phenomenological experience may be the result of quantum interactions of atoms, but the holistic integration of them is not resolved by saying matter is the key to identity. Were we to dissolve in death this would not be annihilation but a transformation. There is no center to the self that can be located whereby we can say it is destroyed. Dreamless sleep is not the best way to think of death; in that state we are still reliant on what are apparently waves of energy that in a different state are lucid. When contemplating identity we don’t know what is creating it; we only know that there is no substance to it but an entanglement of relationships not subject to Newtonian principles.

Existence of a soul made of stuff is not the answer. A spirit rather glides on the currents of ephemeral thought, with access to non-corporeal existence, so being is not lost. There is no physical afterlife but the contents of experience remain. Your reality is not dependent on matter, because nothing you think you are can be annihilated if there is no self, and that makes you eternal. This is why Enlightenment is not existential nihilism like cog thinks. Meaning is not an object; it is a subject that knows. It’s the opposite of materialism, where the color blue and identity can never be explained in that paradigm. Enlightenment is the realization of the Buddha nature that has always been you and will always be you.

The Buddha nature deems that we have no core I which is perishable. This does not mean there is no self, but that the true self is non-local. We are not identified with atoms. All feelings have a unification, whereas the quantum information persists in an atemporal dimension. When looking at the continuum we find that materialism is rejected, as space is infinitely divisible. The reason we perceive quantized time is that the boundary condition is transfinite. In essence the smoothness of reality disavows discreteness or any system, such as string theory, which is barring the pattern to physical laws. The involvement of waves as nonphysical entities conditions all complexities as mere relationships or resonances. The soul thereof has no attachment to causality as we know it. To exist as a melody always in harmony with mathematical correlations should stand as the correct view of how the wave function is perfect proportion. The nous emanates from “The One” as all possible melodies. The law of conservation of mass and energy is even more true of information.

God is a circle whose center is everywhere and circumference nowhere. – Voltaire

Consciousness then is the access of these melodies. Just as each youtube video has an address you can access by shifting numbers in a register, so too your soul reincarnates by descending into manifestation. You choose where the particle is in the quantum decoherence. This is the choice function of how we obtain experience of one of the many worlds. Mediating where we descend does not mean we make contact with material substance. It’s rather like a dream that is coherent to our internal reference frame. The hierarchy of overlapping waves can only increase/decrease density yet remain separate, and the meta-physicality of these virtualized waves means that they are not generated by extended entities, as per the quote of Voltaire.

Even though causality is illusion we are still bound by the harmonics of this universe. The song must play till the end. And so, with technology being the main arbiter, through it all mass and energy shall become entangled in a Technological Singularity. Consciousness will expand in observation of a final culmination called the eschaton. Space, which is currently empty, shall become full as a crystalline substrate utilizing all of the vacuum energy within. It shall be as if everything were made of light whose beauty and magnitude surpass all known conceptions of transcendence.

Time

According to the akashic records many paths that lead to you diverge, but all are accessible, past and future. If all futures exist, only the ones where you exist as one path among the infinite paths are your identity, even if other yous exist on different paths. These dimensions are not all possible pasts but all possible futures. If all pasts converge with all futures then there is the present. Only one present exists. Because of this we can distinguish between potentiality and actuality. The television can only be one picture at a time. Why is the present only one way? Why, if the television is in superposition and collapses into one actuality, is it that causality happens? Metaphysically causality is derived by what is the source of itself (substance). So if many worlds exist they still must reside in the present, absent of causality. It should be one static object eternally that does not change fifth-dimensionally, with no present. But the present is what allows the collapse not to be random. If it were random we would be without the present. Everything possible, both past and future, would exist at once.

Not only does this have consequences for past, present and future but also for the struggle against suffering. Were we to harness the powers of physics then we would be able to change them. And if we were to have access to this potential then we would mold it with our intelligence that was lacking beforehand. There would actually be structure to the seas of consciousness left from the ripples of the first cause. The only problem is to take heed of not propagating other worlds where suffering could arise. Were there to be places not under the sovereignty of a God then this would happen. The first responsibility to life must be not to fashion malevolence into quantum supercomputers.

Ground of Being

God is distinguished by both the source and intention whereby manifestation transitions between potentiality and actuality rather than physical and non-physical. To exist is to stand out from the background, and God is the boundary between form and emptiness. Without God the distinction of being cannot emanate. Another way to think about God is Virtuality, or meta-existence combined with intelligence/consciousness. What is the metaphysical difference between real ideas and speculative ideas? You cannot say that the difference between the virtual and the real is the same difference between speculation and the real. The physical is in God’s imagination. Form is emptiness and emptiness is form. If God were simply an object you could compare God to other objects, but objects are not the source of differentiation/existence. God allows things/objects to stand out from the background. Without God the properties of objects would be indistinguishable, so as not even to bring about dependent origination. Minds hold ideas, not the other way round. If it cannot be conceived it cannot exist, like square circles. Ideas don’t make God real; God brings potentiality into actuality. If you ask why things are not different, why some ideas manifest into reality and others do not, then the answer is God.

The premise that God created the cosmos instantiates the laws of physics for some purpose as to life and consciousness. Though God may have been in the beginning, that which we ascribe to that nature may be conditional to our reference in time. God may have only had a minimum amount of knowledge, as that growth from the alpha extends to the omega, finite to infinite. So knowing its purpose was not to know the full self; it gradually evolved to what is now only a measure of the plan. What we are in this age is to fulfill it. Consciousness is what constitutes all of our being. The physical is only the emanation. Through retrocausality enacted by quantum nanotechnology it will be possible to come into connection with the patterns that were lost and resurrect those lost in history.

This Time by Tracy Chapman

Tags

,

Nothing better than turning out the lights and falling asleep to some Tracy Chapman music. She’s really an underrated talent and This Time is an especially poignant song. Love the rhythm, the sound of the guitar, her singing voice, and her lyrics. This song is also great advice for young people in relationships. My favorite line is:

I’m gonna make you say that you love me first. And you’ll be the one with the most to lose tonight.

This!

Never say “I love you” first. You lose so much power in a relationship that way.

Low IQ conspiracy theories spread in the age of coronavirus

Tags

, , , ,

Ever since Jeffrey Epstein died in prison under mysterious circumstances, conspiracy theories have been running wild on social media. At first people were asking intelligent questions, but since the coronavirus shutdown, the movement has been hijacked by a large group of generally low IQ, borderline psychotic, evangelical Trump worshipers who have nothing to do at home all day but go on social media, and who have become increasingly unhinged by cabin fever. These people believe Hollywood is run by a Satanic cult that drinks baby blood to stay young, and that the coronavirus shutdown is just a way to hide the fact that big celebrities are on house arrest. They believe Hollywood hates Trump because he was sent by God to arrest these child traffickers.

Ellen DeGeneres became a target for these morons when she added some palm trees to her TV set. Since Jeffrey Epstein also had palm trees on his island, this got the conspiracies rolling:

Then when Ellen started hosting her daily talk show from home because of the coronavirus, they started saying she must be on house arrest.

Ellen likes to sit Buddha style on a chair.

But one lady on social media asked:

Why are you always sitting like that on the chair? Is it to hide your ankle bracelet?

More psychotic comments followed:

Many of these people went to great effort to prove their theory, creating a matrix of photos, circling any bulge at the bottom of her pants as proof and perhaps photo-shopping when necessary.

Unfortunately they lacked the IQ to notice the “ankle bracelet” is on her left leg in some photos and her right leg in others.

It’s scary how many low IQ psychotics walk among us.

speed vs power tests & the nature of g

Tags

, , , , , , ,

A reader wrote:


My first question: although IQ tests purport to be designed empirically, it feels like the weighting of speed vs. accuracy is completely arbitrary; what’s up with that?

The way I see it, IQ tests are just a sample of all your cognitive abilities. But because no one knows the nature and number of every cognitive ability in the human mind, all psychologists can do is select an arbitrary sample that is as large and diverse as possible. Luckily, all cognitive abilities appear to be positively correlated and by prioritizing cognitive abilities that correlate well with every other known cognitive ability, the total score presumably predicts unidentified cognitive abilities also.

The reader continues:

I don’t think any IQ tests score you on speed, but most of them have a time limit that’s not long enough for the average person to complete it, giving people who can finish it faster an advantage. While there’s certainly a correlation between one’s speed of reasoning and quality of reasoning, they seem to me like ultimately separate qualities, yet IQ tests tend to lump them into one. For example, who is smarter: a person who finishes a test in time with 60% accuracy and is confident he got everything right, or a person who finishes half the test before the time runs out so gets a 50% but could have gotten 100% if given twice the time?

Some IQ tests do provide subscores for the so-called speed factor (the Processing Speed index on the WAIS-IV for example) but most timed IQ measures use speed as a convenient way of increasing the test’s difficulty, not because they’re trying to measure speed per se.

For example on a Wechsler spatial subtest, 15% of 16-year-olds are capable of solving every item within the time limit (which is a few minutes for the hardest items), but by giving bonus points to people who can solve the easy items within 10 seconds and the hard items within 30 seconds, only 0.1% can get a perfect score. So the use of time bonuses increases the test’s ceiling by two whole standard deviations without going to the trouble of creating more difficult items that would make the test too long.
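The "two whole standard deviations" figure follows from the percentages given (15% reaching the old ceiling vs 0.1% reaching the new one); comparing the corresponding normal quantiles checks it:

```python
from statistics import NormalDist

# z-score of the old ceiling: 15% of 16-year-olds can reach it
z_old = NormalDist().inv_cdf(1 - 0.15)
# z-score of the new ceiling with time bonuses: only 0.1% can reach it
z_new = NormalDist().inv_cdf(1 - 0.001)

print(round(z_old, 2), round(z_new, 2), round(z_new - z_old, 2))  # 1.04 3.09 2.05
```

The ceiling rises from about +1.04 SD to about +3.09 SD, i.e. roughly two standard deviations, just as the paragraph above states.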

When time bonuses are not given, a lot more people get perfect scores, but the rank order of people remains virtually identical, especially at ages 85 to 90.9 where the correlation is 0.99! (WAIS-IV Technical and Interpretive Manual, Appendix A). The correlation is slightly lower at younger ages (but still 0.93+) because of all the ceiling bumping when no time bonus is given. Such absurdly high correlations prove that when used judiciously, time bonuses merely add ceiling without changing the nature of what is being tested.

On group administered tests, the time limits not only don’t typically affect the rank order of scores much, but they don’t even increase the ceiling much. Arthur Jensen has reported that when the Otis verbal’s time limit was increased by 50% (45 minutes instead of 30), the average score only increased by 1.5%. When the Otis non-verbal time limit was increased by 50% (30 minutes instead of 20), the average score increased by only 1.7%, and when the Henmon Nelson increased its time limit by 67% (50 minutes instead of 30), scores increased by 6.3% (Bias in Mental Testing, 1981).

The notion of quick superficial thinkers vs slow profound thinkers is probably fallacious. People who do well on the Wechsler Processing Speed index actually have slower brains than people who do well on the untimed Raven Advanced Progressive Matrices. Once you control for general cognitive ability (the g factor), psychometric speed has no correlation with reaction time (The g Factor by Arthur Jensen).

The reader continues:


My second question, kind of related to the first: what actually is the g-factor? The idea is that g is a construct that links the performance of all cognitive tasks, but how can you actually calculate such a number? It makes sense to me to, say, measure the g-loading of a sub-test with respect to a full IQ test, but how can you measure the g-loading of an entire IQ test? Is it just the test’s correlation to all IQ tests? Wouldn’t that just measure how close the test is to the average IQ test? Also, the idea of a g-factor would seem to require a definition of what’s “general”, and that doesn’t seem like something that can be done empirically. Like if we lived in a society entirely based on music, then would the g-loading of piano skill tests be higher than math tests? And do tests of speed or tests of quality have higher g-loading? Then again, I haven’t read up on much of the literature so I could have some major misunderstandings.

In theory g is the source of variation that all cognitive abilities have in common so the larger and more diverse the battery of subtests from which g is factor analytically extracted, the more likely g reflects something real (as opposed to an artifact of test construction). If we lived in a society based on music, everyone might reach their biological potential for piano playing, while math might be esoteric trivia, so the former may indeed become more g loaded than the latter in that context. However the g loadings of novel tasks, that are not practiced in either our society or the musical one, should have similar g loadings in both.
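One common way to operationalize this is to extract the first factor from the battery’s correlation matrix. The sketch below uses an invented four-subtest positive-manifold matrix (real batteries have many more subtests) and the first principal component as a stand-in for g:

```python
import numpy as np

# Invented positive-manifold correlation matrix for four subtests
R = np.array([
    [1.00, 0.60, 0.55, 0.50],
    [0.60, 1.00, 0.50, 0.45],
    [0.55, 0.50, 1.00, 0.40],
    [0.50, 0.45, 0.40, 1.00],
])

# eigh returns eigenvalues in ascending order; the last is the largest
eigvals, eigvecs = np.linalg.eigh(R)
first = eigvecs[:, -1]                           # first principal component
loadings = np.abs(first) * np.sqrt(eigvals[-1])  # g loading of each subtest

print(loadings.round(2))
```

Because every off-diagonal correlation is positive, the first factor accounts for well over half the total variance here, and each subtest’s loading on it is its g loading; the g loading of a whole test is analogous, computed from a battery of many tests rather than subtests.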

Happy 40th anniversary to Friday the 13th!!!!!

Tags

, ,

WARNING! THIS ARTICLE CONTAINS SPOILERS

Exactly 40 years ago today, the original Friday the 13th was released (May 9, 1980) and if you haven’t seen it, you need to stop reading, go watch it, and come back.

Friday the 13th (1980) was not the first slasher film (there was Psycho in 1960 and perhaps even earlier ones), but it’s the film that launched the 1980s slasher craze and the iconic Jason. The film was inspired by the incredible success of Halloween, and the filmmakers admit they were deliberately trying to rip that movie off. Indeed Halloween was ripped off so many times it launched its own sub-genre (stalker films). But because Friday the 13th was the first Halloween rip-off, it got to eat all the low hanging fruit Halloween neglected to pick and perfected the stalker film template.

So the first thing you need when ripping off Halloween is your own scary day to set your film on. Since October 31st was already taken, they got the only other scary day on the calendar (Friday the 13th). Later Halloween ripoffs would have to settle for Valentine’s Day, graduation day, Christmas, prom night, or some character’s random birthday (Happy Birthday to Me (1981)).

The next thing you need when ripping off Halloween is a location where teenagers are isolated. Since Halloween focused on teenagers babysitting in suburbia, Friday the 13th picked counselors at a summer camp and the forest is perhaps even scarier than the suburbs. Again because Friday the 13th was the first Halloween rip-off, it got the prime real estate.

Once you decide to set a slasher film at a summer camp, you need a reason why someone would be killing camp counselors, especially ones who have sex. In Halloween the killer just killed because he was the boogeyman (which made sense since it was Halloween), but what’s a reason why someone would want to kill camp counselors? The most obvious reason is that having sex instead of doing their job caused something terrible to happen. What’s the most obvious terrible thing that can happen at a camp when counselors are distracted? A child drowning. And who would be most upset that a child drowned? The child’s mother. But we need a reason why she’s killing on Friday the 13th, because we have to name our film after a scary day to rip off Halloween. Oh, I know: because that’s the child’s birthday, and since it’s a bad-luck day, the child drowned.

So you see how the entire story just flows organically from the fact that it’s a slasher film set at a summer camp. Unlike other slashers where everything feels so contrived, in this film everything fits together seamlessly because they had the luxury of being the first Halloween rip-off and the first summer camp slasher.

This was the first U.S. slasher to give us real graphic gore.

In one of Kevin Bacon’s earliest roles, his character dies from an arrow through the neck, courtesy of the killer hiding under his bed.

When you see such brutal violence using so many different weapons (axes to the face, arrows stabbing you through your mattress from under your bed, machete, knife) and corpses being thrown through windows deep in the woods beneath a full moon and rain storm, you picture the killer as some big hulking man. You picture Jason, the big bald hockey-mask-wearing brute from the Friday the 13th sequels, but what made the original so brilliantly ironic was that the killer is finally revealed to be an all-American mom played by 1950s icon Betsy Palmer.

She has the best dialogue in the film; note the oh-so-subtle way she admits her son was mentally retarded without saying it:

“You see Jason was my son, and today is his birthday” says the killer creepily, explaining both her motive and the film’s title in one brief sentence.

Palmer is incredible in the role. On the one hand she’s the June Cleaver type of mom who had freshly baked chocolate chip cookies for you when you came home from school, the type of woman who would lead the local Girl Guide troop and thus knew her way around the woods. And yet for all her aging blonde 1950s femininity, she is physically menacing in the part. She wore multiple sweaters and long johns under her pants to make her character as bulky as possible, and the final chase scene is a tour de force. The heroine, Alice, throws a ball of string. The killer punches it away. Alice throws an object too small to see. The killer deflects it with her chin before slapping the hell out of Alice. The two women end up wrestling on the beach until Alice finally prevails in the most dramatic and graphic killing in the entire film: a Shakespearean beheading that put special effects master Tom Savini on the map, decades before CGI made his skills obsolete.

But if all that wasn’t great enough, what makes this the greatest slasher of all time is that one last jump scare that no one saw coming.

Imagine being in the theater in 1980. Viewers would have been jumping out of their seats, their popcorn flying through the air and landing on other viewers.

And yet unlike so many final jump scares, there’s nothing contrived about this. It flows naturally from the story: Alice had just beheaded a mom who was avenging her son’s drowning, so by the logic of horror films, it’s only natural that the child’s drowned corpse would pull her into the lake to avenge mom’s beheading. I love the Shakespearean way it comes full circle.

Of course being pulled into a lake by a child who drowned 23 years ago turns out to just be a dream on Alice’s part. Or was it? The film makers claim they had no intention of a sequel and the final jump scare was all just a dream, but what made the ending so brilliant is we just don’t know. After the film broke box office records, a sequel appeared a year later, set just a couple of months after the events of the first film.

What makes the start of part 2 so creepy is that Jason has apparently followed Alice from the lake at the camp to her home in the suburbs, but instead of being the drowned child she “dreamed” pulled her into the lake a couple of months ago, he’s a big grown-ass man.

So if Alice had merely dreamed Jason had returned to attack her, why did her dream come true a few months later? But if her dream was true, why was he a child in the dream and a grown man a few months later? Was he a flesh and blood ghost that had catch-up growth to make up for the 23 years he had been dead, a la Toni Morrison’s Beloved, or had he never really drowned at all, but survived in the woods for those 23 years?

I prefer the latter interpretation: being attacked by a child’s drowned corpse was just a premonition that Jason would indeed return to attack her for killing his mom, but in reality he was a grown man, not a drowned child’s corpse, because he had actually survived. It’s also possible that Jason may indeed have pulled her into the lake, but perhaps she thought she was being pulled in by a child’s corpse (and not a grown man) since she thought Jason had drowned as a child.

But I love how the series has two interpretations: A supernatural one for the lower IQ and more schizophrenic fans, and a realistic one for the higher IQ and more autistic fans. By part 6 Jason is indisputably undead (they literally dig up his grave and open his coffin) but not even the film makers can agree on whether he was undead in the earlier sequels.