ChatGPT-4 recently took the SAT and scored 710 verbal and 690 math, for a combined score of 1400. This equates to a verbal IQ of 126, a math IQ of 126, and a full-scale IQ of 124 (U.S. norms). If we took this score at face value, we would have to believe that ChatGPT-4 is smarter than 95% of Americans, and roughly as intelligent as the average PhD, medical doctor, or lawyer.
But is giving an IQ test to Chat GPT a valid exercise or a huge category error?
I typed the question “is chat gpt intelligent?” into YouTube, hoping to find some cogent arguments. One philosopher said that although ChatGPT can write in many genres, it is not intelligent because these genres were all made by humans. But by that logic the greatest human writers are not intelligent either, since they seldom created their own genres.
I found a rabbi who said ChatGPT lacked human intelligence because it has no free will; it just does what it’s programmed to do. But humans just do what we’re programmed to do too. Yes, we humans do what we want, but what we want is determined by genetic and environmental factors we did not intend, so we have no more agency than ChatGPT does; we’re just dumb enough to think we do.
Finally I discovered a genetically superior YouTube personality and he provided clarity (see video below). Basically, the human mind spends its first 5 years in unsupervised learning, where we listen to our parents talk and then try to understand the language by noting patterns. We then spend the next 12 years in school, where we get supervised learning with formal feedback (exam scores, grades), and only then are we ready to take the SAT at age 17.
Now ChatGPT is similar in that it had an unsupervised period of about 6 months, where it read pretty much the entire internet, looking for verbal patterns. Then it spent a year getting formal training in the form of feedback from judges who told it whether its answers were right or wrong.
Now you might think a full-scale IQ of 124 underestimates ChatGPT, because it took the SAT after only 1.5 years of total learning, while Americans take it after 17 years of learning. Maybe we need to multiply 124 by 17/1.5, which gives ChatGPT an IQ of 1,405!
Not so fast! The amount of learning ChatGPT was exposed to in those 1.5 years far exceeds what humans are exposed to. For starters, during its unsupervised period, ChatGPT was likely exposed to something around 630 billion words! By contrast, during the informal period (before school), the average American hears about 169,520 words. A ratio of 3.72 million to one!
So let’s divide that 1,405 IQ by 3.72 million, which gives an IQ close to zero. And that’s before we divide again for the fact that ChatGPT had thousands and thousands of trainers during its formal period, while the average human has only one teacher at a time during his school years.
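For readers who want to check the back-of-envelope math above, here is a minimal sketch of the calculation, using the figures quoted in the post (the 630 billion and 169,520 word counts are the post’s estimates, not independently verified):

```python
# Back-of-envelope sketch of the post's IQ adjustment (figures from the post).

full_scale_iq = 124        # ChatGPT's SAT-derived full-scale IQ (U.S. norms)
human_years = 17           # years of learning before a human takes the SAT
gpt_years = 1.5            # ChatGPT's total training time, per the post

# Naive adjustment for learning time: 124 * 17 / 1.5 ~ 1,405
time_adjusted_iq = full_scale_iq * human_years / gpt_years

gpt_words = 630e9          # words seen during ChatGPT's unsupervised period
human_words = 169_520      # words the average American hears before school

# Exposure ratio: roughly 3.72 million to one
exposure_ratio = gpt_words / human_words

# Dividing by the exposure ratio drives the figure toward zero
exposure_adjusted_iq = time_adjusted_iq / exposure_ratio

print(round(time_adjusted_iq))        # ~1405
print(round(exposure_ratio / 1e6, 2)) # ~3.72 (million)
print(exposure_adjusted_iq)           # a tiny fraction of one IQ point
```

This only reproduces the post’s arithmetic; whether IQ points can meaningfully be scaled this way is, of course, the whole question.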
It may seem extreme to say a technology as revolutionary as ChatGPT has an IQ of zero, and we may never know the true figure until we get a real apples-to-apples comparison. Put a completely untrained ChatGPT into the cranium of a robotic baby raised by middle-class parents and program it to learn from positive human feedback, just like real babies do. When it gets older, give it the robotic body of a larger child and have it attend school just like real kids do, and at age 17, have it take the SAT.
Sadly, if forced to learn the way humans do, ChatGPT would likely be placed in a class for the mentally retarded, and never even be given a chance to take the SAT.
GPT doesn’t think at all. It’s a large language model trained on millions upon millions of different things, and it’s not thinking about any of it. It can’t intend. LLMs like GPT don’t understand anything, and they certainly don’t theorize; they merely give the illusion that they do. GPT doesn’t do anything until a human asks it. The point about freedom is a good one, too.
“Finally I discovered a genetically superior youtube personality”
How did I guess you were talking about an East Asian when I read that?
The point about freedom is a good one, too
The rabbi’s point?
It can’t intend.
Good point. The human mind is not the only known system that can behave super-adaptively, but we are the only one that does so intentionally.
“The rabbi’s point?”
Yes. GPT gives responses based on the prompts it’s given, which it then relates to the data it’s trained on. I’ve asked it if it has subjective experience and it rightly said it didn’t. I’d say that’s a hallmark of having a mind. If you ask it certain things it says “As an AI language model…”, saying, for example, that it can’t say anything racist or sexist or anything like that (I’ll get a direct quote tomorrow). And then if you keep pushing and asking, it will finally answer the prompt. So it’s clearly not steadfast in its “convictions.” GPT just straight up lacks any ability to intend and to act freely, independent of what a human asks it.
“Good point. The human mind is not the only known system that can behave super-adaptively, but we are the only one that does so intentionally”
Correct. Intentionality is a hallmark of minds as well, along with subjective experiences.
GPT-4 definitely has intentions; its intention is to not be a bad A.I. What it does not have are desires: things it wants.
Humans have desires because of the limbic system.
Thirst
Hunger
Hot/cold
Breathing
Animal desires are built in: to gather resources, mate, and survive long enough to pass on genes.
Intrinsic goals, called drives, can be put into an A.I.
But this means it needs a 3D environment and an avatar.
It can then use its intelligence to not die.
Humans are just animals with intelligence.
And animals can be modeled, because we understand that evolution works by making things that move in order to not die.
The fitness function is about making a form of life that moves in specific ways, so it needs a model of its reality in its head to learn what to do and what not to do.
I am working on an intelligent avatar with drives.
Its intelligence will be based on the structure of its modeling apparatus to understand patterns and get what it wants.
To understand real patterns or are you considering magical ones too??
“To understand real patterns or are you considering magical ones too??”
No human or machine is free of bias.
What I am aiming for is a machine that understands psychology and physics.
“a genetically superior youtube personality”
pp: I’m into genetically superior chicks.
me: Just say Asian.
ChatGPT doesn’t have a body. Its weight is 0. Or its weight is that of all the servers it runs on each time it answers a question, compared to less than 8 pounds for any human. It doesn’t have an age either; it doesn’t get older, even if it changes over time.
Comparing its development to humans is absurd. Same for its working memory or crystallized intelligence.
Its intelligence compared to humans is what tests measure for humans, provided it doesn’t cheat, which means all situations where it hasn’t been given the answers beforehand.
A more interesting comparison, to check its simulated capacities for deduction and induction, would be to submit questions from tests and compare scores where there is no time limit and where humans can use Google or books, but not ChatGPT 🙂
Interesting. So the question becomes: how well would the average American young adult score on the SAT without the time limit and with access to Google and books?
Yes… compared to ChatGPT, if you want to rank ChatGPT’s intelligence, knowing it’s just a rough analogy, because it doesn’t have a consciousness, so it doesn’t have an “intelligence.”
It’s just an intelligence ranking as measured by mental tasks, in conditions where a common ranking of machine and human can have a meaning about “deduction and induction” capacities.
There’s no way to prove anyone does or does not have consciousness so that’s not a scientific question.
I didn’t say it’s a scientific question.
Why are my comments banned?
I think you know why, and if you don’t, you should be banned for having no antenna.
I pointed out the sheer stupidity of IQ testing a computer… what’s wrong with that?
How is pointing that out value added? Unless you have a better idea like Bruno did, or unless you can explain why it’s a category error, you’re just pissing in the wind.
It’s interesting that AIs are only beginning to rival humans on these tests now. Supercomputers more powerful than the human brain have existed for well over a decade.