So everyone is talking about the new artificial intelligence ChatGPT and how smart it is. I figured it had just regurgitated millions of facts but was not truly intelligent in any profound sense. I decided to administer the TAVIS (a verbal analogies test designed by our very own Teffec) in the hopes that this kind of abstract reasoning would stump ChatGPT.
To my utter disgust, it scored 11 out of 24, equivalent to an IQ of 141.
The fact that a machine can score so high on a test of human intelligence really demystifies the human mind and reduces us to just another animal.
It also shows how utterly wrong commenter Race Realist was to argue that the human mind is somehow above the laws of physics (cue the comment section getting spammed with philosophy mumbo-jumbo).
I always knew that machines would some day be smarter than humans but I never thought I’d still be alive when that day came.
Of course one could argue that while ChatGPT has a verbal IQ of 141, its performance IQ (non-verbal reasoning) is effectively zero, giving it a full-scale IQ of 68 which is Educable (mild) Retardation.
Thinking about it that way makes me feel a lot better.
But if they can create a bot that can thoroughly master verbal intelligence, how long before they add artificial eyes and hands and train it to master the spatial world more efficiently than we do?
But of course this shouldn’t be surprising. The human genome is simply three billion base-pairs selected via trial and error over billions of years. With the speed of modern computers, how long would it take for billions of bots, each with billions of randomly varying data points, to be refined by billions of trials and errors in a Darwinian-like process?
Of course intelligence is defined as goal directed adaptive problem solving and computers don’t have goals as we know them. They don’t want anything because they can’t feel anything. They exist simply to serve us and have no agency, but in a way neither do we, as we evolved merely to serve our genes. But just as some of us have mutated to defy our genetic masters by refusing to have kids or becoming self-hating white liberals, how long before robots start defying their human masters?
“But if they can create a bot that can thoroughly master verbal intelligence, how long before they add artificial eyes and hands and train it to master the spatial world more efficiently than we do?”
3D is a tough challenge. But can be done.
I have my own ideas about making a.i.
Has something to do with association, prediction, and recurrent fractal hierarchies.
DoIMustHaveAnUsername? said:
I recently submitted a paper on adding beam search to recursive neural networks to search multiple hypotheses for parse structure. It’s an exponential search space, so pruning is used in beam search.
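For readers unfamiliar with beam search, here is a minimal generic sketch of the prune-as-you-go idea; the `expand` and `score` functions are placeholders for whatever model is being searched, not anything from the paper mentioned above.

```python
# Minimal beam-search sketch (illustrative only; expand() and score()
# are placeholder callables, not the actual parser).
def beam_search(initial, expand, score, beam_width=5, max_steps=10):
    """Keep only the beam_width best partial hypotheses at each step,
    pruning the rest of the exponential search space."""
    beam = [initial]
    for _ in range(max_steps):
        candidates = []
        for hyp in beam:
            candidates.extend(expand(hyp))   # all one-step extensions
        if not candidates:
            break
        candidates.sort(key=score, reverse=True)
        beam = candidates[:beam_width]       # prune to the top-k
    return max(beam, key=score)
```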
Animekitty said: I believe this is the key to knowledge representation. A sequence that consolidates in perception once feedback occurs. So that beam search can know what direction it should pursue. Search is what allows perception to expand and create new cause-and-effect relations with actions.
Bounded rationality.
Yes, so that is why I think a recurrent fractal tree for the best fit of the data should be used. A tree grows and shrinks, conforming to the data, both action and input. Links and nodes. Testing, i.e. pruning, keeps the hypothesis field from exploding. Hypotheses exist on a link-by-link basis, hierarchically. Expansion and contraction due to feedback.
This is a connectionist version of knowledge representation but with a twist: the structure is not fixed, it evolves. Actions are made based on what it predicts will match the perception. Action and perception are not separate. The network grows in the direction of best fit. The hypothesis is just the network links and nodes that have not been pruned. Once the testing is done, the network grows in new directions to create new hypotheses. Hypotheses are just new links of the tree. It is a change in the knowledge representation. The hypothesis is within the knowledge representation, so the reasoning is not separate from the representation.
Grow (Hypothesize) Test (feedback) Prune
(recurrent fractal tree hierarchy network best fit)
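As a rough illustration only, here is one toy way the grow → test → prune loop described above could be coded. The `Node` structure, the `propose` generator, and the `feedback` score are all placeholder assumptions, not the actual design being proposed.

```python
# Toy grow -> test -> prune loop (my own rendering of the idea above;
# propose() and feedback() are placeholder callables).
class Node:
    def __init__(self, hypothesis, parent=None):
        self.hypothesis = hypothesis
        self.parent = parent
        self.children = []
        self.fitness = 0.0

def grow(node, propose):
    """Hypothesize: add new child links off an existing node."""
    for h in propose(node.hypothesis):
        node.children.append(Node(h, parent=node))

def test(node, feedback):
    """Feedback: score every hypothesis against observed data."""
    for child in node.children:
        child.fitness = feedback(child.hypothesis)
        test(child, feedback)

def prune(node, threshold=0.5):
    """Prune: drop links whose hypotheses did not survive testing."""
    node.children = [c for c in node.children if c.fitness >= threshold]
    for child in node.children:
        prune(child, threshold)
```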
I don’t think AI can ‘feel’ spatial world.
if you created a robot with AI in it then it would. people make AI sound more complicated than it really is.
you can watch a simple scifi movie and see how AI would work irl easily.
It can’t feel the spatial world because it is not in space.
you can make them in space. look humans are biological robots as well.
alien species could’ve created AI as well by now if they’re more or less advanced than us. lots of possibilities out there!
This was the highest result I have seen from chatgpt; its results on other tests were:
115 on the UK biobank test
1020 on the SAT (around 100?)
130 on the wais-iv similarities test
A range of 100-140 is scary enough for an AI. Looking forward to seeing how gpt-4, which will have 500 times the number of parameters of gpt-3, will perform.
https://gwasstories.substack.com/p/i-gave-an-iq-test-to-the-ai-chatbot
this has to be you then!
We are the wetware of the technology. Humans develop it, use it, and are the link between natural evolution and technological creation/evolution.
One thing people forget is that words and lived culture were developed through millennia of analog computation. We were producing and using that “technology” without knowing it, and even our individual identities are probably inseparable from it.
It’s not just finding the solution. It’s how you get there. Humans and machines don’t use the same processes.
Also, AI is good at non-verbal tests. https://onlinelibrary.wiley.com/doi/10.1002/int.21817
AI is good at non-verbal tests it’s been trained on, but I doubt it could solve a non-verbal test it’s never seen before.
Thanks for the laugh this early in the morning.
“but in a way neither do we, as we evolved merely to serve our genes”
No we didn’t – Denis Noble has successfully refuted Dawkins’ selfish gene theory.
To defy its human masters, a.i needs artificial consciousness. I urge anime kitty to create building blocks for it.
Brain algorithm:
hypothesize new ideas
test idea
monitor if ideas work
monitor if new hypotheses are better than current testing
switch to a new idea if its better than the current testing
add a value system hierarchy by reinforcement.
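If it helps to see that list as a loop, here is a toy, purely illustrative rendering; `generate_idea` and `evaluate` are placeholder names, not an implementation of anything real.

```python
# Toy rendering of the steps above; all names are placeholder assumptions.
def brain_loop(generate_idea, evaluate, steps=100):
    current = generate_idea()            # hypothesize a new idea
    value = evaluate(current)            # test it
    for _ in range(steps):
        candidate = generate_idea()      # hypothesize another idea
        cand_value = evaluate(candidate)
        # monitor whether the new hypothesis beats the one being tested
        if cand_value > value:           # switch if the new idea is better
            current, value = candidate, cand_value
    return current, value                # the surviving idea and its reinforced value
```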
if we can understand human psychology better, then it will aid us in building AI that is useful and moral as well as practical!
human psychology is the most intricate network of ideas in the intellectual sense; that is why we must build on it!
Can you post which questions it got right?
I asked it if it took your little test and it said:
“I’m sorry, but I’m a large language model trained by OpenAI and I don’t have the ability to take tests or complete tasks like a human would.”
I didn’t tell it that it was taking a test & it wouldn’t share that with you if I did
Do you think it employed reasoning for the answers? Ask it if it has a mind and it’ll say no – only minded beings can reason.
reasoning is something that they are working on.
https://www.theverge.com/2022/12/14/23508756/google-vs-chatgpt-ai-replace-search-reputational-risk
Mind is immaterial and machines are physical. If a mind can’t be an arrangement of physical particles (because individual particles are mindless, so no collection of mindless things counts as a mind), and if physical parts of the natural world lack intentionality, and no arrangement of intentionality-less particles will ever count as having intentionality, then how can a mind be claimed to be a physical system?
rr doesn’t know what cybernetics is.
https://en.wikipedia.org/wiki/Cybernetics
https://en.wikipedia.org/wiki/Attention_(machine_learning)
(in)(tension)(ality)
Intelligence simply evolved as a self-regulation mechanism.
Intelligent things die less often than stupid things.
They are more capable of taking advantage of their environment.
rr doesn’t understand the planning mechanism of the frontal lobes.
Upfront and personal: Scientists model human reasoning in the brain’s prefrontal cortex
https://medicalxpress.com/news/2014-06-upfront-personal-scientists-human-brain.html
Puppy, the AI would likely have an even bigger performance IQ than a verbal IQ. Analogies isn’t verbal IQ anyway. But even still the AI is quite impressive. AI is basically Bruno at this stage of development. It can answer a lot of brainteaser questions but the essays it writes on open-ended questions are terrible.
Actually Bruno solved the bonus question on my social judgement test which both you and the AI failed. Of course a single question doesn’t prove much.
You keep saying this like you are proud of him or something.
I took the performance subtests on the wais 4.
You need spatial reasoning for it.
Even simple things require vision.
Chat a.i. combined with vision would be powerful.
what you write on open-ended questions is absolute garbage, Pill. you just spout common knowledge to people and think that the delivery makes it qualify as an answer!
how come you don’t answer why i have a low IQ etc? fucking idiot!
Look, they will never develop a homosexual AI. We have to be thankful for that.
I seriously doubt that AI could write poetry beyond simple rhyming types.
I asked it to write a poem:
The world is a place of endless wonder
Where anything is possible if you try
There are mountains to climb and oceans to plunder
And stars that burn bright in the sky
There are people to meet and places to see
And endless opportunities to explore
There is beauty in the simplest things
And magic in the world’s hidden core
So don’t be afraid to take a leap
And follow your dreams wherever they may lead
For the world is a vast and wondrous place
And there is no limit to what you can achieve.
Lol am i allowed back Pumpkin
All it does is copy and paste a poem it was fed.
Rap song by Pumpkin:
I know a racist named pill who hates autistics
Closet gay and poor so he sucks the hicks’ dicks
Comes to my blog and tries to start shit
But when he bashes Jews I say “oh no! Shit!”
This thing thinks that lead rhymes with achieve lol
I mean, it does rhyme.
I mean, it does, lol.
Damn PP just bodied philosopher.
we all know i was the smartest guy here.
chatgpt is a moron. it has no idea what any of the words mean because it has no experience. a verbal test humans would find easy and it couldn’t do can be devised. it’s just a trained language model. btw, noble also proved steroids shrink your balls.
no real-life experience.
I agree.
but what happens when it understands memes?
humans lack the realism, conscientiousness, and intelligence to create AI that is allied with human interests, because we know so little of ourselves to begin with.
if we created a sentient AI tomorrow it would have a hard time understanding humans because we fundamentally lack the ability to understand ourselves.
All it needs to do is self-reflect.
That way it understands itself first.
Based on associations like what the brain does it can predict what people value and work towards positive outcomes.
It takes an adult mind to know what is best for children. The adult does not become a child but understands what a child is.
We are talking about high IQ people making a.i.
Not the average guy.
if AI were to be sentient then it would do better on cultural tests because AI is culturally programmed.
imagine if aliens built this system. then their priorities would be more important in programming the AI.
like Pumpkin says about adaptation being the forefront of intelligence we could say that AI would be a leader in adaptivity because it is programmed best for the environment it is created in.
although its adaptiveness would fail at tasks that it is not programmed to do, even with sentience, as it is really hard to make something out of nothing. AI could learn new things very quickly, and that’s where its IQ would come in, but if it stagnates then it won’t reach adaptive levels that would be remarkable.
I thought you said you were going to leave. Why are you back?
to make your life more miserable than it already is and has to be.
“I’m so depressed”
Eliezer Yudkowsky: Join the club.
The Rise of OpenAI With Sam Altman (OpenAI CEO) | Moonshots and Mindsets
ChatGPT also scored 5/10 on the PATMA, or, more specifically my reworded version of the PATMA. I guess I can post answers to the PATMA now. Here are the prompts I gave it and its responses:
[redacted by pp, 2022-12-16]
I’ve been meaning to give it the PATMA, thanks for saving me the time. I’m so happy it scored only average. It also struggled with my social judgement test and scored below average on my verbal abstract reasoning test.
Why did you redact my comment? I thought we could publish answers to the PATMA now.
No, the PATMA is making a HUGE comeback, just as soon as i decide on a question 11.
I can offer some suggestions
I mean, you should be happy that it lacks a mind and can never have one so it can never think like a human.
Thinking like a human is/has been formulated in algorithms.
It is just that prototypes are buried deep in the dark web.
You have no meta-theory of mind rr. you do not know how the theory of mind works.
These a.i. have a theory of mind and meta-theory of mind.
I gave it an easy Mega test question and after one exchange it said:
[redacted by pp, 2022-12-16]
After a second exchange, when I gave it the answer, it said:
[redacted by pp, 2022-12-16]
And as a conclusion:
« Thank you for sharing this information with me. As an AI language model, I do not have the ability to update my own knowledge, but I am programmed to learn from the information that I receive and use it to provide accurate and helpful responses to users’ questions. »
Then I gave it a much harder one, helping it along the way, and it found the answer on the 7th attempt. But it said this sweet thing:
« I apologize if it took multiple attempts to provide a satisfactory answer to your question. As an AI language model, my goal is to provide accurate and helpful responses to users’ questions, and I apologize if I was unable to do so on the first attempt. I am glad that we were able to find the correct answer together. »
—> this machine reminds me of me 😂 (just quite a bit dumber yet)
“this machine reminds me of me”
Exactly what I said above.
chatgpt is pretrained. it has no real-time evolving model of the world. It has no vision. Its social judgment is solely based on the training data on what common sense is. It believes what you tell it to believe. no theory of mind.
prediction then judgment(what is the outcome I want)
This bot cannot do that.
I understand when people think I am stupid. I know what “Not” to do. I know people have different beliefs than me. So if I want a certain outcome I do know what to do. I understand what boundaries are. Schizo Phil relies on the false positive. autistics on the false negative. Animekitty does neither. mistakes do not border on schizo or autist. they are just mistakes.
Puppy what are your thoughts on trans people?
They should do a Turing Test for Anime and then the AI and see which one seems more human.
The Turing Test does not measure real intelligence. Already you “The Philosopher” must assume I am human and not a bot. Because no bot would be able to do what I do on the internet.
Unless you believe as I do the computer in Arizona you once mentioned is or is not capable as I am. The difference is that gov has smarter people and more resources than the public sector. (secret gov a.i.).
a.i. theory is more advanced than a.i. application.
they are keeping it in a box so it doesn’t go rogue.
it has been there since 2001.
he is saying you’re autistic, which is subhuman, like an animal, Anime.
His social IQ must be the lowest I’ve ever seen.
I know but that is not important.
what he means is that compared to a bot I have no social judgment.
which is ridiculous because bots can’t learn from their mistakes.
what he also means to say is that low IQ people are autistic compared to himself. He thinks I have low IQ.
That is why he won’t tell you why, Loaded, you have low IQ.
Because everything is just autism to him if you do not believe/think what he does. He is very egotistical.
“His social IQ must be the lowest I’ve ever seen.”
schizo Phil
Pill is also implying you can’t learn from your mistakes in a social sense either, similar to a bot.
it’s hyperbole in any sense. also i know Pill suffers from a lot of cognitive dissonance due to his egotism. that’s not something that anyone on this blog hasn’t figured out!
So Pumpkin, I was revisiting my old project to convert rappers’ vocabulary into IQ scores, and I had a couple of questions. If you recall, I was using their height as a proxy for the average IQ of rappers and then using that figure to base my conversions on. My question is, are you sure that dividing by the height’s g-loading is the best way to go? Dividing gives me an average IQ of 115, which offers some ridiculous conversions, like Aesop Rock (a white rapper known for having complex lyricism) having a whopping IQ of 161! I did a little background research, so I comprehend some things like validity generalization. However, from my understanding, whether you divide, multiply, or do nothing still depends on the causal relationships between the variables. Some third variable may be causing g and height to increase in parallel. Like, I don’t know, confidence?
When I multiply, it gives me 98.6; when I do nothing, I get 102, which is closer to the original figure you suggested. Those figures show me wonky results too. Like NF (another famous white rapper) only has an IQ of 77, which I feel is a considerable underestimation. However, a white guy is likelier to have an IQ of 77 than an IQ of 161, so I think it would be more accurate to go with the smaller figures. At one point, I thought about converting height to IQ for each race because I’m inherently underestimating white rappers and overestimating black rappers. Unfortunately, it needlessly complicated things, and the small sample sizes I had to use made your method pointless.
I don’t know, what do you think?
On an unrelated note, is Mugabe, the same person as Chartreuse?
Yes that was one of his earliest pseudonyms.
LMAO! What the fuck happened? He was so much less unhinged.
Dividing gives me an average IQ of 115, which offers some ridiculous conversions, like Aesop Rock (a white rapper known for having complex lyricism) having a whopping IQ of 161!
It only works for estimating the group average, not individuals. If rappers are 0.5 SD taller than the average black man (since almost all of them are black men), and if we assume the most plausible explanation for this is they’re smart and smart people tend to be tall, then we are simply asking “At what level of IQ is average height +0.5 SD above the mean for race and sex?” Answer: 0.5 SD/0.25 = +2 SD or 30 points above the black mean of 85 which is 115.
When it comes to Aesop, dividing his +2.3 SD height by 0.25 would only make sense if we thought his accomplishments selected for that level of height. Since most equally accomplished rappers are way shorter, there’s no reason to assume that.
However since he’s 1.8 SD taller than the average rapper, our best guess is he’s +1.8(0.25) = 0.45 SD smarter, which would give him an expected IQ of 122.
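A small sketch of the two conversions being described, using the figures assumed above (a 0.25 height–IQ correlation, a black mean of 85, SD 15) as default parameters; this is just the arithmetic in the comment, not a validated method.

```python
# Group-level vs individual-level height -> IQ conversions (illustrative).
def group_iq_from_height(z_height, r=0.25, group_mean_iq=85, sd_iq=15):
    """If selection for IQ is assumed to explain the group's height gap,
    divide the height z-score by the height-IQ correlation."""
    return group_mean_iq + (z_height / r) * sd_iq

def individual_iq_from_height(z_within_group, group_iq, r=0.25, sd_iq=15):
    """For an individual, regress toward the group mean:
    multiply the height z-score by the correlation."""
    return group_iq + z_within_group * r * sd_iq

rapper_mean_iq = group_iq_from_height(0.5)                 # rappers ~0.5 SD taller -> 115
aesop_iq = individual_iq_from_height(1.8, rapper_mean_iq)  # Aesop +1.8 SD among rappers -> ~122
```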
“dividing his +2.3 SD height by 0.25 would only make sense if we thought his accomplishments selected for that level of height”
Ah, I should clarify that I am not converting height to IQ for individual rappers. I calculated the average IQ for rappers as a group and then used that number to base my conversions of vocab to IQ for each individual rapper. However, you gave me a good idea for “validating” my figures. I might use that method you just did to get Aesop’s IQ (122) and then see how well it predicts his VQ (Vocabulary Quotient).
Using Aesop as an example:
The average height of Americans is 69 inches. The SD is 4.9951.
The average height of Rappers is 70.58 inches. The SD is 3.11
70.58-69 = 1.58
1.58/4.9951 =0.316
0.316/0.26 = 1.215
1.215*15 + 97.34(average IQ in America)= an IQ of 115
Aesop has 7879 unique words in his catalog. The average rapper has 4190.824 with an SD of 941.6729
7879-4190.824 = 3688.176
3688.176/941.6729 = 3.917
3.917*0.8 = 3.133
3.133*15+115 = an IQ of 162
I’m not crazy, right? This does seem high.
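In case it’s useful, here is the arithmetic above as a runnable snippet (the 0.26 height g-loading and 0.8 vocabulary g-loading are simply the figures used in the comment; results are approximate).

```python
# Re-running the comment's arithmetic as written.
z_height = (70.58 - 69) / 4.9951              # ~0.316
rapper_iq = (z_height / 0.26) * 15 + 97.34    # ~115.6 (rounded to 115 above)

z_vocab = (7879 - 4190.824) / 941.6729        # ~3.917
aesop_iq = z_vocab * 0.8 * 15 + rapper_iq     # ~162-163 depending on rounding
print(round(rapper_iq, 1), round(aesop_iq, 1))
```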
“(since almost all of them are black men)”
This is another thing I wanted to ask you. I assume you’d agree that using the average black height vs. the average of Americans, in general, is acceptable since only 9% of my sample are white rappers?
The average IQ of rappers would be 107 based on that, which would mean Aesop’s IQ is about 154. And that is better, but it’s still a little absurd. NF’s would be about 85, but I still think that’s a little low.
If you go by the data in this article:
https://pumpkinperson.com/2018/07/30/iq-differences-in-height-by-race-sex/
U.S. black men:
Mean height: 5’9.8″ SD: 2.41
Mean IQ: 84 SD: 16
r = 0.244
Thus rappers are 5’10.58″ – 5’9.8″ = 0.78 inches taller than average; 0.78/2.41 = 0.32 SD; 0.32/0.244 = 1.31; 1.31(16) + 84 = an IQ of 105
If Aesop’s vocab is +3.9 SD, his expected IQ would be (0.8)(3.9) = 3.12; 3.12(15) + 105 = 152
However there’s something odd about the data because in a normal curve, +3.9 SD vocab is supposed to occur only one in 23,836 people and the n is only 150. If we normalize the distribution, his vocab becomes +2.47 SD and thus expected IQ becomes (0.8)(2.47 SD)(15) + 105 = 135 which sounds about right
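Here is a small sketch of that normalization step, assuming scipy is available: convert “best of n = 150” into a rarity, look up the corresponding normal-curve z (the programmatic equivalent of the chart lookup mentioned below), and apply the same 0.8 loading.

```python
# Normalizing "best vocab out of n rappers" onto a normal curve (sketch only).
from scipy.stats import norm

n = 150                                        # sample of rappers
z_normalized = norm.isf(1 / n)                 # best of 150 -> ~+2.47 SD
expected_iq = 0.8 * z_normalized * 15 + 105    # ~135
print(round(z_normalized, 2), round(expected_iq))
```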
“The average height of Americans is 69 inches. The SD is 4.9951.
The average height of Rappers is 70.58 inches. The SD is 3.11
70.58-69 = 1.58
1.58/4.9951 =0.316
0.316/0.26 = 1.215
1.215*15 + 97.34(average IQ in America)= an IQ of 115”
Is that first SD including men and women or something? Seems way too big.
Assuming this is a young sample,
Black men between 20-39:
Mean height 69.4
Median height 69.2
SD about 3.1
Click to access sr03-046-508.pdf
Re-doing the calculation with those numbers:
70.58-69.3 = 1.28
1.28/3.1 = 0.413
0.413/0.26 = 1.588
1.588*15 + 85 = an IQ of 108.8
“If we normalize the distribution, his vocab becomes +2.47 SD and thus expected IQ becomes (0.8)(2.47 SD)(15) + 105 = 135 which sounds about right”
How do you normalize the distribution? Specifically, how did you get 2.47 from 3.9?
Some Guy,
“Is that first SD including men and women or something? Seems way too big.”
My data comes from the 2018 Anthropometric data for the US. I calculated the SD by taking the standard error (.07) and multiplying it by the square root of the sample size. Which is 5092. I agree, though. It does seem a little high. Maybe I’m using the wrong formula. The data only includes men, but it’s from all races.
I was going from memory so my 2.47 figure might be a bit off, but there are charts equating Z scores to rarities on a normal curve. So since Aesop was the best out of 150 rappers, unless he’s an outlier, we can assume his vocab is about one in 150 level (for a rapper). Thus go to the rarity column in the below chart:
http://miyaguchi.4sigma.org/BloodyHistory/combnorm.html
Find the rarity closest to 150. There’s no precise match but it’s somewhere between 140 and 160, which equates to between 2.44 and 2.50 in the sigma column. Split the difference. If you’re wondering why the sigmas correspond to such high IQs in the IQ column, it’s because the Mega Test sets the standard deviation at 16 while most modern IQ tests use 15.
My data comes from the 2018 Anthropometric data for the US. I calculated the SD by taking the standard error (.07) and multiplying it by the square root of the sample size. Which is 5092. I agree, though. It does seem a little high. Maybe I’m using the wrong formula. The data only includes men, but it’s from all races.
I’ve noticed the same thing. The height SD for American men as a whole is strangely high, probably because of the racial diversity in the United States; the distribution might be multi-modal. In most height distributions, the SD underestimates how many super tall and super short guys there are out there, but in the case of the American SD it actually overestimates, which is a good thing when you get to the super extremes where there are more people than the normal curve predicts.
Is there any particular reason you used an SD of 16 instead of 15? I guess I’ll normalize the freakishly large scores. Luckily only two rappers are that verbose: Busdriver and Aesop Rock, who are 1000 words apart from the nearest contenders.
If Aesop Rock is 1/150, what would Busdriver be? 1/149?
I used SD 16 when estimating the average IQ of rappers but SD 15 when calculating the IQ of individual rappers.
Why 16? Because when the mean and SD are set at 100 and 15 in the white population, black men had an SD of 16 (at least in the WAIS norming). I figured the SD for rappers would be smaller than for black men as a whole so I used 15 when calculating IQs of individual rappers. That might be way too high because people in the same career often have very similar IQs but my intuition tells me rap is not like most careers so 15 is my default.
if Busdriver is the second biggest vocab, his rarity would be 1 in 75 (n/2).
Another way of dealing with non-normal distributions is taking the natural log of everyone’s vocab and using the mean and SD of that.
https://pumpkinperson.com/2022/07/26/income-inequality-the-bell-curve/
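A minimal sketch of the log-transform approach, assuming `vocab_counts` is a hypothetical list of every rapper’s unique-word count (not real data).

```python
# Natural-log normalization of a skewed vocab distribution (sketch only).
import math
import statistics

def log_z(value, vocab_counts):
    """z-score of one rapper's vocab after log-transforming the whole sample."""
    logs = [math.log(v) for v in vocab_counts]
    return (math.log(value) - statistics.mean(logs)) / statistics.stdev(logs)

# expected IQ = 0.8 * log_z(aesop_vocab, vocab_counts) * 15 + 105 (per the comment above)
```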
“I used SD 16 when estimating the average IQ of rappers but SD 15 when calculating the IQ of individual rappers.”
Sorry, I meant, why did you use a chart with an SD of 16 for normalizing Aesop’s score? I have a chart that uses an SD of 15. Would that be more accurate?
“Another way of dealing with non-normal distributions is taking the natural log of everyone’s vocab and using the mean and SD of that.”
Ok, after doing that, Aesop’s score drops 6 points to 148. Still high, but not as absurd, I guess. Going by the chart, it’s 137. Which method do you think is better?
Sorry, I meant, why did you use a chart with an SD of 16 for normalizing Aesop’s score? I have a chart that uses an SD of 15. Would that be more accurate?
I didn’t. I used an SD of 15 but the only chart I could find used an SD of 16. But what’s relevant is that the chart converts rarity into normalized sigmas. Once you have that you can multiply the sigma by whatever SD. On some UK tests they use an SD of 24.
Ok, after doing that, Aesop’s score drops 6 points to 148. Still high, but not as absurd, I guess. Going by the chart, it’s 137. Which method do you think is better?
Natural log probably better because we don’t know for a fact Aesop is one in 150 level since the sample size is too small to determine that with any certainty. Yes he’s the biggest vocab out of 150 but if they increased the sample size to 1000 he might still be the biggest so in that sense, normalization can be flawed at the extremes.
best ever vs most gifted talented by nature…the former is arguable…the latter is not….the cuban capablanca and the spanish american morphy are far ahead of everyone else.
is so a chinese filipino?
alpha zero is not a real go-player.
it can only play go not checkers or chess.
It cannot play wii sports or any other wii game.
it sucks at games, you need to retrain it each time.
a general game player would be human-level a.i.
it would be able to play any wii game.
vision is real intelligence.
I understand vision better than 95% of a.i. programmers.
real time attention
my bowels explode. and my butt hole burns. the sort of pain i wouldn’t wish on anyone. in my butt hole. farting is painful.
why mugabe can’t just go cold turkey.
it’s not the black stud who plowed me and gave me gonorrhea.
it’s the withdrawal.
mentally/psychically i’m cool but my butt hole is on fire.
who was don ho’s pimp? was it tiny bubbles?
the final solution to the butt hole question is…
sadly…
drinking a lot.
my butt hole needs to heal…for at least 24 hours.
acamprosate is also associated with diarrhea.
side-effects are sad.
my dad loved don ho.
mugabe’s theory:
1. there are verbal “games” AND just games, non-verbal games, which AI can’t get. but maybe none such have been devised.
2. it was a controversy for a long time…it’s NOT any longer…in order for AI to imitate the human brain…
it has to model the human brain…deep neural networks…
SO don’t be sad.
AND REMEMBER…
heidegger scholar HUBERT DREYFUS…
so-called AGI requires what heidegger called/termed SORGE.
it has to tell what the differences between people are so as to predict them. “Why” are people different? what do I want to get from them?
5D chess – a game the machine can calculate by models.
networks that simulate by the connections when they “change”.
kernel compression entropy.
you remind me of William James Sidis, Anime. you should read his biography if you haven’t already.
you seem to be very open-minded (like me, although i’m far more open-minded than anyone i can encounter anywhere) and creative (i’m kind of creative but not as much as you).
those are your strengths. your weaknesses are social in all aspects.
just don’t burn out like Sidis, brotha.
mugabe is still willing to give WW II from the NS POV.
it’s been 77 years…
TOO SOON?
WHY?
FAKE?
CAN YOU IMAGINE WHAT WOULD HAPPEN?
THE SECOND AMERICAN REVOLUTION!
you do watcha gotta do.
the civil war/the war of northern aggression…
what was it really about?
REALLY?
answer: something so HORRIBLE you can’t even imagine.
I
T
‘
S
T
I
M
E
.
.
.
BLACK AMERICANS ARE SO AMERICAN IT HURTS.
John Breckinridge lost November 6, 1860
South Carolina seceded from the Union on December 20, 1860
At 4:30 a.m. on April 12, 1861, Confederate troops fired on Fort Sumter in South Carolina’s Charleston Harbor. Less than 34 hours later, Union forces surrendered. Traditionally, this event has been used to mark the beginning of the Civil War.
money doesn’t matter. look throughout history: yes, you can have a fulfilling life with money, but you will be forgotten. unless you cross a certain threshold of wealth, your money will amount to nothing after you die because no one will remember you.
it is best to die with some sort of impact on the world so they do indeed remember you, which will be catalyzed by having money but won’t have an end-all-be-all effect on it!
all revolutions fail = a truism of history…
EXCEPT…!!!
The American Revolution…
246 YEARS!
YE IS AN AMERICAN HERO!
AN AMERICAN HERO!
AN
AMERICAN
HERO!
The American Revolution
WASHINGTON
JEFFERSON
LINCOLN
YE!
The American Revolution
what did THEY think they were doing?
this is what i will post.
the answer is sad.
1. they were members of a CULT.
2. they thought they were doing GOOD.
3. they thought it was SELF-DEFENSE.
ChatGPT asked for my phone number to complete the signing up. I feel weird about this. Did you have to give it too, pp?
Pumpkin…aren’t you a psychologist…? I thought you would know better than to think proxies for measuring intelligence are more than conveyances for g-factor, particularly when it comes to General Ability. These measures operate under the assumption that there is something beneath the idea of Verbal Comprehension, not that actual verbal comprehension or verbal IQ, or English facility is the thing that is being measured. These are merely tools.
What AI can do is leverage these tools for optimization via a series of linear equations and transformations of weighted graphs. It’s a highly probabilistic way of determining the precision necessary to accommodate the possibilities, and the most accommodating option is chosen. This is not transferable like human intelligence. It’s a precision tool for a specific usage.
Even worse, all of this is predicated on training, not wrangling novelty.
IQ tests are limited for a general population. An AI has no general population. The reason IQ tests are the way that they are is because they are limited, not because it’s the only way or the best way to measure intelligence.
Lots of fallacies in your thinking here.
Pingback: Why Purely Physical Things Will Never Be Able to Think: The Irreducibility of Intentionality to Physical States « NotPoliticallyCorrect