One of the longest-standing assumptions about the nature of human intelligence has just been seriously challenged.
According to the traditional “investment” theory, intelligence can be classified into two main categories: fluid and crystallized. Differences in fluid intelligence are thought to reflect novel, on-the-spot reasoning, whereas differences in crystallized intelligence are thought to reflect previously acquired knowledge and skills. According to this theory, crystallized intelligence develops through the investment of fluid intelligence in a particular body of knowledge.
As far as genetics is concerned, this story has a very clear prediction: In the general population– in which people differ in their educational experiences– the heritability of crystallized intelligence is expected to be lower than the heritability of fluid intelligence. This traditional theory assumes that fluid intelligence is heavily influenced by genes and relatively fixed, whereas crystallized intelligence is more heavily dependent on acquired skills and learning opportunities.
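To make that prediction concrete: twin studies typically estimate heritability by comparing identical (MZ) and fraternal (DZ) twin correlations. Here is a minimal sketch in Python using Falconer's classic formula, with made-up correlations chosen to match the traditional prediction:

```python
# Falconer's formula: a classic twin-study estimate of heritability,
# h2 = 2 * (r_mz - r_dz), where r_mz and r_dz are the test-score
# correlations for identical (MZ) and fraternal (DZ) twin pairs.
# The correlations below are invented for illustration.

def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Heritability estimate from MZ and DZ twin correlations."""
    return 2 * (r_mz - r_dz)

# What the investment theory would predict (hypothetical numbers):
print(falconer_h2(r_mz=0.80, r_dz=0.40))  # fluid test:        0.8
print(falconer_h2(r_mz=0.70, r_dz=0.45))  # crystallized test: 0.5
```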
But is this story really true?
In a new study, Kees-Jan Kan and colleagues analyzed the results of 23 independent twin studies conducted with representative samples, yielding a total sample of 7,852 people. They investigated how heritability coefficients vary across specific cognitive abilities. Importantly, they assessed the “cultural load” of various cognitive abilities by taking the average percentage of test items that were adjusted when the test was adapted for use in 13 different countries.
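In other words, the cultural-load measure is simply an average over country adaptations. A minimal sketch of that computation (the per-country percentages below are invented; the study used actual adaptation data):

```python
# Cultural load of a subtest: the average percentage of items that had
# to be adjusted when the test was adapted for use in other countries.
# The per-country percentages below are invented for illustration.

def cultural_load(percent_items_adjusted: list[float]) -> float:
    """Mean percentage of adjusted items across country adaptations."""
    return sum(percent_items_adjusted) / len(percent_items_adjusted)

# Hypothetical adaptation data for one subtest across 13 countries:
vocabulary = [90, 85, 95, 80, 100, 88, 92, 85, 90, 95, 87, 93, 89]
print(f"{cultural_load(vocabulary):.1f}%")  # ~89.9% for these toy numbers
```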
For instance, here is the cultural load of the Wechsler Intelligence Test subtests:
The study yielded two main findings. First, in samples of both adults and children, the greater a test’s cultural load, the more strongly the test was associated with IQ:*
This finding is actually quite striking, and suggests that the extent to which a test of cognitive ability correlates with IQ is the extent to which it reflects societal demands, not cognitive demands.
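Statistically, this first finding amounts to a rank correlation, across subtests, between cultural load and correlation with IQ. A toy version of the analysis, with fabricated subtest values:

```python
# First finding, as a toy analysis: across subtests, correlate cultural
# load with each test's correlation with IQ. All values are fabricated.
from scipy.stats import spearmanr

#                Vocab  Info  Spell  Arith  Matrix  Blocks  (hypothetical)
cultural_load  = [0.90, 0.85, 0.80,  0.40,  0.30,   0.25]
iq_correlation = [0.80, 0.70, 0.75,  0.60,  0.50,   0.55]

rho, p = spearmanr(cultural_load, iq_correlation)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")  # rho ≈ 0.89 for these numbers
```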
Second, in adults, the researchers found that the higher the heritability of a cognitive test, the more the test depended on culture. The effects were medium to large and statistically significant:
As you can see above, highly culturally loaded tests such as Vocabulary, Spelling, and Information had relatively high heritability coefficients, and were also highly related to IQ. As the researchers note, this finding “demands explanation”, since it’s inconsistent with the traditional investment story. What’s going on?
Why did the most culturally-loaded tests have the highest heritability coefficients?
One possibility is that Western society is a homogeneous learning environment– school systems are all the same. Everyone has the same educational experiences. The only thing that varies is cognitive ability. Right. Not likely.
The next possibility is that the traditional investment theory is correct, and crystallized intelligence (e.g., vocabulary, general knowledge) is more cognitively demanding than solving even the most complex abstract reasoning tests. For this to be true, tests such as vocabulary would have to draw more heavily on IQ than tests of fluid intelligence do. That seems unlikely: it’s not clear why tests such as vocabulary would place a higher cognitive demand on test takers than tests that are less culturally loaded but more cognitively complex (e.g., tests of abstract reasoning). Also, this theory doesn’t explain why the heritability of IQ increases linearly from childhood to young adulthood.
Instead, the best explanation may require abandoning some long-held assumptions in the field. The researchers argue that their findings are best understood in terms of genotype-environment covariance, in which cognitive abilities and knowledge dynamically feed off each other. Those with a proclivity to engage in cognitive complexity will tend to seek out intellectually demanding environments. As they develop higher levels of cognitive ability, they will also tend to achieve relatively higher levels of knowledge. More knowledge makes it more likely that they will eventually end up in even more cognitively demanding environments, which will facilitate the development of an even wider range of knowledge and skills. According to Kees-Jan Kan and colleagues, societal demands influence the development and interaction of multiple cognitive abilities and bodies of knowledge, causing them to correlate positively with one another and giving rise to the general intelligence factor.
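A toy simulation can make this feedback loop concrete: give people a stable genetic proclivity, let them select environments that match their current ability, and let those environments feed back into ability. The model below is my own illustrative sketch, not the authors’ formal analysis:

```python
# Toy model of genotype-environment covariance: a genetic proclivity
# nudges people toward richer learning environments, which feed back
# into ability. Purely illustrative; not the authors' model.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
genes = rng.normal(size=n)        # stable genetic proclivity
ability = genes.copy()            # ability starts out as genes alone

for _ in range(20):               # iterate over "years" of development
    # Niche-picking: people seek environments matching current ability.
    environment = 0.5 * ability + rng.normal(scale=0.5, size=n)
    # Those environments, plus genes, shape the next level of ability.
    ability = genes + 0.5 * environment

# Genes and environments now covary substantially, even though the
# environments also contain plenty of non-genetic (random) influence.
print(f"gene-environment correlation: {np.corrcoef(genes, environment)[0, 1]:.2f}")
```

Even in this crude model, genes and environments end up substantially correlated, so a standard twin design would count the environmental pathway as “genetic.”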
To be clear: these findings do not mean that differences in intelligence are entirely determined by culture. Numerous researchers have found that the structure of cognitive abilities is strongly influenced by genes (although we haven’t the foggiest idea which genes are reliably important). What these findings do suggest is that there is a much greater role of culture, education, and experience in the development of intelligence than mainstream theories of intelligence have assumed. Behavioral genetics researchers– who parse out genetic and environmental sources of variation– have often operated on the assumption that genotype and environment are independent and do not covary. These findings suggest they very much do.
There’s one more really important implication of these findings, which I’d be remiss if I didn’t mention.
Black-White Differences in IQ Test Scores
In his analysis of the US Army data, the British psychometrician Charles Spearman noticed that the more a test correlated with IQ, the larger the black-white difference on that test. Years later, Arthur Jensen came up with a full-fledged theory he referred to as “Spearman’s hypothesis”: the magnitude of the black-white difference on a test of cognitive ability is directly proportional to that test’s correlation with IQ. In a controversial paper in 2005, Jensen teamed up with J. Philippe Rushton to make the case that this pattern proves that black-white differences must be genetic in origin.
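The statistical procedure behind Spearman’s hypothesis is what Jensen called the “method of correlated vectors”: across subtests, correlate each test’s correlation with IQ against the size of the group difference on that test. A sketch with invented numbers:

```python
# Jensen's "method of correlated vectors": across subtests, correlate
# the correlation-with-IQ vector with the group-difference vector.
# Every number below is invented, purely to show the arithmetic.
import numpy as np

iq_correlations   = np.array([0.80, 0.75, 0.70, 0.60, 0.55, 0.50])
group_differences = np.array([1.00, 0.90, 0.85, 0.70, 0.60, 0.55])  # in SD units

r = np.corrcoef(iq_correlations, group_differences)[0, 1]
print(f"r = {r:.2f}")  # close to 1 for these toy vectors
```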
But these recent findings by Kees-Jan Kan and colleagues suggest just the opposite: The bigger the difference in cognitive ability between blacks and whites, the more the difference is determined by cultural influences.**
As Kees-Jan Kan and colleagues note, their findings “shed new light on the long-standing nature-versus-nurture debate.” Of course, this study is not the last word on this topic. There certainly needs to be much more research looking at the crucial role of genotype-environment covariance in the development of cognitive ability.
But at the very least, these findings should make you think twice about the meaning of the phrase “heritability of intelligence.” Instead of an index of how “genetic” an IQ test is, it’s more likely that in Western society– where learning opportunities differ so drastically from person to person– heritability is telling you just how much the test is influenced by culture.
© 2013 Scott Barry Kaufman, All Rights Reserved
* Throughout this post, whenever I use the phrase “IQ,” I am referring to the general intelligence factor: technically defined as the first factor derived from a factor analysis of a diverse battery of cognitive tests, administered to a diverse sample of the general population, explaining the largest source of variance in the dataset (typically around 50 percent of the variance).
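For readers who want to see what “first factor” means operationally, here is a minimal principal-components-style sketch (a simplification of formal factor analysis) on an invented correlation matrix of five subtests:

```python
# What "first factor" means operationally: the direction of largest
# shared variance in a correlation matrix of test scores. The matrix
# below is invented; real analyses use large, diverse batteries.
import numpy as np

R = np.array([
    [1.00, 0.60, 0.55, 0.50, 0.45],
    [0.60, 1.00, 0.50, 0.45, 0.40],
    [0.55, 0.50, 1.00, 0.55, 0.50],
    [0.50, 0.45, 0.55, 1.00, 0.45],
    [0.45, 0.40, 0.50, 0.45, 1.00],
])

eigenvalues, eigenvectors = np.linalg.eigh(R)   # eigenvalues in ascending order
g_eigenvalue = eigenvalues[-1]
g_loadings = np.sqrt(g_eigenvalue) * np.abs(eigenvectors[:, -1])

print(f"variance explained by first factor: {g_eigenvalue / len(R):.0%}")
print("loadings:", np.round(g_loadings, 2))
```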
** For data showing that Black-White differences in cognitive ability are largest on the highly culture-dependent tests, I highly recommend reading Chapter 4 of Kees-Jan Kan’s doctoral dissertation, “The Nature of Nurture: The Role of Gene-Environment Interplay in the Development of Intelligence.”
Acknowledgement: thanks to Rogier Kievit for bringing the article to my attention, and to Kees-Jan Kan for his kind assistance reviewing an earlier draft of this post.
This article originally appeared at Scientific American