On the Limits of Human Intelligence

IQ as a concept (and specifically “g”) and the psychometric instruments used to quantify it have provoked fierce political and scientific debate for decades. The political debate tends to be heated, emotional, and centered on the inescapably inegalitarian societal implications of crafting policy (education, public health, etc.) in light of a wide spectrum of IQ scores unevenly distributed through the population. The scientific debate tends to focus on defining or identifying the parameters of intelligence; the relationships among physical brain structure, cognition, and human consciousness; heritability; neuroplasticity; the accuracy of psychometric instruments; and more specialized topics beyond my ken.

What is seldom disputed by scientists is that large differences in IQ are significant and that a very, very small number of individuals – the top 1% to 0.0001% of the bell curve – have unusually gifted and varied cognitive capacities. It is technically difficult to measure such extreme outliers with accuracy, as their intelligence might very well exceed the parameters of the test. Stephen Hawking’s IQ is frequently estimated in the media to be in the 160s and Albert Einstein’s in the 150s, but those are speculative guesses. Most of the people touted as being “smarter than Einstein” on the strength of astronomical IQ scores, like Marilyn vos Savant or Christopher Langan, do not (for whatever reason) produce any tangible intellectual work comparable to that of Stephen Hawking, much less Albert Einstein. Maybe we really ought to use that cultural comparison with greater humility until there’s a better empirical basis for it 🙂

[If you are curious what the extremely smart do think about, browse the Noesis journals of The Mega Society]

It is being asserted that any evolutionary improvements to human intelligence are apt to come with (presumably undesired) tradeoffs or deficits – that we are “bumping up against” our “evolutionary limits”. I’m not qualified to evaluate that hypothesis, but its assumptions are not stable, as advanced societies are already radically changing their cognitive environments as well as approaching the ability to directly manipulate our genetic legacy. Whether or not this amounts to Kurzweil’s “singularity” matters less than the fact that these things change the “natural” probability of our evolutionary trajectory. A one-in-a-billion random genetic mutation is no longer so if you can design it in a lab.

How much higher could we push cognition? Or could we expand the existing range by adding a new dimension of senses?

Why would a dictatorship unbound by ethical scruples not do this, even at considerable cost to the individual subjects of such experiments, in order to systematically harness the results of “a genetic arms race” for the benefit of the state? Though a growing body of supersmart people would eventually become difficult to control if your secret police were not intelligent enough to comprehend what they were doing.

The potential economic rewards of increasing human intelligence would inevitably outweigh any risk assessment or ethical constraints.

3 comments on this post.
  1. Pedro:

    G is thought by many to be heritable, and regression to the mean is a bad mofo…so if your dictator can make a super brain…mediocrity is still going to demand its due, no?

  2. Chris:

    There are also issues like memory retention, and the ability to make intuitive (correct) choices, which are not necessarily the same as raw cognitive (processing) power. Soon-to-arrive technology will radically change memory retention (Google Glass is essentially a backup visual memory if it’s used right), and we’ve already hugely changed how we retain data due to our personal devices (can you remember the phone number of anyone you’ve met in the last year?).
     
    Darpa’s programmes on cognitive enhancement will soon start to seep out into the public domain – indeed, already have done.
     
    One thing which I personally think will be defining about many of these technologies is that they are already in reach of the everyday man on the street. Which should ensure that, in the developed world at least, the technology is swiftly distributed and doesn’t bottleneck too much. People will work around legislative attempts to constrain this development, assuming any arrive early enough to prevent it.
     
    Fascinating topic, thanks for writing about it.

  3. Another Chris:

    Zen, if you’d like to read further about this sort of thing, I can’t recommend Steve Hsu’s blog strongly enough (linked). He’s a physicist at the University of Oregon, and he actually has a sideline working in some capacity for the Beijing Genomics Institute. A little googling can find you some slides he put together on “g and genomics” that are pretty interesting.
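Pedro’s point about regression to the mean can be made concrete with the standard breeder’s-equation approximation from quantitative genetics: the expected offspring value regresses toward the population mean in proportion to heritability. A minimal sketch, where the mean of 100, the heritability of 0.8, and the parental scores are all illustrative assumptions, not measured values:

```python
def expected_child_iq(midparent_iq, mean=100.0, heritability=0.8):
    """Expected child IQ under the breeder's-equation approximation:
    E[child] = mean + h^2 * (midparent - mean).
    The child regresses toward the population mean; heritability (h^2)
    controls how much of the parental deviation is retained."""
    return mean + heritability * (midparent_iq - mean)

# Two 160-IQ parents: the expected child falls well short of 160.
print(expected_child_iq(160))  # 148.0
```

So even a dictator’s engineered “super brains” would, on average, produce somewhat less exceptional children generation over generation, absent continued intervention.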