Theories of Intelligence

The concept of intelligence and the tests developed to measure it have been considered one of psychology’s greatest contributions to society (Hagen, 2007). However, the theories that attempt to explain intelligence are not all alike. They have emerged from different areas within psychology, and have emphasized different aspects of human performance. They have not necessarily agreed with each other concerning the number of abilities that constitute intelligence, or the organization of these abilities. Furthermore, the theories have evolved over time.

In this chapter I will review a number of the major theories of intelligence. For purposes of clearer exposition, I will group them into four major theory types: (1) psychometric theories; (2) cognitive theories; (3) cognitive-contextual theories; and (4) biological theories. As you will see, after 100 years of study, the concept of intelligence is still a subject of debate among those who attempt to understand it.

Psychometric theories of intelligence are based upon the study of individual differences: in particular, individual differences in performance on tests that involve some cognitive component. In a typical investigation using this approach, a large number of people are administered a number of different tests of cognitive ability (e.g., vocabulary, number series, perceptual speed, general knowledge, analogies, etc.). These tests are scored, and the test scores are intercorrelated. The resulting correlation matrix can be further analyzed using mathematical techniques, such as factor analysis, to find underlying (i.e., latent) dimensions of cognitive ability (e.g., verbal ability, reasoning, etc.). These underlying dimensions usually form the basis of the resulting theory of intelligence.
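
To make this workflow concrete, here is a minimal sketch in Python. The sample size, the six hypothetical tests, and the choice of two factors are all illustrative, and the bare principal-components decomposition stands in for the dedicated factor-analysis routines real studies use.

```python
import numpy as np

# Hypothetical scores for 500 examinees on 6 cognitive tests
# (vocabulary, analogies, general knowledge, number series, arithmetic,
# perceptual speed): each test reflects a shared ability plus noise.
rng = np.random.default_rng(0)
shared = rng.normal(size=(500, 1))
scores = shared + rng.normal(size=(500, 6))

# 1. Intercorrelate the tests.
R = np.corrcoef(scores, rowvar=False)            # 6 x 6 correlation matrix

# 2. Factor the correlation matrix (a bare principal-components
#    decomposition; real studies use maximum-likelihood or
#    principal-axis factoring from dedicated packages).
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]                # largest eigenvalues first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# 3. Loadings of each test on the first two candidate latent dimensions.
loadings = eigvecs[:, :2] * np.sqrt(eigvals[:2])
print(loadings.round(2))
```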

It might seem that this approach to defining intelligence would lead to a single outcome on which all investigators could agree. However, nothing could be farther from the historical truth. This is due to a number of factors that were not immediately apparent, especially to the earlier investigators in the field.

First, the results of such an approach depend crucially on the sampling of tests used, and the sampling of individuals selected to perform the tests. Different selections of tests will lead to the uncovering of different sets of abilities. Further, different selections of individuals may also result in different abilities being discovered. This is because the technique depends upon individual differences in performance being present. The same tests given to a broad sample of the public will more readily show a profile of abilities than if these tests are given to Ivy League college students. The Ivy League students will have a restricted range of performance (due to ceiling effects), which will reduce the correlations among the tests. Also, similar tests given to individuals of different ages may reveal somewhat different profiles of abilities, as the distribution of individual differences in performance may change as a function of development.
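
The restriction-of-range point can be illustrated with a small simulation; the numbers are invented, and selecting the top scorers on one test is a crude stand-in for admission to a selective college.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Two ability tests that share a common factor in the full population.
g = rng.normal(size=n)
test_a = g + rng.normal(scale=1.0, size=n)
test_b = g + rng.normal(scale=1.0, size=n)

r_full = np.corrcoef(test_a, test_b)[0, 1]

# Selective-admission style restriction: keep only the top 10% on test_a.
cutoff = np.quantile(test_a, 0.90)
keep = test_a >= cutoff
r_restricted = np.corrcoef(test_a[keep], test_b[keep])[0, 1]

print(f"full sample r       = {r_full:.2f}")        # roughly .50 here
print(f"restricted sample r = {r_restricted:.2f}")  # noticeably smaller
```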

Second, the results of the psychometric approach are, to some degree, intertwined with the mathematical techniques (i.e., factor analysis) used to analyze the correlational data. While factor analytic techniques were used to suggest the structure of mental abilities, some of the uncovered structure was the result of the technique rather than the data to which the technique was applied. When applied to non-cognitive data—or worse yet, random data—the techniques suggested similar structures to those obtained when applied to intellectual data. Once again, early investigators using the psychometric approach were less aware of this confound than were their later counterparts.

I now begin by describing a number of the psychometric theories that have been proposed for intelligence. I will follow the theories in historical order, because some of the later theories were, in fact, reactions to theories that had been proposed earlier.

English psychologist Charles Spearman was one of the first to develop a theory of intelligence based upon psychometrics. Spearman (1904a) had been critical of earlier correlational studies of intellectual performance, noting that the relationships they reported between cognitive measures underestimated the true relationships, because they failed to take into account the unreliability of the measures themselves (Brody & Brody, 1976). Spearman (1904b) performed his own experiment, in which he collected three measures of sensory discrimination (discrimination of pitches, shades of gray, and weights) and four measures of “intelligence” (school achievement, school achievement corrected for age, teachers’ impressions of students, and “common sense” as evaluated by an interview). He then calculated the intercorrelations between the sensory measures (an average of 0.55), between the intelligence measures (an average of 0.25), and between the sensory measures and the intelligence measures (an average of 0.38) (Gardner & Clark, 1992). Spearman then made an assumption: that the measures of sensory discrimination and the measures of intelligence should, respectively, be perfectly intercorrelated, were it not for unreliability in the measures themselves. Using this assumption, he corrected the correlation between the measures of sensory discrimination and the measures of intelligence for unreliability, and found that the corrected intercorrelation was 1.00 (Brody & Brody, 1976)! This calculation was almost certainly an overcorrection for unreliability, but it led him to conclude “that all branches of intellectual activity have in common one fundamental function” (cited in Wiseman, 1967, pp.56–57). Spearman termed this the “Universal Unity of Intellective Functions.”
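
Using the averages reported above, Spearman's correction can be reconstructed in a few lines, on the assumption that the within-set average intercorrelations serve as the reliability estimates in the classical correction-for-attenuation formula.

```python
import math

# Averages reported in the text (Spearman, 1904b):
r_sensory   = 0.55   # average intercorrelation among sensory measures
r_intellect = 0.25   # average intercorrelation among "intelligence" measures
r_cross     = 0.38   # average correlation between the two sets

# Classical correction for attenuation:
#   r_corrected = r_observed / sqrt(reliability_x * reliability_y)
# treating the within-set averages as the reliability estimates.
r_corrected = r_cross / math.sqrt(r_sensory * r_intellect)
print(round(r_corrected, 2))   # about 1.02, i.e. effectively 1.00
```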

The Universal Unity of Intellective Functions forms the historical beginning of Spearman’s two factor theory of intelligence (e.g., Spearman, 1927). According to the two factor theory, performance on any intellectual task is determined by two factors: g, or general intelligence, and s, a specific ability related to the particular task in question. The g factor is necessary for all intellectual tasks, though different tasks may call upon g in differing degrees. For instance, Spearman (1927) states, “At one extreme lay the talent for the classics, where the ratio of g to that of s was rated to be as much as 15 to 1. At the other extreme was the talent for music, where the ratio was only 1 to 4” (p. 75). The degree to which g is responsible for test performance is sometimes referred to as the “g loading” of a test.

It is fairly clear that of the two factors, g is the interesting one. There is only one g, but there is a separate s for every imaginable task. Furthermore, people can differ in the amount of g they possess. Thus, some students may have greater general intelligence than others.

But what is g? Spearman was not very clear on this point (Gardner & Clark, 1992). Occasionally, he claimed that g was simply that which was common to all tests of intellectual ability. This is an operational definition of how g was extracted or calculated, but it tells us very little about its underlying nature.

On other occasions, Spearman (1923) claimed that intelligence was the eduction of relations and correlates. The eduction of a relation is the ability to tell how two concepts are related: black and white are opposites. The eduction of a correlate is the ability to give a correct concept when presented with one concept and a relation: the opposite of black is white. Spearman proposed that all human cognition was dependent on three basic principles: the two eduction principles presented above, and a third, the apprehension of experience (the ability to learn from the environment). He therefore related g to his theory of the basic processes of human cognition.

On still other occasions, Spearman associated g with an individual’s physiology. Spearman (1927) noted that g might be related to neural plasticity, or the blood. As we shall see, the idea that neural plasticity plays a role in intelligence is reflected in at least some of the biological theories of intelligence.

Finally, Spearman sometimes related g to mental energy (Brody & Brody, 1976; Gardner & Clark, 1992). The g factor represented some sort of mental potentiality on which individuals differed. Spearman even hypothesized that the output of this mental energy should remain constant, with new mental activities beginning when others ceased—a sort of “conservation of energy” law in the psychological realm. While the notion of “mental energy” is the most metaphoric description of g, it seems to be the one that has taken hold in the general public. According to this lay view, general intelligence, or g, is something people are born with. They possess it in differing degrees (i.e., they are “smart” or “dumb”), and it displays itself in many different intellectual tasks. Because it is innate, it is relatively immune to remediation or improvement.

Bond theory was proposed, in slightly different forms, by Godfrey Thomson (Brown & Thomson, 1921; Thomson, 1951) in the United Kingdom, and by Edward Thorndike (1925) in the United States. Thomson called his version of bond theory the “sampling theory.” According to Thomson’s theory, each mental test called upon some sample of mental operations, or bonds, for its solution. Correlations between mental tests arose due to the overlap in bonds necessary for each test’s solution (Carroll, 1982). Thomson acknowledged that Spearman’s derivation of a general factor was essentially correct mathematically. However, he felt it did not derive from an overarching causal g factor, but rather from the overlap of bonds, and the laws of probability and sampling (Brody, 1992).

Thorndike (1925; Thorndike, Bregman, Cobb, & Woodyard, 1927) also saw the mind as composed of a large number of bonds; namely, stimulus-response bonds. Again, tasks were correlated to the extent that they called upon the same stimulus-response bonds. For Thorndike, intelligence had both a genetic and an experiential basis: the number of bonds in an individual’s mind reflected both the individual’s ability to form bonds, and his or her experiences in the world that led the individual to link stimuli with responses (Carroll, 1982).

Thorndike’s bond theory was also consistent with his “identical elements” view of positive transfer (Singley & Anderson, 1989; Thorndike, 1906; Thorndike & Woodworth, 1901). Training in one task would result in positive transfer to another task, insofar as the two tasks shared stimulus-response bonds (i.e., the identical elements). Practice alone, as suggested by the doctrine of formal discipline (Angell, 1908; Pillsbury, 1908; Woodrow, 1927), was not enough. Practice must be focused on the stimulus-response connections shared with the target task.

According to the bond theorists, Spearman’s g factor was an index of the total number of bonds that an individual possessed. While Spearman’s two factor theory and the bond theory produced very similar mathematical predictions, they painted very different pictures of the mind. Spearman’s g presented an orderly view, with the mind dominated by a single factor. Bond theory presented an anarchistic view (Brody, 1992), with millions of bonds all having some small influence in an individual’s mental ability.
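
Thomson's argument is easy to demonstrate by simulation: if every test samples a random subset of many independent "bonds," the resulting test scores are all positively correlated and yield a dominant first factor, even though no causal g was built in. The numbers of people, bonds, and tests below are arbitrary and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_people, n_bonds, n_tests = 1000, 500, 8

# Each person's "bonds": many small, independent elements of ability.
bonds = rng.normal(size=(n_people, n_bonds))

# Each test samples a random 30% of the bonds; a person's score on the
# test is the sum of their sampled bonds.
scores = np.empty((n_people, n_tests))
for t in range(n_tests):
    sampled = rng.random(n_bonds) < 0.30
    scores[:, t] = bonds[:, sampled].sum(axis=1)

R = np.corrcoef(scores, rowvar=False)
print(R.round(2))   # all positive: overlap alone yields a "positive manifold"

# The first eigenvalue dominates, mimicking a general factor, even though
# no single causal g was built into the simulation.
print(np.linalg.eigvalsh(R)[::-1].round(2))
```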

L. L. Thurstone was a psychologist and psychometrician at the University of Chicago who disagreed with Spearman about the existence of a single, overarching g factor (Gardner & Clark, 1992). In Thurstone’s view, the mind was dominated by several “group” factors: factors responsible for certain aspects of mental activity (e.g., verbal ability or numeric ability). Once these factors were considered, there would be no need to postulate a “general” factor.

Spearman had pioneered the development of factor analysis, at least as it applied to the field of human abilities. To determine the g factor from a correlation matrix of cognitive ability tests, Spearman would extract a single, unrotated factor or component. This factor represented that which was common to all the tests in the battery. Thurstone (1931; Carroll, 1982), however, extended Spearman’s factor analytic methods to include the possibility of multiple factors. Thurstone showed that a correlation matrix might require several factors to adequately account for the correlations present.

Armed with the methods of multiple factor analysis, Thurstone (1938) collected data from 240 students who completed 56 ability tests (Brody, 1992). These were factor analyzed, and seven to nine factors, or “primary mental abilities,” were identified. The primary mental abilities were (Cronbach, 1970): (1) V, or verbal (e.g., vocabulary); (2) N, or number (e.g., arithmetic reasoning); (3) S, or spatial (e.g., paper folding); (4) M, or memory (e.g., digit span); (5) R, or reasoning (e.g., number series); (6) W, or word fluency (e.g., rapid word finding); and (7) P, or perceptual speed (e.g., comparing symbols quickly to detect differences). Occasionally, reasoning is split into deduction (D) and induction (I) (Cronbach, 1970), and arithmetic is split into numerical (N) and arithmetic reasoning (R) (Brody & Brody, 1976).

When the correlations among a set of ability tests are factor analyzed, the factors that result are not always readily interpretable. What emerges are a set of factors, and the loading of each test on each factor. The loadings indicate the relationship of the tests to the factors. When trying to interpret a factor, one looks at which tests load on it, and uses the nature of these tests to name the factor. But if all tests load on all factors, the interpretation process is hopeless.

This is the situation Thurstone (1938) found himself in. However, he proposed a solution to the problem. The factors that emerge from factor analysis are like axes of longitude and latitude. They are not fixed, but may be rotated around their center point. Certain orientations of the factors may be more interpretable than others. To find an interpretable orientation, Thurstone (1935, 1947; Gorsuch 1983, pp.178–179) proposed the criterion of “simple structure.” Basically, simple structure seeks a solution in which tests either load high or near zero on a factor. Furthermore, the loadings should be distributed in such a way that all factors have some high loadings, and all factors have many zero loadings. This outcome leads to factors that are strongly associated with a few, and only a few, tests. The factors are then labeled according to what is common to those tests. Thus, Thurstone (1938) rotated his factor analysis solution to simple structure, and discovered the primary mental abilities.
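
A minimal illustration of rotation toward simple structure is sketched below, assuming scikit-learn's FactorAnalysis with its varimax rotation option and an invented battery of three "verbal" and three "spatial" tests.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
n = 1000

# Hypothetical battery: three "verbal" tests and three "spatial" tests,
# each driven mainly by its own group ability plus noise.
verbal  = rng.normal(size=n)
spatial = rng.normal(size=n)
tests = np.column_stack([
    verbal  + rng.normal(scale=0.6, size=n),   # vocabulary
    verbal  + rng.normal(scale=0.6, size=n),   # analogies
    verbal  + rng.normal(scale=0.6, size=n),   # reading
    spatial + rng.normal(scale=0.6, size=n),   # paper folding
    spatial + rng.normal(scale=0.6, size=n),   # mental rotation
    spatial + rng.normal(scale=0.6, size=n),   # block design
])

# Unrotated two-factor solution vs. varimax-rotated solution.
unrotated = FactorAnalysis(n_components=2).fit(tests)
rotated   = FactorAnalysis(n_components=2, rotation="varimax").fit(tests)

print(unrotated.components_.T.round(2))  # loadings typically mixed across factors
print(rotated.components_.T.round(2))    # each test loads high on one factor and
                                         # near zero on the other: simple structure
```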

Initially it appeared that Thurstone had disproven the notion of a single, overarching g factor and replaced it with a set of group factors. This led to a heated debate between Spearman (who claimed group factors did not exist) and Thurstone (who claimed g did not exist) (Brody & Brody, 1976). However, in later studies Thurstone (Thurstone & Thurstone, 1941) realized that in order to achieve simple structure, he would have to allow his primary mental abilities to correlate with each other (he had resisted this in 1938). He rationalized that many conceptually distinct entities (e.g., height and weight) were correlated with each other in the real world. Unfortunately, this led to the possibility that one could factor analyze the correlations among the primary mental abilities themselves, and this would lead to a general factor at the secondary level.

In the end, neither Spearman nor Thurstone succeeded in their original positions. Thurstone had demonstrated that an adequate sample of ability tests administered to a representative sample of individuals would require group factors to account for the high degree of correlation among them. But to achieve simple structure, these group factors would need to be correlated, allowing for a general factor above the group factors. How could these disagreements be resolved? We take up this question in the next section.

By the end of the 1940s, it became clear that neither Spearman’s two factor theory nor Thurstone’s group factor approach could adequately describe the correlations among cognitive ability tests. Psychologists in the United Kingdom (Burt, 1940; P. E. Vernon, 1950) proposed combining Spearman’s and Thurstone’s approaches into a single hierarchical description of human abilities. A typical example is given by Vernon (1950, p. 22). At the top of the hierarchy of abilities is a general factor (essentially Spearman’s g) accounting for approximately 40% of the variance in ability tests. Below g are two major group factors: (1) v:ed or verbal/educational ability, and (2) k:m or spatial/mechanical ability. Under each of these major group factors are a number of minor group factors, which may emerge if there is a sufficient diversity of tests in the test battery. For instance, Vernon (1950, p. 23) states that the k:m factor may split into mechanical information, spatial, and manual subfactors. Under the minor group factors are specific factors (and error factors) related only to single tests. Thus, the hierarchical approach captures g and s factors at its top and bottom, and group factors in its middle. Exactly where Thurstone’s abilities would lie depends greatly upon the specific tests being factor analyzed.

The British hierarchical model can be considered a “modal model” that combines and integrates many of the findings on human abilities that were discovered during the first half of the twentieth century. The British hierarchical model was wonderfully descriptive, but it did not lead to new avenues of research on human intelligence.

The theory of fluid and crystallized ability was originally developed by Raymond B. Cattell (1941, 1963, 1971), and later investigated more fully with his collaborator, John L. Horn (Horn, 1968, 1985; Horn & Cattell, 1966, 1967). Cattell and Horn’s approach begins by analyzing a set of ability tests into a group of correlated first order factors, similar to Thurstone’s primary mental abilities. However, the number of first order factors extracted is somewhat larger, on the order of 30 (e.g., Horn & Cattell, 1966) to 40 (cited in Horn & Hofner, 1982). These correlated first order factors are then factor analyzed to produce a correlated second order factor solution. This solution has yielded between five (e.g., Horn & Cattell, 1966) and nine (e.g., Horn & Hofner, 1982) second order factors, the most interesting of which are gf, or fluid intelligence, and gc, or crystallized intelligence. The other second order factors are: gq (quantitative knowledge), gsm (short-term apprehension), glr (fluency of retrieval from long-term storage), gv (visual processing), ga (auditory processing), gs (processing speed), and cds (correct decision speed).

Fluid intelligence is related to tasks such as inductive reasoning, deductive reasoning, understanding relations among stimuli, comprehending implications, and drawing inferences (Horn & Hofner, 1982). Horn & Cattell (1966) have associated fluid intelligence with the basic biological capacity to learn. Crystallized intelligence, on the other hand, is related to tasks such as vocabulary and cultural knowledge. It is related to experience in a culture, and exposure to formal schooling; that is, the knowledge acquired through experience with one’s environment. Fluid and crystallized intelligence are not independent of one another; they are correlated approximately 0.4 to 0.5 (Brody & Brody, 1976). The distinction between fluid and crystallized intelligence is similar to the distinction made by Hebb (1942; see Brody & Brody, 1976, p. 32) between intelligence A (native ability) and intelligence B (realized potential).

The distinction between fluid and crystallized intelligence is more than just an academic one. Fluid intelligence is susceptible to decline due to central nervous system damage, while crystallized intelligence remains relatively intact after such damage (Horn & Hofner, 1982). Furthermore, the two intelligences display different patterns of growth and decline over the lifespan. Fluid intelligence peaks in the early to mid-20s and declines thereafter; crystallized ability peaks much later (in the early 40s), and in many cases remains high even into late adulthood (Horn & Hofner, 1982).

Horn (Horn & Hofner, 1982) clearly believes that each of the second order factors in his analysis represents a different form of intelligence. However, one must remember that these various “intelligences” are correlated, which leaves open the possibility of factoring the second order factors. This procedure could easily lead to a general factor at the top of the hierarchy, a result that would bring the fluid/crystallized intelligence theory more in line with the British hierarchical theory (for instance, see Carroll’s (1993) “three stratum” model in the section below).

The three-stratum factor analytic theory of cognitive abilities was proposed by John B. Carroll (1993; 1997; see Sattler, 2008, for a summary). This model is similar to the British hierarchical model in that it is hierarchical; however, it proposes only three levels of hierarchy. At the top of the hierarchy is g. At the middle level of the hierarchy are eight broad group factors (Sattler, 2008): (1) fluid intelligence; (2) crystallized intelligence; (3) general memory and learning; (4) broad visual perception; (5) broad auditory perception; (6) broad retrieval ability; (7) broad cognitive speediness; and (8) processing speed. At the bottom of the hierarchy are 65 narrow abilities. The model bears similarity to Thurstone’s theory of primary mental abilities (in that it contains broad group factors) and Horn and Cattell’s theory of fluid and crystallized ability (in that it explicitly includes these as group factors).

Carroll was known as a consummate scholar. His model is based on reanalysis of a huge number of factor analytic studies in the literature. Whether one prefers his model or the British hierarchical model is probably a matter of taste. The British model can accommodate factors that shift in their importance according to the particular set of tests used in a study. Carroll’s model is probably more descriptive of the true state of affairs when the sampling of tests and subjects is wide enough to approximate the universe of cognitive tests and the universe of normal subjects.

The structure of intellect theory of human abilities was proposed by American psychometrician J. P. Guilford (1964; 1967; Guilford & Hoepfner, 1971). The model proposes that human abilities can be defined as the combination of one of five mental operations (cognition, memory, divergent production, convergent production, or evaluation) operating on one of four types of contents (figural, symbolic, semantic, or behavioral) to produce one of six kinds of products (units, classes, relations, systems, transformations, or implications). Thus, the model proposes 120 separate human abilities. The structure of intellect model is often presented visually in introductory psychology textbooks (e.g., Hilgard, Atkinson, & Atkinson, 1971, p. 370) as a cube, with each of the three dimensions of the cube representing mental operations, contents, and products.
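
The combinatorics of the model are easy to make explicit; this snippet simply enumerates the crossings described above.

```python
from itertools import product

operations = ["cognition", "memory", "divergent production",
              "convergent production", "evaluation"]
contents   = ["figural", "symbolic", "semantic", "behavioral"]
products   = ["units", "classes", "relations", "systems",
              "transformations", "implications"]

abilities = list(product(operations, contents, products))
print(len(abilities))   # 5 x 4 x 6 = 120 distinct abilities
print(abilities[0])     # ('cognition', 'figural', 'units')
```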

Guilford (1977) later subdivided the figural category into auditory and visual categories (Brody, 1992), thereby increasing the total number of unique abilities to 150. Guilford spent a great deal of his career attempting to develop tests to assess each of the separate abilities implied by the theory. The major problem with the structure of intellect model is that it ignores the fact that virtually all cognitive abilities are positively intercorrelated: a finding known as the “positive manifold.” Brody & Brody (1976) reexamined some of Guilford’s own data and found evidence for the positive manifold, despite the fact that the structure of intellect model proposes that it does not exist.

By the end of his career, Guilford (1982) acknowledged that the abilities proposed by the model were indeed correlated. He modified the structure of intellect model by proposing a new, hierarchical structure. At the first level of the hierarchy were the 150 abilities defined by the crossing of an operation, content, and product. At a second level of the hierarchy, he proposed 85 factors defined by pairs of abilities that shared one dimension (an operation, content, or product) but differed with regard to the other two. Finally, at the third level of the hierarchy, he proposed 16 factors defined by a single ability (an operation, content, or product) that shared the other two dimensions.

Guilford’s theory of intelligence was a bold departure from those that preceded it. While his later modifications brought his model into better alignment with the data, it is the cube of 120 to 150 independent abilities that he will be most remembered for. Unfortunately, that incarnation of the structure of intellect did not account for the positive intercorrelations among cognitive abilities. The tests derived from it were often quite narrow, with little predictive validity beyond the test itself (Brody & Brody, 1976).

The psychometric approach to intelligence has been based on discovering underlying (i.e., latent) dimensions of communality by inspecting individual differences in test performance. Early debates concerned the importance of general intelligence as compared to group factors such as verbal ability or number ability. A rapprochement was found in a hierarchical model, with general intelligence at the top of the hierarchy, followed by major group factors at the next level, then minor group factors, and finally specific factors. Proposals such as the structure of intellect model, which hypothesized over 100 separate, independent abilities, have been shown to be inconsistent with existing correlational data.

Other proposals, such as the distinction between fluid and crystallized intelligence, have focused attention on important distinctions within intelligence. The distinction between fluid ability and crystallized ability can be thought of as the distinction between ability and achievement (or “realized potential”). These different aspects of intelligence show different developmental courses, and differing susceptibility to brain injury. Thus, while they may both represent intelligence, in many contexts it makes sense to distinguish between them so as not to confuse issues.

While psychometric theories focus on the structure of human intelligence, cognitive theories have focused on the processes involved in human intelligence (Sternberg, 1985a). The cognitive processes investigated have spanned a continuum from extremely simple to reasonably complex. In my presentation, I will follow this continuum, beginning with research involving simple cognitive processes and concluding with research involving complex processes. Given the vast array of cognitive processes involved in intelligent behavior, there has been very little consensus on exactly which processes should be the center of attention in explorations designed to find the seat of intelligence.

Advocates of simple sensory testing believed that more intelligent individuals were better able to make fine sensory discriminations, such as the discrimination of pitches, shades of gray, or weights of similar amount. This was apparent in the early work of Spearman (1904b). Others believed that more intelligent individuals would have superior physical stature and abilities (Terman, 1925).

Early interest in the simple sensory testing approach sprang from the work of Francis Galton. Galton was the cousin of famed English biologist Charles Darwin, whose book On the Origin of Species (Darwin, 1859) proposed the theory of evolution. Darwin had shown the importance of individual differences to environmental adaptation and reproduction. Galton’s interest focused on individual differences in “natural ability” (Simonton, 2003). Galton (1869) felt that those with greater natural ability (roughly equated to a combination of intelligence and motivation [Simonton, 2003]) would be eminent in their fields of study, while those with lesser natural ability would fail to prosper.

Galton also made an important contribution to the notion of how intelligence is distributed in the population. He was intrigued by the work of the statistician Adolphe Quételet who had applied the normal distribution to human physical characteristics (Brody & Brody, 1976; Simonton, 2003). Galton extended this idea by applying the normal distribution to intelligence. To this day, tests of intelligence are assumed to be normally distributed (e.g., Roid, 2003).

Galton was a strong believer in the role of genetics in intelligence. He believed that intelligence was genetically transmitted by parents to children (Galton, 1874), although he allowed that environmental factors could play some role (Simonton, 2003). Galton also believed that races differed in their intelligence, and that eugenics, or planned breeding, could be employed to improve the “intelligence” of a nation.

With regard to simple sensory testing, Galton (1883; see Brody & Brody, 1976) set up the “anthropometric laboratory” in the South Kensington Museum in London. He collected data on 17 variables (things such as strength and sensory acuity) from 9,337 individuals. Unfortunately, he did not relate these to any criterion of intelligence, so his grand experiment ended without establishing any conclusive relationship between simple sensory measures and intellect.

In the United States, Galton’s ideas were championed by James McKeen Cattell. Indeed, it was Cattell (1890; see also, Brody & Brody, 1976) who first used the term “mental test” in describing simple sensory tests of the Galtonian type. Cattell had been a student of Wilhelm Wundt at Leipzig, and had had personal contact with Galton in Europe. While at Columbia University, he began a program of taking simple measurements from students in each year’s freshman class (Cattell & Farrand, 1896). There were 21 measurements in all, and they included things such as strength of hand, visual acuity, cancellation of “A’s”, and reaction time. Cattell’s graduate student, Clark Wissler, performed the validation study relating the simple sensory measures to students’ grades at Columbia. Wissler’s (1901) results were not very encouraging. He found that students’ grades tended to correlate with each other, but not with the simple sensory test data that had been collected.

To a large degree, Wissler’s validation study ended the early interest in simple sensory measures as an index of intelligence. Many years passed before interest in the relationship between simple measures and intelligence recurred. In the 1970s, however, several psychologists began to explore the relationship between cognitive measures (some simple, some complex) and intelligence. I begin by looking at one of the simplest measures: inspection time.

In the inspection time paradigm (see Nettelbeck, 2003, for a complete description and review), participants are briefly shown two vertical lines, whose tops are connected by a horizontal line. The exposure is followed by a “backward mask” that overlays the original figure of the vertical lines. This prevents participants from further processing the original picture of the two vertical lines. The amount of time between the presentation of the picture of the two vertical lines and the presentation of the backwards mask is called the stimulus onset asynchrony (SOA). SOAs are varied over trials until, for each participant, the SOA is found with which the participant can respond correctly on 75% of the trials.
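
A toy sketch of how the 75%-correct SOA might be estimated is given below. The participant's response function, the SOAs tested, and the trial counts are all invented, and actual studies use adaptive staircases or fit a psychometric function rather than simple interpolation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical participant: probability of a correct line-length judgment
# rises from chance (.5) toward 1.0 as the SOA (in ms) grows.
def p_correct(soa_ms, true_it=60.0):
    return 0.5 + 0.5 * (1.0 - np.exp(-soa_ms / true_it))

soas = np.array([10, 20, 40, 60, 80, 120, 160])   # SOAs tested (ms)
trials_per_soa = 60

# Simulate accuracy at each SOA, then interpolate the SOA giving 75% correct.
# (Assumes accuracy grows monotonically with SOA; real studies fit a curve.)
accuracy = np.array([
    rng.binomial(trials_per_soa, p_correct(s)) / trials_per_soa for s in soas
])
it_estimate = np.interp(0.75, accuracy, soas)
print(f"estimated inspection time ~ {it_estimate:.0f} ms")
```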

If processing time were not restricted, the procedure would result in virtually no errors, as one of the two vertical lines is clearly longer than the other. However, individuals differ with respect to this SOA parameter, and it is the relationship between participants’ SOAs and intelligence that has been investigated. For simplicity’s sake, I will simply refer to the SOA parameter as a participant’s inspection time (IT). Nettelbeck & Lally (1976) were the first to report a relationship between IT and intelligence (defined as the performance IQ score from the Wechsler Adult Intelligence Scale): r = –0.9. This is an extremely strong relationship, but the Nettelbeck & Lally (1976) study used a very small sample (N = 10) with a very wide range of intelligence (72 points). This almost certainly exaggerated the size of the relationship, but it piqued interest in IT as a measure of intelligence.

Several psychologists have attempted to estimate the true size of the relationship between IT and intelligence. Nettelbeck (1987), based on 16 studies, concluded that the size of the correlation was r = –0.35. These studies included only participants without mental retardation, since including the mentally retarded tends to inflate the size of the correlation. If the correlation is corrected for restriction of range, the estimated size of the relationship rises to r = –0.50.

Kranzler & Jensen (1989) performed a meta-analysis of 31 studies containing more than 1,100 participants (again, without mental retardation). They found a correlation between IT and intelligence of r = –0.29. With correction for restriction of range, the correlation rises to r = –0.49. Similar results were found in a meta-analysis by Grudnik & Kranzler (2001). In a sample of 92 studies with approximately 4,200 participants, they found a correlation between IT and intelligence of r = –0.30. Corrected for restriction of range, the correlation rises to r = –0.51.
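
The range-restriction corrections reported in these meta-analyses follow the classical formula for direct range restriction (Thorndike's Case II). A sketch follows, with a standard-deviation ratio chosen purely for illustration rather than taken from any of the studies.

```python
import math

def correct_range_restriction(r_obs, sd_pop, sd_sample):
    """Thorndike's Case II correction for direct range restriction."""
    u = sd_pop / sd_sample
    return (r_obs * u) / math.sqrt(1.0 - r_obs**2 + (r_obs**2) * u**2)

# Illustration only: an observed IT-IQ correlation of -.30 in a sample whose
# IQ spread is narrower than the population's (the SDs are hypothetical).
print(round(correct_range_restriction(-0.30, sd_pop=15.0, sd_sample=8.0), 2))
```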

The previous findings establish a relationship between IT and intelligence, but why does such a relationship exist? In particular, what is it that IT indexes? Several suggestions have been made, but there is little overall agreement among researchers. The earliest researchers in the field (Vickers, Nettelbeck, & Willson, 1972) felt IT reflected the rate at which the visual system could sample proximal stimulation, and therefore set a limit on speed of information processing. Later researchers (White, 1996) pointed out that the entire SOA function (essentially a psychophysical function) would include two stages: (1) an initial lag stage during which performance was essentially at chance levels, and (2) the SOA function portion, during which probability of correct decision improved with increased SOA. It was argued that the individual differences in the lag stage represented differences in focused attention or vigilance, while individual differences in the SOA portion represented the capacity to detect change in a briefly exposed visual array (Nettelbeck, 2003). Jensen (1998) referred to this second stage as “stimulus apprehension” or speed of perception.

What is clear is that IT, which was originally thought to be a simple task, indexing a simple information processing ability, is more complex than was initially thought. Further complicating the picture is the possibility that IT may reflect different cognitive processes for subjects with different levels of intellectual ability (e.g., retarded vs. normal) (Lally & Nettelbeck, 1980; Nettelbeck & Kirby, 1983) and for subjects of different ages (Kranzler & Jensen, 1989). While a relationship between IT and intelligence certainly does exist, it would be difficult to conclude that we can “explain” intelligence (or some part thereof ) on the basis of IT.

Arthur Jensen has championed a slightly more complex paradigm for investigating the relationship between elementary cognitive operations and intelligence: simple/choice reaction time. In this task, a subject places his or her finger on a “home” button. A number of unlit lights are presented to the subject at an equal distance from the home button. In the simple reaction time condition, a single unlit light is presented. In the choice reaction time conditions, either two or four or six or eight unlit lights are presented. The subject’s task is to watch for one of the lights to become lit. When this happens, the subject is to remove his or her finger from the home button and press a switch just below the light to turn it off. Subjects begin with approximately 30 trials in the simple reaction time condition. They then progress through approximately 30 trials in each of the choice reaction time conditions, beginning with the two-light condition and ending with the eight-light condition (e.g., Jensen & Munro, 1979; Hemmelgarn & Kehle, 1984). Time to response in this paradigm can be separated into response time (RT), the time between the light’s illumination and the subject removing their finger from the home button, and movement time (MT), the time between the subject’s finger leaving the home button and pressing the button to turn off the light. Theoretically, RT should represent decision time, while MT should represent the execution of the intended response.

Jensen (1982; see also Deary, 2003) reminded the psychological community of a relationship, known as Hick’s law (Hick, 1952; Hyman, 1953), between speed of response in the simple/choice reaction time task and the number of response alternatives: namely, that response time increases linearly as a function of the logarithm to the base two of the number of choice alternatives. This logarithm represents the number of bits of information in the stimulus display that the subject must deal with. The interesting finding with regard to intelligence is that the slope of RT over bits of information is correlated with intelligence (first demonstrated by Roth [1964] and reported by Eysenck [1967]). More intelligent individuals have flatter slopes; they can deal with more information per unit of time.
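
In equation form, Hick's law says mean RT = a + b·log2(n), and the slope b is the parameter of interest. A minimal sketch with invented RTs:

```python
import numpy as np

# Hick's law: mean RT grows linearly with the information (in bits) conveyed
# by the display, i.e. log2 of the number of alternatives.
n_lights = np.array([1, 2, 4, 8])
bits = np.log2(n_lights)                 # 0, 1, 2, 3 bits

# Hypothetical mean RTs (ms) for one subject in each condition.
mean_rt = np.array([290, 320, 355, 385])

slope, intercept = np.polyfit(bits, mean_rt, 1)
print(f"RT ~ {intercept:.0f} + {slope:.0f} ms per bit")
```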

Jensen and Munro (1979) replicated Roth’s (1964) finding in a study with 39 teenage girls. The correlation between the RT slope and intelligence (measured by the Raven’s Standard Progressive Matrices) was r = –0.30. Other measures also showed a relationship to intelligence: (1) the mean RT and intelligence, r = –0.39; (2) the standard deviation of RT and intelligence, r = –0.31; and (3) the mean MT and intelligence, r = –0.43. However, the standard deviation of MT did not show a substantial relationship to intelligence: r = 0.07.

Jensen continued research into the simple/choice reaction time paradigm and its relationship to intelligence over the following years. In a review chapter (Jensen, 1987; see also a summary in Deary, 2003) he summarized the results of numerous studies with a total sample size of 2,317. He calculated the N-weighted correlations between the parameters from the paradigm and the studies’ various measures of intelligence. The results were: (a) RT slope, r = –0.12 (corrected for unreliability, r = –0.32); (b) RT intercept, r = –0.12 (corrected, r = –0.25); (c) mean RT, r = –0.20 (corrected, r = –0.32); (d) standard deviation of RT, r = –0.21 (corrected, r = –0.48); (e) mean MT, r = –0.19 (corrected, r = –0.30); and (f) standard deviation of MT, r = –0.01 (corrected, r = –0.02). Note that the individual parameters are based upon different total numbers of subjects (depending upon the designs of the individual studies), and thus the correction for unreliability may be different for different parameters.
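
The N-weighted aggregation itself is straightforward: each study's correlation is weighted by its sample size. The values below are hypothetical, and meta-analysts often Fisher-transform correlations before averaging.

```python
import numpy as np

# Hypothetical summary of several studies: each contributes a correlation
# between an RT parameter and IQ, plus its sample size.
rs = np.array([-0.18, -0.25, -0.10, -0.22])
ns = np.array([ 120,    80,   300,   150])

# N-weighted mean correlation: each study's r counts in proportion to its N.
r_weighted = np.sum(rs * ns) / np.sum(ns)
print(round(r_weighted, 2))
```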

What intrigued Jensen about the simple/choice reaction time task were its simplicity, and the fact that prior learning seemed to play no role in performance. In many ways it seemed like the ultimate “culture fair” measure of intelligence. However, several problems plagued research in this area. First, the slope of RT was the parameter theoretically related to intelligence; however, many of the other parameters correlated as high or higher with intelligence. Second, the mean MT correlated with intelligence, but this supposedly reflected only response execution. Finally, there was the issue of reliability, both split-half within a session, and test-retest over sessions. The various parameters demonstrated good split-half reliability (all above 0.66, with a median of 0.84), but the results were far more variable for test-retest reliability (range: 0.39 to 0.84, with a median of 0.63). Indeed, it was the RT slope that had the lowest test-retest reliability, of 0.39! Given that we think of intelligence as a stable aspect of human performance, it seemed hard to understand how an unstable parameter like RT slope could be used to explain intelligence.

Jensen had a biological theory to explain why RT slope was related to intelligence. I will leave this theory to the section on biological theories. However, suffice it to say that the theory was a distant extrapolation from the data, for which there was no direct biological support.

As with IT, the parameters derived from simple/choice reaction time paradigms show a significant relationship with psychometrically defined intelligence. The relationships are modest, but appear to show up reliably. However, the parameters from the task appear to be poorly understood from a cognitive processing perspective. MT mean correlates with intelligence, though theory predicts it shouldn’t. Simple RT and MT means seem to correlate better with intelligence than the RT slope, though the RT slope is the theoretically most interesting parameter. If this doesn’t shake one’s confidence in the RT slope, consider the following: the RT slope of pigeons in an animal analog study (Vickery & Neuringer, 2000) would suggest they are more intelligent than humans! Apparently, we don’t understand the simple/choice reaction time task and what it measures well enough to use it as an explanatory construct for intelligence.

Baddeley (1986; Baddeley & Hitch, 1974) introduced the concept of “working memory” to the cognitive psychology literature. Previously (e.g., Atkinson & Shiffrin, 1968), the temporary storage of information was assumed to occur in a short-term store (or short-term memory). Storage of information was emphasized, while the processing of information was downplayed (with the possible exception of rehearsing information in short-term store to keep it “active”). Baddeley’s proposal of a working memory emphasized both the storage of information and the processing or transformation of information being stored. According to Baddeley and Hitch (1974), working memory comprised a limited-capacity central executive that controlled two slave subsystems: the articulatory loop (for verbal material) and the visuospatial sketch pad (for visual and spatial material). The storage and processing aspects of working memory were proposed to be essential for learning, retrieval from long-term memory, language comprehension, and reasoning.

Kyllonen and Christal (1990) seized upon the importance of working memory for reasoning, and conducted a series of four experiments to determine the relationship between the two. The subjects in these studies were over 2100 U.S. Air Force recruits. In each experiment, the subjects completed a combination of paper-and-pencil tests and computerized tests, with the tests being somewhat different across the four experiments. However, the tests were selected to define the following four factors: reasoning (a combination of deductive, inductive, and quantitative reasoning, strongly related to fluid intelligence); general knowledge (a sort of crystallized intelligence measure); processing speed (a measure of the speed of performing simple perceptual/motor operations or the retrieval from memory of simple facts); and, of course, working memory. Data were analyzed using confirmatory maximum-likelihood factor analysis and structural equation modeling.

Kyllonen and Christal’s (1990) findings were extraordinary. The correlations between the reasoning factor and the working memory factor in the four experiments were: 0.82, 0.88, 0.80, and 0.82! The title of their article neatly summarized their conclusion: reasoning ability is (little more than) working-memory capacity?!
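
Part of what makes factor-level correlations this high possible is that they are estimated between latent variables, which are free of the measurement error that attenuates correlations between individual tests. A toy simulation makes the point; the loadings and the .85 latent correlation are invented, not Kyllonen and Christal's values.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000

# Two latent traits (reasoning, working memory) correlated at .85.
cov = np.array([[1.0, 0.85],
                [0.85, 1.0]])
latent = rng.multivariate_normal([0, 0], cov, size=n)
reasoning, wm = latent[:, 0], latent[:, 1]

# Each latent trait is measured by three imperfect tests (loading ~ .7).
def make_tests(trait):
    return np.column_stack([0.7 * trait + rng.normal(scale=0.714, size=n)
                            for _ in range(3)])
reason_tests, wm_tests = make_tests(reasoning), make_tests(wm)

# Raw correlations between individual tests are much lower than .85 ...
print(np.corrcoef(reason_tests[:, 0], wm_tests[:, 0])[0, 1].round(2))  # ~ .42
# ... but the correlation between the latent traits themselves is .85,
# which is what confirmatory factor analysis / SEM estimates.
print(np.corrcoef(reasoning, wm)[0, 1].round(2))
```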

Clearly, working memory capacity plays a central role in the information processing required to do well on tests of reasoning or intelligence, and this is what Kyllonen and Christal suggest. However, they note that the arrow of causation is not clear in correlational research such as their own. It could also be that those high in reasoning or intelligence are better able to manage their limited working memory capacity. On this account, a more efficient central executive in the working memory system would be a consequence of high reasoning ability, rather than its cause. Either way, the relationship between working memory and reasoning/intelligence is very strong, and further research will be needed to flesh it out.

In the cognitive correlates approach (Sternberg, 1985b) to understanding intelligence and human ability, participants are tested using cognitive psychology experimental paradigms that are considered informative concerning basic information processes. Individual subjects’ data are then modeled, processing parameters are derived, and these parameters are related to psychometrically defined abilities. Often, extreme ability groups (e.g., top quartile vs. bottom quartile) are used in an effort to more easily determine which parameters play roles in which psychometric abilities.

The foremost proponent of this approach has been Earl Hunt (1978; Hunt, Frost, & Lunneborg, 1973; Hunt, Lunneborg, & Lewis, 1975). He (Hunt, Frost, & Lunneborg, 1973) tested undergraduate students at the University of Washington on a wide range of cognitive psychology experimental paradigms (e.g., Posner’s [Posner & Mitchell, 1967] letter match/name match procedure; Wickens’ [1970] release from proactive inhibition procedure; the Brown-Peterson [Peterson & Peterson, 1959] short-term memory procedure; Atkinson and Shiffrin’s [1968, 1971] continuous paired-associates procedure; and others). These students were chosen so that they came from either the top or bottom quartile of a college entrance test for verbal ability, and either the top or bottom quartile of a college entrance test for quantitative ability. The crossing of these two dimensions produced four groups of students: (1) high verbal, high quantitative (N = 30); (2) high verbal, low quantitative (N = 25); (3) high quantitative, low verbal (N = 26); and (4) low verbal, low quantitative (N = 23). Not all students participated in all experimental paradigms, but the same total pool was used throughout the many individual experiments.

Without digressing into the particular parameters estimated, and their correlations with verbal and quantitative ability, Hunt did come away with a set of generalizations from his studies. First, verbal ability appeared to be related to the rapidity of processes in short-term memory. Second, quantitative ability appeared to be related to resistance to interference in memory. These generalizations are based upon multiple findings over several experiments. Hunt expressed the hope that these findings could pave the way for a rapprochement between psychometric psychology and cognitive psychology, something called for by Cronbach (1957) decades earlier.

Unfortunately, the cognitive correlates approach has not progressed very far since Hunt’s ambitious start. One reason is that testing large numbers of participants in cognitive paradigms is very costly and time consuming; yet, these large numbers are necessary to discover the correlational relationships between cognitive parameters and psychometrically defined abilities. A second reason that the cognitive correlates approach has stalled is that it is not particularly theoretical. While understanding psychometric abilities in terms of cognitive processes is a laudable goal, the cognitive processes chosen by Hunt and his colleagues were based primarily on the popularity of cognitive paradigms in the published literature. This popularity did not guarantee that the processes studied would be related to psychometrically defined abilities. The next approach I consider worked in much the opposite way: items from psychometric tests were analyzed to discover the underlying information processes.

During the late 1970s and early 1980s, Robert J. Sternberg (1977a; 1977b; Sternberg & Gardner, 1982, 1983; but see also Mulholland, Pellegrino, & Glaser, 1980; Whitely, 1980) pioneered the “cognitive components” approach to understanding intelligence. Sternberg (1977a, 1977b) investigated a task typically found on tests of intelligence: the analogy. Participants in his initial studies solved analogies while being timed. Error data were also collected. Sternberg used linear regression to decompose the total time necessary to solve an analogy into the time necessary to perform a set of information processes hypothesized to underlie analogy solution. The individual differences in the times necessary to execute the information processes were then related to paper-and-pencil tests of reasoning and perceptual speed. Error data were similarly modeled.

Sternberg’s model of analogical reasoning consisted of seven information processing components: encoding, inference, mapping, application, justification, comparison, and response. Encoding set up an initial mental representation of the analogy. Inference discovered the relationship between the A term and the B term. Mapping discovered the relationship between the A term and the C term. Application applied the A to B relation to the C term, to discover an ideal answer (D*). In multiple-choice analogies, comparison measured the process of discriminating the two answer options (DT and DF) from each other. The smaller the difference between DT and DF, the longer the comparison time required. Also, in multiple-choice analogies, justification involved justifying the better answer (DT) as correct, even though it was not the “perfect” answer. Justification time increased as D* to DT differences increased. Finally, response execution was measured by the regression constant. Not every analogy study modeled every component, due to differences in the analogies themselves (e.g., true-false vs. multiple-choice).
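
The estimation logic can be sketched as a small regression problem: each item's solution latency is modeled as the sum, over components, of how often the component must be executed times its per-execution latency, with the regression constant absorbing response. The counts, latencies, and noise below are invented, only a subset of the seven components is modeled, and real studies use many more items per participant.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical analogy items: columns give how many times each hypothesized
# component must be executed for the item (encoding counts vary with the
# number of terms to encode; other components are executed once or not at all).
#                  encode  infer  map  apply
counts = np.array([[4,      1,    1,   1],
                   [4,      1,    1,   1],
                   [5,      1,    1,   1],
                   [5,      0,    1,   1],
                   [6,      1,    0,   1],
                   [6,      1,    1,   0]], dtype=float)

# "True" per-execution latencies (ms) used only to generate fake solution
# times; the regression constant stands for the response component.
true_latencies = np.array([300, 450, 350, 400])
response_constant = 500
latencies = counts @ true_latencies + response_constant + rng.normal(0, 40, 6)

# Multiple regression decomposes total solution time into component times.
X = np.column_stack([counts, np.ones(len(counts))])
estimates, *_ = np.linalg.lstsq(X, latencies, rcond=None)
print(estimates.round(0))   # per-component latencies + response constant
```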

Sternberg (e.g., 1977a, p. 242) found substantial correlations (–0.54 to –0.56) between latencies for his basic reasoning components (inference and mapping) and a reasoning factor derived from his paper-and-pencil tests of reasoning. These component latencies, however, were not correlated with a perceptual speed factor, which indicates that the component latencies were not measuring simple speed of responding (consistent with Sternberg’s hypotheses). Two other interesting correlational findings emerged from this research. Encoding was significantly correlated with reasoning, but in a positive direction (e.g., r = 0.63, Sternberg, 1977a, p. 242). This means that more intelligent individuals would spend more time on encoding, but less time on inference and application. Also, the response component was strongly correlated (negatively) with reasoning (e.g., r = –0.77, Sternberg, 1977a, p. 242). Because response was modeled using the regression constant, any constant processes (such as “planning”) would be confounded with response. Sternberg (1982) interpreted these two findings as reflecting metacognitive processes he termed “metacomponents.”

Metacomponents were information processing components, but they acted upon other components (rather than stimulus information) and governed things such as strategy selection and speed/accuracy trade-off. They were not directly estimated in much of Sternberg’s early work, but their presence was suggested by the patterns of correlation of regular components (termed “performance components”) and reasoning, just as the presence of a planet orbiting around a star might be suggested by the gravitational effects of the planet upon the star. Sternberg believed that much of what we termed “intelligence” could be accounted for by metacomponents.

Sternberg and Gardner (1983) extended Sternberg’s (1977a; 1977b) earlier findings by developing information processing (i.e., performance component) models of two other tasks typically found on tests of intelligence: series completions, and classifications. These models included many of the same information processes found in the analogical reasoning model. Sternberg and Gardner demonstrated that information processes that were purportedly the same in different task environments (i.e., analogies, series completions, and classifications) would correlate more highly with each other than they would with processes that were purportedly different in different task environments. Thus, they provided evidence that the same information processes could underlie intelligent behavior across a range of tasks. Sternberg and Gardner (1983) also replicated Sternberg’s (1977a; 1977b) findings of substantial correlations between component latencies and psychometrically defined reasoning, but not with perceptual speed.

The performance components approach offered a fairly direct link between performance on intelligence test items, and performance on cognitively-based information processes. Criticisms of this approach were based on its lack of generality. While one could explain psychometric performance in terms of the cognitive processes necessary to solve the individual items, how did these processes relate to other aspects of intelligent behavior in the real world—that is, beyond the testing environment? Questions such as these led Sternberg to develop his “triarchic theory of intelligence,” which I will consider later in this chapter.

Earlier I noted that fluid and crystallized intelligence display different developmental courses throughout adulthood. Fluid ability increases through adolescence, peaks in the early to mid-20s, and declines thereafter. In contrast, crystallized ability increases until the early 40s, and often remains high late into adulthood. Given that crystallized ability represents the products of education and acculturation, its slow growth may be understood as the gradual accumulation of knowledge. But what can explain the decline in fluid ability that occurs through middle age and later life?

Timothy Salthouse (1985, 1993, 1996) has proposed that the decline in fluid ability, and similar intellectual functions, is the result of a slowing of processing speed for cognitive processes with aging. He relates this slowing to two basic mechanisms of impaired performance: (1) the limited time mechanism, and (2) the simultaneity mechanism. According to the limited time mechanism, the time to perform later operations is greatly restricted when a large proportion of the available time is occupied by the execution of early operations. The simultaneity mechanism states that, due to slowing, the products of early processing may be lost by the time that later processing is completed. The result of both these mechanisms is a reduction in performance, not only in speed, but also in accuracy, as individuals age.

Salthouse (1996) found that nearly 75% of age-related variance in many cognitive measures is shared with measures of cognitive speed. This is strong support for the theory. Salthouse does not necessarily claim that the reduction in speed with aging is due to a single factor (such as general slowing of nerve conduction); he believes there could be several common speed factors at work. He also believes that as individuals age, they adapt their strategies on cognitive tasks to try to compensate for the negative effects of slower processing. These strategic choices can mask the effects of aging in everyday tasks.
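
The "shared variance" claim refers to a simple hierarchical-regression calculation: how much of the variance that age explains in a cognitive measure remains once speed is controlled. The toy data below are constructed so that most of the age effect runs through speed; the numbers are illustrative and will not reproduce Salthouse's 75% figure.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000

# Hypothetical data: age lowers processing speed, and speed in turn drives
# performance on a fluid-ability measure (a mediation-like structure).
age = rng.uniform(20, 80, n)
z_age = (age - 50) / 17                       # roughly standardized age
speed = -0.8 * z_age + rng.normal(scale=0.6, size=n)
fluid = 0.7 * speed - 0.3 * z_age + rng.normal(scale=0.6, size=n)

def r2(predictors, y):
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

total_age_var = r2([age], fluid)                               # age alone
unique_age_var = r2([speed, age], fluid) - r2([speed], fluid)  # age beyond speed
shared = 1 - unique_age_var / total_age_var
print(f"share of age-related variance carried by speed ~ {shared:.0%}")
```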

While Salthouse’s theory does not explain intelligence per se, it does offer an explanation for the decline in fluid ability during middle to later life. It also points out an important factor to consider when designing education and training for older adults: namely, speed of processing. When presenting information to older audiences, one must make accommodations for the slower speed with which stimuli can be encoded and analyzed, and for the possibility that older individuals may not have simultaneous access to as many different pieces of information as younger learners would.

Cognitive theories of intelligence have attempted to understand intelligence in terms of the cognitive processes that underlie it. In this sense, cognitive theories are analytic, attempting to break intelligence into its most basic components. The theories have differed greatly in positing just what those underlying components are. For Francis Galton and James McKeen Cattell, the components were elementary sensory and motor processes. Similarly, for IT researchers such as Nettelbeck, the components were some aspects of visual (and therefore sensory) apprehension (though, as I have said, the processes involved in IT may be more complex than the earliest IT researchers thought). For Arthur Jensen, the components involved the ability to process increasingly greater amounts (i.e., bits) of information and make a simple decision (i.e., which light to turn off). For Kyllonen and Christal, the components were those processes involved in working memory. For Earl Hunt and colleagues, the components were the processes involved in STM (for verbal ability) and the ability to resist interference in memory (for quantitative ability). For Sternberg and colleagues, the components were the information processes involved in solving intelligence test items such as analogies, series completions, and classifications.

Up to this point, I haven’t mentioned the researchers who developed many of the actual assessment instruments in use today (or, at least, their historical forerunners). Binet’s (Binet & Henri, 1896) initial conception of intelligence was based upon the psychology of faculties (Brody, 1992; Brody & Brody, 1976). This school of thought viewed the mind as composed of numerous independent abilities. The actual list of abilities, or faculties, came from philosophers such as Christian von Wolff, Thomas Reid, and Dugald Stewart. Binet believed that intelligence was based upon numerous quasi-independent abilities such as imagination, attention, and comprehension. He also believed that a test of intelligence should sample these higher order cognitive processes (Brody, 1992; Brody & Brody, 1976; Sternberg & Jarvin, 2003).

By the time that Binet (Binet & Simon, 1905) actually developed his Metric Scale of Intelligence, he had moved away from using a theoretical model, and instead based his item and test selection on predictive validity: which items could differentiate retarded from nonretarded children. In his 1908 revision of the test (Binet & Simon, 1908), he extended the test so that it was able to differentiate among normal children, and the concept of mental age was introduced (Brody & Brody, 1976; Sternberg & Jarvin, 2003). A third revision of the test was produced in 1911 (Binet, 1911).

Binet did not actually introduce the notion of an intelligence quotient, or IQ (Brody, 1992). This innovation was produced by Stern (1912), who proposed dividing an individual’s mental age by their chronological age, and multiplying by 100. This advancement allowed children of different ages to be compared in terms of their relative intellectual performances.
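
As a quick worked illustration of Stern’s ratio (in Python, with arbitrary example values):

    def ratio_iq(mental_age, chronological_age):
        """Stern's (1912) ratio IQ: mental age divided by chronological age, times 100."""
        return 100.0 * mental_age / chronological_age

    print(ratio_iq(10, 8))   # an 8-year-old performing like a typical 10-year-old -> 125.0
    print(ratio_iq(8, 10))   # a 10-year-old performing like a typical 8-year-old -> 80.0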

Binet’s cognitive theory of intelligence, to the extent it existed, was tied to complex information processing. In a sense, Binet sought an average level of an individual’s complex cognitive processing (Tuddenham, 1962, cited in Brody & Brody, 1976). David Wechsler, developer of the various Wechsler scales of intelligence (Wechsler, 1939, 1949, 1967), also felt that intelligence was a multifaceted construct tapping many complex processes (Wechsler, 1944). Not surprisingly, the tests that bear his name involve a number of cognitively complex subtests (e.g., vocabulary and object assembly).

My overall conclusion is that everyone recognizes that intelligence is based upon cognitive processing. However, researchers and theorists have differed with regard to the complexity of the cognitive processes that underlie intelligence. Successful assessment instruments have tended to emphasize the complex end of the continuum. Researchers from the field of cognitive psychology have tended to emphasize simpler processes (e.g., Nettelbeck, Jensen, and Hunt), though not exclusively (e.g., Sternberg).

Psychometric theories emphasize the structure of intelligence, while cognitive theories emphasize the processes involved in intelligence. In the next section, I discuss cognitive-contextual theories of intelligence, which emphasize the context in which intelligence is displayed.

Cognitive-contextual theories attempt to explain intelligent behavior in terms of the context in which it is displayed. While cognitive processing is certainly important within these theories, they tend to take a more biological approach: development of certain skills is determined (and encouraged) by fit to the environment or culture. Thus, there may be no such thing as universal intelligence; only intelligence or intelligences that are environmentally or culturally relevant. These theories also take a more “big picture” view than many of the cognitive theories discussed earlier. They tend not to focus on individual cognitive processes, but rather classes of processes, or types of intelligence.

Sternberg’s (1977a, 1977b) componential analysis of intelligence test items led him eventually to consider how information processing might interface with the environmental context in which it is displayed. By the mid-1980s, Sternberg (1985c, 1988) had proposed his triarchic theory of intelligence. This theory had three separate aspects: (1) the mechanics of intelligence; (2) the continuum of experience; and (3) the fit of an individual to the environment.

The mechanics of intelligence refer to the actual cognitive processes responsible for intelligent behavior. Three types of information processes (some of which I discussed earlier) were delineated by Sternberg. First, there were performance components (e.g., Sternberg, 1977a, 1977b; Sternberg & Gardner, 1983). These were cognitive processes that operated upon data and produced solutions to problems. Second, there were metacomponents (e.g., Sternberg, 1980). These were cognitive processes responsible for performance component selection, organization, and strategic processing (e.g., speed/accuracy tradeoffs, self-terminating versus exhaustive processing, etc.). Third, there were knowledge acquisition components (Sternberg & Powell, 1983). These were components specifically involved with the acquisition of new information. They were highlighted in studies of the acquisition of vocabulary from surrounding context. Together, the three types of components (or information processes) were responsible for producing intelligent behavior in any particular context.

The continuum of experience refers to the fact that learning progresses from problems that are novel, to problems that are uncommon, to problems that are common, to problems that are routine. Speed and error rates for information processing components will correlate with intelligence, but only at two points along this continuum of experience: when problems are novel (Gardner & Sternberg, 1994) and when problems are so routine that they involve automatic processing (see Schneider & Shiffrin [1977] and Shiffrin & Schneider [1977] for a discussion of automaticity). According to the triarchic theory, when problems are novel, more intelligent individuals will display faster and less error prone componential processing. This is the typical finding on tests of fluid intelligence. With regard to automaticity (i.e., processing that is so routine that it consumes little to no attentional resources), more intelligent individuals are able to automate their componential processing more quickly than less intelligent individuals. Thus, with routine problems, more intelligent individuals are more likely to be fast and error free than less intelligent individuals, because more intelligent individuals are more likely to be relying on automatic processing.

The fit of the individual to the environment expands the triarchic theory, using the biological notions of adaptation and natural selection. According to Sternberg, more intelligent individuals fit into their environments better than less intelligent individuals. This optimum fit can be accomplished in one of three ways: adaptation, selection, or shaping. In adaptation, the individual changes to better fit their environment. Thus, a student who arrives at graduate school with poor study skills may adapt to his new environment by improving his study skills. In selection, the individual may choose a new environment, if they are unable to fit into the current environment. Thus, a student who is unhappy in graduate school may choose a new occupation that doesn’t require graduate study. In shaping, the individual attempts to change the environment to better match her or his abilities. Thus, a graduate student with poor verbal skills may try to convince professors that courses need to include a quantitative, statistical component. It just so happens that such courses would better align with this student’s current skill set.

As you can see, the triarchic theory incorporates Sternberg’s cognitive theory (the componential analysis of skills), but moves beyond it by delineating environmental variables that also influence performance. The theory is quite broad. However, like most broad theories, predictions in particular situations are not always well specified. Sternberg’s more recent interests have included such wide ranging areas as practical intelligence (Wagner & Sternberg, 1985), creativity (Sternberg & Lubart, 1991, 1992), and wisdom (Sternberg, 1998).

Howard Gardner (1983, 1999) has proposed a “theory of multiple intelligences” that also strongly relies on the context in which cognitive processes are displayed. Gardner has long had an interest in the arts, and felt that the development of “artistic” skills was downplayed in theories of cognitive and developmental psychology. He later worked in the area of neuropsychology, and was struck by how brain injury could impair one skill while leaving others untouched. These experiences led him to propose (Gardner, 1983) seven relatively independent intelligences. The seven intelligences were: (1) linguistic intelligence, or the “sensitivity to spoken and written language” (Gardner, 1999, p. 41); (2) logical-mathematical intelligence, or the “capacity to analyze problems logically, carry out mathematical operations, and investigate issues scientifically” (Gardner, 1999, p. 42); (3) musical intelligence, or “skill in the performance, composition, and appreciation of musical patterns” (Gardner, 1999, p. 42); (4) bodily-kinesthetic intelligence, or the “potential of using one’s whole body or parts of the body (like the hand or the mouth) to solve problems or fashion products” (Gardner, 1999, p. 42); (5) spatial intelligence, or “the potential to recognize and manipulate the patterns of wide space… as well as the patterns of more confined areas” (Gardner, 1999, p. 42); (6) interpersonal intelligence, or “a person’s capacity to understand the intentions, motivations, and desires of other people and, consequently, to work effectively with others” (Gardner, 1999, p. 43); and (7) intrapersonal intelligence, or “the capacity to understand oneself, to have an effective working model of oneself—including one’s own desires, fears, and capacities—and to use such information effectively in regulating one’s own life” (Gardner, 1999, p. 43). Over time (Gardner, 1999), Gardner has added three new intelligences to the list: (1) naturalistic intelligence, or the “ability to discern patterns in nature” (Sattler, 2008, p. 234); (2) spiritual intelligence, or a “concern with cosmic or existential issues and recognition of the spiritual as an ultimate state of being” (Sattler, 2008, p. 234); and (3) existential intelligence, or a “concern with ultimate issues” (Sattler, 2008, p. 234).

In determining what does or does not constitute an intelligence, Gardner (1999) has relied upon eight sources of evidence derived from four different disciplinary backgrounds. From the biological sciences come the criteria of (1) the potential of isolation (or dissociation) by brain damage, and (2) an evolutionary history and evolutionary plausibility (does the intelligence serve a role in the evolution of our species?). From logical analysis come the criteria of (3) an identifiable core operation or set of operations (i.e., core cognitive operations), and (4) susceptibility to encoding in a symbol system (is the intelligence associated with its own symbol system?). From developmental psychology come the criteria of (5) a distinct developmental history, along with a definable set of expert “end-state” performances (i.e., ways of developing one’s intelligence to serve a particular role in society), and (6) the existence of idiot savants, prodigies, and other exceptional people. From traditional psychology come the criteria of (7) support from experimental psychological tasks (i.e., is there evidence of independence of operations or interference among operations?), and (8) support from psychometric findings. The more evidence that can be found, and the more sources of evidence that can be adduced, the more likely a set of skills will be termed an “intelligence.” As you can see from the list, context plays a major role in Gardner’s theory, as intelligences may serve different functions in different cultural contexts and at different points in both the individual’s developmental history, and the species’ evolutionary history.

Gardner’s theory of multiple intelligences has not been well received by traditional psychologists (see Brody, 1996, for a critical review). However, it is extremely popular with educators (H. Gardner, personal communication, April 9, 2008), who embrace the idea that we can all be talented, though perhaps in different ways. The theory of multiple intelligences can be criticized on two grounds. First, it ignores the so-called positive manifold—the fact that traditional tests of cognitive abilities are positively intercorrelated. This should imply that at least linguistic and logical-mathematical intelligences would be correlated rather than independent. Second, it expands the definition of intelligence beyond its original meaning. When Binet was developing his original intelligence tests, intelligence was defined as the ability to do well in school (or school-like environments). Gardner’s (1999) definition of intelligence is “a biopsychological potential to process information that can be activated in a cultural setting to solve problems or create products that are of value in a culture” (pp. 33–34). This is certainly a much broader definition of intelligence. Others have also expanded the use of the term intelligence, as in “practical intelligence” (Wagner & Sternberg, 1985) and “emotional intelligence” (Goleman, 1997; Salovey & Mayer, 1989–1990). But if intelligence can mean almost anything, does it mean anything at all? As Gardner has extended the meanings of intelligence, he also has diluted its meaning to a degree. In the end, the success of the theory of multiple intelligences will depend upon two things: (1) will new research support the distinctions it makes? and (2) will it inspire any new, successful applications (such as methods of instruction)?

While Piaget’s stage theory (Piaget, 1954, 1970, 1977; Piaget & Inhelder, 1969; see also Flavell, 1985 for a briefer review) is essentially a theory of child development, it is also a theory of the development of intelligence. It is both cognitive (it discusses thought processes) and contextual (it emphasizes the role of environment in stimulating cognitive growth). The theory is based in principles that Piaget derived from biology: namely, adaptation, assimilation, and accommodation. The child assimilates information from the environment using current ways of thinking about the world, which Piaget called schemes. As new mental structures develop, the child will eventually find a mismatch between environmental stimuli and current schemes (i.e., cognitive disequilibrium). This can cause the child to accommodate to the stimuli by creating new, more advanced schemes that are a better match to the environment.

Piaget’s model of the development of intelligence was qualitative, in that older children not only knew more than younger children, they knew differently than younger children. Children’s thought progressed through four basic stages. During the sensorimotor stage (approximately ages birth to two years), children understand their environment through sensation and motor operations. By the end of the sensorimotor period, children understand that objects continue to exist when out of sight (object permanence) and they can remember and imagine ideas and experiences (mental representation). This second achievement allows the development of language.

During the preoperational stage (approximately ages two to six years), children use symbolic thinking (including language) to understand the world. Imagination flourishes during this period. The child begins the preoperational stage with an egocentric point of view (they think others will see and experience things as they do); however, over the course of this period, children gradually decenter (they become able to take the point of view of others). Children possess a quasi-logic at this stage: they can reason about things, but only in a qualitative way.

During the concrete operational stage (approximately age seven years to eleven years), children learn to apply quantitative, logical operations to specific experiences or perceptions. During this stage children acquire the concepts of conservation, number, classification, and seriation. Children also begin to appreciate that many questions have specific, correct answers that can be arrived at through measurement and logical reasoning.

During the formal operational stage (approximately twelve years onward), the adolescent or adult begins to be able to think about abstractions and hypothetical ideas. It is no longer necessary for individuals at this stage to manipulate objects to arrive at the solution to a problem. The capacities developed during the formal operational stage make subjects such as ethics, politics, and the social sciences much more interesting to students. Piaget hypothesized that not all individuals achieve full, formal operational thought.

While Piaget’s theory does not address psychometric g, it certainly addresses the development of the cognitive mechanisms that underlie much of formal reasoning and problem solving. It views the individual as someone who interacts with the environment, and who strives to have his or her thoughts (or schemes) in line with experience. It makes the strong claim that all individuals pass through the stages in the same order, with no one “skipping” any stages en route to higher stages. While many have criticized the exact ages given by Piaget for the attainment of specific skills, few have criticized his theory’s ability to describe children’s gradual acquisition of complex thought.

Cognitive-contextual theories attempt to embed cognitive processing in an environmental or cultural milieu. All of these theories go beyond the type of test performance studied by psychometricians, and some make strong claims (e.g., there are potentially 10 intelligences, or children pass through cognitive stages in a fixed order). All of the theories cited above make use of extensive evidence from many domains.

However, in most cases the link to the environment provides support for the theories without yielding testable claims. Sternberg’s three types of fit (i.e., adaptation, selection, and shaping) between individual and environment are mostly supported by anecdotal reports. Gardner’s theory of multiple intelligences makes use of neuropsychological and cultural evidence, but rarely makes predictions beyond the data adduced in support of the theory. Piaget’s theory is a wonderful description, but it fails to predict who will reach a particular stage at a particular time. Most of these theories seem to have educational implications, but, again, the theory developers have mostly distanced themselves from curricular development based on their theories. The cognitive-contextual theories are, in essence, a promissory note. Only the future can tell us if they are able to live up to their grand claims.

Human intelligence clearly exists within the brain. Recent interest in cognitive neuropsychology has led to hopes that we will discover the particular biological and physiological mechanisms responsible for intelligence. Below, I review some of the biological theories that have been put forward; however, one needs to remember that the brain is an extremely complex organ, and our knowledge of its operation is still in its infancy.

Numerous studies have been conducted that have assessed the correlation between brain size and intelligence. In the early studies, brain size was measured via a more accessible surrogate such as head circumference (sometimes termed “perimeter”). In a review of 35 earlier studies comprising 56,793 individuals, Vernon, Wickett, Bazana, and Stelmack (2000) reported an n-weighted mean correlation of 0.191 between head size and intelligence. This correlation was not corrected for attenuation due to unreliability of measurement or restriction of range, so the theoretical relationship is likely higher.

Later studies substituted more accurate measures of actual brain volume (i.e., derived from CT or MRI scans) for head size measurements. The result was that the relationship between brain size and intelligence increased. In a review of recent brain volume and intelligence studies, Vernon, Wickett, Bazana, and Stelmack (2000) reported an n-weighted mean correlation of 0.381 based on 432 normal adults from 11 samples. Gignac, Vernon, and Wickett (2003) also report a review of 14 recent studies and find an n-weighted mean correlation of 0.37 between brain size and intelligence (note: these two reviews present overlapping studies). Again, these correlations are uncorrected for unreliability or restriction of range.
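
For readers unfamiliar with these meta-analytic conventions, the following Python sketch (with invented study values, reliabilities, and standard deviations, not those reviewed by Vernon and colleagues) shows how an n-weighted mean correlation is computed, and how the standard Spearman correction for attenuation and a common (Thorndike Case 2) correction for restriction of range would adjust an observed correlation upward.

    import math

    # n-weighted mean correlation across studies; (r, n) pairs are invented for illustration
    studies = [(0.40, 60), (0.35, 45), (0.30, 25), (0.42, 80)]
    n_weighted_r = sum(r * n for r, n in studies) / sum(n for _, n in studies)
    print(f"n-weighted mean r = {n_weighted_r:.3f}")

    def correct_for_attenuation(r_xy, rel_x, rel_y):
        """Spearman's correction: estimated correlation between true scores."""
        return r_xy / math.sqrt(rel_x * rel_y)

    def correct_for_range_restriction(r_restricted, sd_unrestricted, sd_restricted):
        """Thorndike's Case 2 correction for direct restriction of range."""
        u = sd_unrestricted / sd_restricted
        return (r_restricted * u) / math.sqrt(1 - r_restricted**2 + (r_restricted * u)**2)

    # Applying the corrections to the reported head-size correlation of 0.191,
    # with hypothetical reliabilities and standard deviations:
    print(f"corrected for attenuation: {correct_for_attenuation(0.191, 0.90, 0.80):.3f}")
    print(f"corrected for range restriction: {correct_for_range_restriction(0.191, 15.0, 12.0):.3f}")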

Clearly, brain size and intelligence are related. The size of the relationship increases when a more valid measure of brain size is substituted for a less valid one. Exactly how brain size determines intelligence is unclear from these studies. However, recent work suggests that genetics plays a role in the transmission of this relationship.

A study by Wickett, Vernon, and Lee (2000) examined the relationship between brain size and intelligence in 32 pairs of male, adult siblings. They found a within-family correlation of 0.229 between brain volume and g, and a between-family correlation of 0.366 between brain volume and g. This demonstrates that the relationship between brain size and intelligence exists within families, as well as between families. Such a within-family relationship is necessary to establish the influence of genetic factors. Correlations that exist only between families may be the result of environmental factors such as nutrition and socioeconomic status. The findings of Wickett, Vernon, and Lee (2000) are consistent with the hypothesis that both genetic and environmental factors are at play in the relationship between brain size and intelligence.
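
The within-family/between-family logic can be illustrated with a brief Python sketch using simulated sibling pairs (these are not the actual Wickett, Vernon, and Lee data): the within-family correlation relates sibling differences on the two variables, while the between-family correlation relates family means.

    import numpy as np

    # Simulated sibling pairs (columns = the two brothers in each pair); values are invented.
    rng = np.random.default_rng(1)
    n_pairs = 32
    family_effect = rng.normal(0.0, 1.0, n_pairs)[:, None]          # influence shared by a family
    brain = family_effect + rng.normal(0.0, 1.0, (n_pairs, 2))      # stand-in for brain volume
    g = 0.3 * brain + rng.normal(0.0, 1.0, (n_pairs, 2))            # stand-in for g

    # Within-family: do sibling differences in brain volume track sibling differences in g?
    within_r = np.corrcoef(brain[:, 0] - brain[:, 1], g[:, 0] - g[:, 1])[0, 1]
    # Between-family: do family averages in brain volume track family averages in g?
    between_r = np.corrcoef(brain.mean(axis=1), g.mean(axis=1))[0, 1]
    print(f"within-family r = {within_r:.2f}, between-family r = {between_r:.2f}")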

Hendrickson and Hendrickson (1980; A. E. Hendrickson, 1982; see Brody, 1996, for a review) presented a theoretical model that linked intelligence to electroencephalogram (EEG) complexity. Basically, the model posits that more intelligent individuals have fewer errors in synaptic transmission, and that this will lead to a more complex EEG wave pattern. EEG wave pattern complexity is determined by the length of a “string” superimposed over the wave form; more complex EEGs lead to longer string lengths.

Blinkhorn and Hendrickson (1982) found correlations of approximately 0.45 between EEG string length (from an auditory listening task) and the Advanced Progressive Matrices (uncorrected for restriction of range) in a sample of 33 psychology undergraduates. The actual size of the correlation depended upon details of which portion of the EEG was measured. D. E. Hendrickson (1982) found a correlation of 0.72 between string length and WAIS total IQ in a sample of 219 school-aged children (mean age = 15.6 years).

While the neural transmission errors theory and EEG string length provide interesting insights into intelligence, this research seems to be just beginning. Not all attempts to replicate these findings have succeeded (e.g., Stough, Nettelbeck, and Cooper, 1990). More research is clearly needed to sort out the various methodological issues (Brody, 1992), and to further elucidate the relationships between neural transmission errors and the actual EEG recordings provided by participants.

Haier (2003; also reviewed in Vernon, Wickett, Bazana, & Stelmack, 2000) has presented evidence that the brain’s glucose metabolic rate (GMR), as measured through positron emission tomography (PET), is negatively correlated with intelligence. For instance, Haier et al. (1988) found significant negative correlations between GMR and performance on Raven’s Advanced Progressive Matrices in a group of eight normal adults. These negative correlations between GMR and intelligence-related measures have been replicated by other investigators (Parks et al., 1988; Boivin et al., 1992).

Haier (2003) interprets these findings as reflecting brain efficiency: more intelligent individuals require less neuronal activity (and therefore less glucose metabolism) to solve intellectual problems than do less intelligent individuals. Haier et al. (1992) explored this theory further in a learning task. Eight subjects underwent a PET scan while playing a computer game (i.e., Tetris). At this point, the subjects were novice players. After four to eight weeks of practice, subjects underwent a second PET scan while playing the computer game. Results revealed that improvements in game play were related to decreases in GMR. Haier et al. (1992) conclude that practice results in subjects learning what areas of the brain not to use, and this results in a decrease in GMR.

Haier’s brain efficiency model of intelligence relates brain metabolism to performance; however, it must be noted that most of the studies involve very small sample sizes (primarily due to the cost of conducting PET scanning research). While the findings have been replicated, it would be desirable to see the results replicated in a large, representative sample of normal adults.

Jensen (1982, pp. 127–131) proposed a “neural oscillation model” to account for his finding that reaction time (RT) in a simple/choice reaction time task increases linearly with the bits of information in the stimulus display (see earlier in this chapter). Jensen hypothesized that the nervous system used a hierarchical binary network to process information in the simple/choice reaction time task. To this, he coupled the assumption that each node in the network was subject to an oscillatory cycle between active periods (when the node could fire and thereby transmit information) and refractory periods (when the node could not fire). Brighter people were assumed to have shorter neural oscillation cycles and, therefore, were able to return to a “firing” state more quickly than less bright people. Jensen demonstrated that such a model could account for his basic findings: (1) a linear increase in RT with increasing bits of information; and (2) an increasing RT standard deviation with increasing bits of information.
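
A small Python simulation (with made-up parameter values, not Jensen’s model or data) illustrates the empirical pattern his model was constructed to explain: mean RT rising linearly with bits of information, accompanied by rising RT variability.

    import numpy as np

    rng = np.random.default_rng(2)
    a, b = 0.30, 0.04                        # hypothetical intercept (s) and slope (s per bit)
    alternatives = np.array([1, 2, 4, 8])    # number of response alternatives
    bits = np.log2(alternatives)

    mean_rts = []
    for h in bits:
        trials = a + b * h + rng.normal(0.0, 0.01 * (1 + h), 200)   # trial noise grows with bits
        mean_rts.append(trials.mean())
        print(f"{h:.0f} bits: mean RT = {trials.mean():.3f} s, SD = {trials.std():.3f} s")

    slope, intercept = np.polyfit(bits, mean_rts, 1)
    print(f"fitted slope = {slope * 1000:.1f} ms per bit")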

Intriguing as Jensen’s neural oscillation theory may be, it has no direct support in neurophysiological data. The theory was developed based on a mathematical model (the binomial expansion), and is a distant extrapolation from the experimental data. At best, it may be viewed as a physiological metaphor of performance on the task.

The various findings described above all provide evidence on relationships between aspects of the brain and intelligence or intellectual functioning. Not surprisingly, the findings are only loosely tied to the theories of intelligence proposed by the various researchers. Cognitive neuroscience is a relatively young field, and the causal links between brain structure and function, on the one hand, and intellectual performance, on the other, have not been fully elucidated. However, as the field progresses, the causal links are likely to become more clear, and I would expect to see progress on biological theories of intelligence.

I should caution that we live in a time when reductionism is often equated with science. It is difficult not to be impressed when one sees a multimillion-dollar fMRI machine. However, our understanding of intelligence is likely to progress on many levels. There is no one level of analysis that possesses a golden key to understanding.

In this chapter, I have reviewed four classes of theories of intelligence: psychometric, cognitive, cognitive-contextual, and biological. Psychometric theories of intelligence are relatively mature. The picture of structural relationships among abilities is best represented as a hierarchy, with g at the top, group factors in the middle levels, and relatively specific factors at the bottom. The choice between the British hierarchical model and Carroll’s (1993, 1997) three-stratum theory is primarily a matter of personal preference.

Cognitive theories of intelligence have attempted to elucidate the cognitive processing underlying intelligent behavior. To the extent that consensus exists, it is that intelligent behavior is related to broad processes implicated in a large number of tasks. Examples of these processes would be working memory, attention, rapidity of processes in STM, resistance to interference in memory, general reasoning components, and strategy selection and execution (i.e., metacomponents). Speed of information processing has also been shown to be important, though its importance may be highlighted only in certain special situations (e.g., during aging, and with novel problems). There is no single agreed-upon cognitive theory of intelligence as there is in the psychometric domain.

Cognitive-contextual theories expand cognitive theories by embedding them within a context, whether that context is environmental or cultural. These theories stress that cognitive processes can only be developed and valued within an environment that selects for and supports them, or within a culture that values them. In this way, intelligence is an interaction between person and environment, with the environment shaping the individual’s cognitive processes, and/or the individual molding his or her environment or culture. Although these theories emphasize the role of environment (broadly defined), they have not been very specific in terms of making forward-looking predictions. This can be seen as a weakness of this class of theory.

Biological theories attempt to explain intelligence in terms of brain structure and function. Numerous investigators have pointed out brain-related correlates of intelligence, but difficulties remain. Some of the relationships have been difficult to replicate. Furthermore, the theoretical explanations for the relationships are often ad hoc. Given the complexity of the brain, and the relative newness of the field of cognitive neuroscience, this is not surprising. Hopefully, as our knowledge of the brain expands, theories can be developed that explain several of the correlational relationships within a single framework.

The ability of intelligence tests to predict success in school and school-like environments is one of the great achievements of psychology. A great deal of time and effort has gone into developing the intelligence tests that educational institutions rely on. Psychology needs to spend at least as much time and effort to develop theories of intelligence that can accommodate the numerous influences of cognitive processes, biological processes, and environmental factors. Many parts of the picture are already in place, but a grand theory of human intelligence is still some ways off in the future.

In this last section, I will pose some questions for future research.

1. Can cognitive theories account for the relationships displayed by psychometric theories? Is it possible to show that tests which group together in psychometric theories do so because they share cognitive processes that are sources of individual differences in performance?

2. Can cognitive-contextual theories make forward-looking predictions concerning which cognitive processes will be selected or valued in a particular context? It would certainly help if this class of theories could be made specific enough to make concrete predictions, rather than simply describing current situations after the fact.

3. Can cognitive-contextual theories specify a mechanism by which cognitive processes are modified via context? This may simply be natural selection, but demonstrating how cognitive processes are modified by the environment would seem to be an important aspect of any cognitive-contextual theory.

4. Can the numerous brain/intelligence correlations be unified under a single biological theory of intelligence? Parsimony would dictate that we will eventually need a single biological theory of intelligence that is more specific than a statement such as “more intelligent individuals have more neurons or faster neurons.”

5. Can biological correlates of intelligence be tied to cognitive information processes? Although the brain may be the physical substrate of intelligence, this intelligence must be manifest through cognitive information processes. Future theories will need to tie these two levels of analysis together.

Progress on any of these questions would constitute substantial progress toward a more comprehensive theory of human intelligence.

Angell, J. R. (1908). The doctrine of formal discipline in light of the principles of general psychology. Educational Review, 36, 1–14.

Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In K. W. Spence & J. T. Spence (Eds.), The psychology of learning and motivation: Advances in research and theory (Vol. 2). New York: Academic Press.

Atkinson, R. C., & Shiffrin, R. M. (1971). The control of short-term memory. Scientific American, 225, 82–90.

Baddeley, A. D. (1986). Working memory. Oxford, UK: Clarendon Press.

Baddeley, A. D., & Hitch, G. J. (1974). Working memory. In S. Dornic (Ed.), Attention and performance VI. Hillsdale, NJ: Erlbaum.

Binet, A., & Henri, V. (1896). La psychologie individuelle. L’Année Psychologique, 2, 411–465.

Binet, A., & Simon, T. (1905). Méthodes nouvelles pour le diagnostic du niveau intellectuel des anormaux. L’Année Psychologique, 11, 191–244.

Binet, A., & Simon, T. (1908). Le développement de l’intelligence chez les enfants. L’Année Psychologique, 14, 1–94.

Binet, A. (1911). Nouvelles recherches sur la mesure du niveau intellectuel chez les enfants d’école. L’Année Psychologique, 17, 145–201.

Blinkhorn, S. F., & Hendrickson, D. E. (1982). Average evoked responses and psychometric intelligence. Nature, 295, 596–597.

Boivin, M. J., Giordani, B., Berent, S., Amato, D. A., Koeppe, R. A., Buchtel, H. A., Foster, N. L., & Kuhl, D. E. (1992). Verbal fluency and positron emission tomographic mapping of regional cerebral glucose metabolism. Cortex, 28, 231–239.

Brody, N. (1992). Intelligence (2nd ed.). New York: Academic Press.

Brody, E. B., & Brody, N. (1976). Intelligence: Nature, determinants, and consequences. New York: Academic Press.

Brown, W., & Thomson, G. (1921). The essentials of mental measurement. Cambridge, UK: Cambridge University Press.

Burt, C. (1940). The factors of the mind. London: University of London Press.

Carroll, J. B. (1982). The measurement of intelligence. In R. J. Sternberg (Ed.), Handbook of human intelligence. Cambridge, UK: Cambridge University Press.

Carroll, J. B. (1993). Human cognitive abilities: A survey of factor analytic studies. Cambridge, UK: Cambridge University Press.

Carroll, J. B. (1997). The three-stratum theory of cognitive abilities. In D. P. Flanagan, J. L. Genshaft, & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues. New York: Guilford.

Cattell, J. McK. (1890). Mental tests and measurements. Mind, 15, 373–381.

Cattell, J. McK., & Farrand, L. (1896). Physical and mental measurements of the students of Columbia University. Psychological Review, 6, 618–648.

Cattell, R. B. (1941). Some theoretical issues in adult intelligence testing. Psychological Bulletin, 38, 592.

Cattell, R. B. (1963). Theory of fluid and crystallized intelligence: A critical experiment. Journal of Educational Psychology, 54, 1–22.

Cattell, R. B. (1971). Abilities: Their structure, growth and action. Boston: Houghton-Mifflin.

Cronbach, L. J. (1957). The two disciplines of scientific psychology. American Psychologist, 12, 671–684.

Cronbach, L. J. (1970). Essentials of psychological testing (3rd ed.). New York: Harper & Row.

Darwin, C. R. (1859). On the origin of species by means of natural selection, or the preservation of favoured races in the struggle for life. London: John Murray.

Deary, I. J. (2003). Reaction time and psychometric intelligence: Jensen’s contributions. In H. Nyborg (Ed.), The scientific study of general intelligence: Tribute to Arthur R. Jensen. Amsterdam: Pergamon.

Eysenck, H. J. (1967). Intelligence assessment: A theoretical and experimental approach. British Journal of Educational Psychology, 37, 81–97.

Flavell, J. H. (1985). Cognitive development (2nd ed.). Englewood Cliffs, NJ: Prentice Hall.

Galton, F. (1869). Hereditary genius: An inquiry into its laws and its development. London: Macmillan.

Galton, F. (1874). English men of science: Their nature and nurture. London: Macmillan.

Galton, F. (1883). Inquiries into human faculty and its development. London: Macmillan.

Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York: Basic Books.

Gardner, H. (1999). Intelligence reframed: Multiple intelligences for the 21st century. New York: Basic Books.

Gardner, H. (2008, April 9). Personal communication.

Gardner, M. K., & Clark, E. (1992). The psychometric perspective on intellectual development in childhood and adolescence. In R. J. Sternberg & C. A. Berg (Eds.), Intellectual development. Cambridge, UK: Cambridge University Press.

Gardner, M. K., & Sternberg, R. J. (1994). Novelty and intelligence. In R. J. Sternberg & R. K. Wagner (Eds.), Mind in context: Interactionist perspectives on human intelligence. Cambridge, UK: Cambridge University Press.

Gignac, G., Vernon, P. A., & Wickett, J. C. (2003). Factors influencing the relationship between brain size and intelligence. In H. Nyborg (Ed.), The scientific study of general intelligence: Tribute to Arthur R. Jensen. Amsterdam: Pergamon.

Goleman, D. (1997). Emotional intelligence. New York: Bantam Books.

Gorsuch, R. L. (1983). Factor analysis (2nd ed.). Hillsdale, NJ: Erlbaum.

Grudnik, J. L., & Kranzler, J. H. (2001). Meta-analysis of the relationship between intelligence and inspection time. Intelligence, 29, 525–537.

Guilford, J. P. (1964). Zero intercorrelations among tests of intellectual abilities. Psychological Bulletin, 61, 401–404.

Guilford, J. P. (1967). The nature of human intelligence. New York: McGraw-Hill.

Guilford, J. P. (1977). Way beyond the IQ: Guide to improving intelligence and creativity. Buffalo: Creative Education Foundation.

Guilford, J. P. (1982). Cognitive psychology’s ambiguities: Some suggested remedies. Psychological Review, 89, 48–59.

Guilford, J. P., & Hoepfner, R. (1971). The analysis of intelligence. New York: McGraw-Hill.

Hagen, J. (2007). The label mental retardation involves more than IQ scores: A commentary on Kanaya and Ceci (2007). Child Development Perspectives, 1, 60–61.

Haier, R. J. (2003). Positron emission tomography studies of intelligence: From psychometrics to neurobiology. In H. Nyborg (Ed.), The scientific study of general intelligence: Tribute to Arthur R. Jensen. Amsterdam: Pergamon.

Haier, R. J., Siegel, B. V., Nuechterlein, K. H., Hazlett, E., Wu, C. J., Peak, J., Browning, H. L., & Buchsbaum, M. S. (1988). Cortical glucose metabolic rate correlates of abstract reasoning and attention studied with positron emission tomography. Intelligence, 12, 199–217.

Haier, R. J., Siegel, B. V., MacLachlan, A., Soderling, E., Lottenberg, S., & Buchsbaum, M. S. (1992). Regional glucose metabolic changes after learning a complex visuospatial/motor task: A PET study. Brain Research, 570, 134–143.

Hebb, D. O. (1942). The effect of early and late brain injury upon test scores, and the nature of normal adult intelligence. Proceedings of the American Philosophical Society, 85, 275–292.

Hemmelgarn, T. E., & Kehle, T. J. (1984). The relationship between reaction time and intelligence in children. School Psychology International, 5, 77–84.

Hendrickson, A. E. (1982). The biological basis of intelligence. Part I: Theory. In H. J. Eysenck (Ed.), A model for intelligence. Berlin: Springer-Verlag.

Hendrickson, D. E. (1982). The biological basis of intelligence. Part II: Measurement. In H. J. Eysenck (Ed.), A model for intelligence. Berlin: Springer-Verlag.

Hendrickson, D. E., & Hendrickson, A. E. (1980). The biological basis of individual differences in intelligence. Personality and Individual Differences, 1, 3–33.

Hick, W. E. (1952). On the rate of gain of information. Quarterly Journal of Experimental Psychology, 4, 11–26.

Hilgard, E. R., Atkinson, R. C., & Atkinson, R. L. (1971). Introduction to psychology (5th ed.). New York: Harcourt Brace Jovanovich.

Horn, J. L. (1968). Organization of abilities and the development of intelligence. Psychological Review, 75, 242–259.

Horn, J. L. (1985). Remodeling old models of intelligence: Gf–Gc theory. In B. B. Wolman (Ed.), Handbook of intelligence. New York: Wiley.

Horn, J. L., & Cattell, R. B. (1966). Refinement and test of the theory of fluid and crystallized intelligence. Journal of Educational Psychology, 57, 253–270.

Horn, J. L., & Cattell, R. B. (1967). Age differences in fluid and crystallized intelligence. Acta Psychologica, 26, 107–129.

Hunt, E. (1978). Mechanics of verbal ability. Psychological Review, 85, 109–130.

Hunt, E., Frost, N., & Lunneborg, C. (1973). Individual differences in cognition: A new approach to intelligence. In G. Bower (Ed.), The psychology of learning and motivation: Advances in research and theory (Vol. 7). New York: Academic Press.

Hunt, E., Lunneborg, C., & Lewis, J. (1975). What does it mean to be high verbal? Cognitive Psychology, 7, 194–227.

Hyman, R. (1953). Stimulus information as a determinant of reaction time. Journal of Experimental Psychology, 45, 188–196.

Jensen, A. R. (1982). Reaction time and psychometric g. In H. J. Eysenck (Ed.), A model for intelligence. Berlin: Springer-Verlag.

Jensen, A. R. (1987). Individual differences in the Hick paradigm. In P. A. Vernon (Ed.), Speed of information processing and intelligence. Norwood, NJ: Ablex.

Jensen, A. R. (1998). The g factor: The science of mental ability. New York: Praeger.

Jensen, A. R., & Munro, E. (1979). Reaction time, movement time, and intelligence. Intelligence, 3, 121–126.

Kranzler, J. H., & Jensen, A. R. (1989). Inspection time and intelligence: A meta-analysis. Intelligence, 13, 329–347.

Kyllonen, P. C., & Christal, R. E. (1990). Reasoning ability is (little more than) working-memory capacity?! Intelligence, 14, 389–433.

Lally, M., & Nettelbeck, T. (1980). Intelligence, inspection time, and response strategy. American Journal of Mental Deficiency, 84, 553–560.