A Neuropsychological Framework for Diagnosing Intellectual Disability

Noah K. Kaufman, Ph.D., FACPN, ABPdN

Diplomate American Board of Professional Neuropsychology

Diplomate American Board of Pediatric Neuropsychology

Sara Holland, MA, MA, R. Psych

Lisa Bolshin, MA

Author Note

Noah K. Kaufman is Clinical Assistant Professor in the Department of Psychiatry at Texas Tech University Health Sciences Center, Paul L. Foster School of Medicine, El Paso, TX, and runs the Center for Neuropsychological Studies in Las Cruces, NM.

Correspondence concerning this article should be addressed to Dr. Noah K. Kaufman: noahk@zianet.com; www.noah-kaufman.com

Neuropsychology and the Diagnosis of Intellectual Disability

Neuropsychologists—more than any other mental health professional—are optimally trained to diagnose Intellectual Disability (ID). While it is possible for psychiatrists, neurologists, primary care physicians, master’s-level counselors, educational diagnosticians, garden-variety psychologists, or highly-skilled forensic psychologists to apply the ID diagnosis, there is compelling justification to conclude that these other professionals are at a tremendous disadvantage as assessors of ID, especially the way the ID definition is evolving.

To illustrate this evolution in mental disorder nomenclature, consider that no previous version of the Diagnostic and Statistical Manual of Mental Disorders (DSM) made reference to neuropsychology in its definitions of ID (previously called Mental Retardation [MR]; currently also termed Intellectual Developmental Disorder). Now, however, the fifth edition of the DSM (DSM-5; American Psychiatric Association [APA], 2013) does. It states, “Individual cognitive profiles based on neuropsychological testing are more useful for understanding intellectual abilities than a single IQ score. Such testing may identify areas of relative strengths and weaknesses…” (APA, 2013, p. 37). Similarly, previous editions of the DSM did not utilize the prefix neuro, whereas the newest edition includes two entire categories of diagnoses with the neuro prefix: Neurodevelopmental Disorders (APA, 2013, pp. 31-86) and Neurocognitive Disorders (APA, 2013, pp. 591-644). Approximately 15% of the DSM-5 is now explicitly devoted to mental disorders that are openly regarded as neurogenic, underscoring the role of the central nervous system in these diagnoses. It follows, therefore, that neuropsychologists are ideally prepared to navigate these neurodiagnostic waters.


Neuropsychological Methods are Ideal for Diagnosing Intellectual Disability

The first diagnostic prong for a DSM-5 diagnosis of ID includes “Deficits in intellectual functions, such as reasoning, problem solving, planning, abstract thinking, judgment, academic learning, and learning from experience, confirmed by both clinical assessment and individualized, standardized intelligence testing” (APA, 2013, p. 33). Within this first criterion are seven domains of functioning, all of which can be operationally defined as scores on tests found in the neuropsychologist’s toolbox. For example, we can directly measure “academic learning” using individually-administered tests of academic achievement, of which there are many to choose from. Similarly, deficits in “judgment” can be demonstrated through performances on neuropsychological tests specifically designed to characterize the nuances of the judgment construct. For example, consider the Wisconsin Card Sorting Test (WCST), the Iowa Gambling Task (IGT), the Category Test, the Word Context Test, or other similar tests frequently used by neuropsychologists. By applying these methods to characterize “judgment,” we tremendously elevate our awareness of the examinee’s underlying abilities. The same can be said about “abstract thinking,” “planning,” “problem solving,” and so forth because neuropsychologists routinely use tests of these mental faculties, each of which is built into the newest ID definition.

Now consider the second diagnostic prong for ID:

Deficits in adaptive functioning that result in failure to meet developmental and sociocultural standards for personal independence and social responsibility. Without ongoing support, the adaptive deficits limit functioning in one or more activities of daily life, such as communication, social participation, and independent living, across multiple environments, such as home, school, work, and community. (APA, 2013, p. 33; italics added)

To the extent that the territory of the first ID prong has been thoroughly charted—through the use of domain-appropriate tests of neuropsychological functioning (e.g., a Towers test used to measure the “planning” domain)—the neuropsychologist is in an advantageous position to formulate an opinion about the examinee’s capacity to communicate, to participate in society, and to live independently.

For example, imagine that the results of neuropsychological assessment reveal an examinee with deficits in communication skill: a sub-par Verbal IQ; a paucity of verbal fluency; dysarthria; and unusually low scores on tests of auditory-verbal encoding and retrieval. The neuropsychologist can rely upon this objective information as a point of reference when considering family- or teacher-ratings of examinee adaptive (or maladaptive) behavior, the traditional way of sizing up adaptive functioning. By taking this approach to measurement, the neuropsychologist greatly enhances the validity of the adaptive functioning construct by incorporating different measurement methods to more thoroughly characterize examinees’ communication skills (see Campbell & Fiske, 1959, for technical justification of this measurement approach).

Correspondingly, the neuropsychologist who directly evaluates social cognition or theory of mind, using available neuropsychological instruments (e.g., NEPSY-II subtests [Korkman, Kirk, & Kemp, 2007]; Advanced Clinical Solutions Social Cognition subtests [Holdnack & Drozdick, 2009]), has a leg up on the clinician making diagnostic decisions about an examinee’s capacity for “social participation”. In doing this, the neuropsychologist is able to tighten the grasp on this important facet of examinees’ adaptive functioning.

Likewise, neuropsychologists routinely administer tests of independent living, the results of which can readily be applied toward an opinion about an individual’s “independent living” skills. Commonly utilized tests of independent living include the following: Street Survival Skills Questionnaire (Linkenhoker & McCarron, 1993); Independent Living Scales (Loeb, 2003); Texas Functional Living Scale (Cullum, Munro, Weiner, & Saine, 2009); and the Bill Payment subtest of the Neuropsychological Assessment Battery (NAB; White & Stern, 2003). So, as with “communication skills” and “social participation,” neuropsychologists can objectively approach the “independent living” domain of adaptive functioning with much more than ratings from others who may not even be capable of providing valid adaptive-functioning data (e.g., Gartstein, Bridgett, Dishion, & Kaufman [2009] discuss what can happen when depressed mothers provide ratings on the functioning of their children).

Borrowing from the 1970s television show, The Six Million Dollar Man, “We have the technology” to psychometrically measure the contours and surface area of the various constructs composing the ID diagnosis. Neuropsychological methods are—if not “faster”—certainly “stronger” and “better” than alternative, less-rigorous ways of diagnosing ID.

Neuropsychological Methods are Ideal for Characterizing Intellectual Disability Severity

The foregoing discussion casts light on neuropsychological means of arriving at a diagnosis of ID. Neuropsychologists frequently use the same methods to enhance clarity on the severity of the ID, which—in DSM-5—is based on adaptive functioning. For instance, consider the four ID severity levels in DSM-5, found in Table 1 on pages 34-36 (APA, 2013). Within this table, we quickly encounter wording like this:

“…abstract thinking, executive function (i.e., planning, strategizing, priority setting, and cognitive flexibility), and short-term memory, as well as functional use of academic skills (e.g., reading, money management), are impaired” (p. 34)

“…there may be difficulty in accurately perceiving peers’ social cues” (p. 34)

“…social judgment is immature for age…” (p. 34)

“…supports typically involve…money management” (p. 34)

“For adults, academic skill development is typically at an elementary level…” (p. 35)

“Spoken language…is much less complex than that of peers” (p. 35)

“…individuals may not perceive or interpret social cues accurately” (p. 35)

“Social judgment and decision-making abilities are limited…” (p. 35)

“The individual generally has little understanding of written language or of concepts involving numbers, quantity, time, and money” (p. 36)

“Spoken language is quite limited in terms of vocabulary and grammar” (p. 36).

Upon comparing the DSM-5 criteria for ID severity with the first two prongs of the ID diagnosis—both discussed above—it is not hard to see that the very same neuropsychological instruments used to measure “Deficits in intellectual functions” (prong #1) and “Deficits in adaptive functioning” (prong #2) also measure the constructs embedded within the ID severity domain.

For example, to enhance our understanding of “social judgment” or the ability to correctly perceive “peers’ social cues,” we might rely on the results from our testing of social cognition, theory of mind, or even facial memory. Meanwhile, “cognitive flexibility” can be characterized with the aforementioned WCST and/or IGT, with a color-word interference test, Trails B, or many other neuropsychological pruning knives. Next, neuropsychologists might use the Wechsler Individual Achievement Test or the Woodcock-Johnson Tests of Achievement to directly gauge “academic skills.” And “short-term memory” is routinely measured using instruments like the California Verbal Learning Test, the Rey Complex Figure Test, and so forth.

In a nutshell, the DSM-5 operational definition of ID is begging for neuropsychologists to use their methods (a) to substantiate an examinee’s ID diagnosis; (b) to scientifically question the application of an ID diagnosis in someone who may not convincingly meet diagnostic criteria; or even (c) to highlight the failure of the slapdash clinician who did not apply the ID diagnosis when its use is in fact justified. Put simply, the entire ID diagnostic process is enriched and made better by encouraging a neuropsychologist to weigh in on matters.

IQ Tests Differ

Thus far we have emphasized the role of neuropsychological tests and methods in the diagnosis of ID without addressing one of the main ingredients in the diagnostic process: the actual IQ test. Arguably, the most important thing to appreciate here is that not all IQ tests are the same.

For example, there are abbreviated-battery IQ tests, as compared to full-battery IQ tests. There are IQ tests administered by an individual examiner to an individual examinee, which differ from group-administered IQ tests. There are nonverbal IQ tests, some of which do not involve any words—from the examiner or from the examinee—and others of which involve only verbal directions from the examiner, with entirely nonverbal responses required of the examinee. Therefore, we sail off course the moment we assume all measures of “IQ” are equal, because they are not.

Problems with Abbreviated-Battery IQ Tests

To illustrate the difference between abbreviated- and full-battery IQ tests, let us first consider what Axelrod (2002) had to say, based on his comparison of the Wechsler Abbreviated Scale of Intelligence (WASI) to the (full-battery) Wechsler Adult Intelligence Scale-3rd Edition (WAIS-III):

The reader is encouraged to use caution when deciding to use any short form [of an IQ test], as they are usually less stable than are the full versions of the same measure. With regard to the WASI, the utility of estimating VIQ and PIQ with the WASI is difficult to endorse when less than one half of the clinical cases [in Axelrod’s study] obtained scores within 6 points of their WAIS-III VIQ and PIQ scores… Finally, if the clinician’s goal is to obtain an accurate estimation of general intellectual functioning, the current results suggest that the WASI should not be used in the assessment of individual patients. This conclusion is particularly noteworthy when the margin of error of 6 points captures less than two thirds of the sample for FSIQ. (p. 22)

Now consider what Strauss, Sherman, and Spreen (2006) articulated in their review of the WASI: “…the WASI should not take the place of a comprehensive examination. Rather, its use should be reserved for screening, for research projects, when a global estimate of intelligence is needed, or when time constraints are an issue…” (pp. 282-283). Finally, built into the DSM-5 (APA, 2013) definition of ID is the following admonition: “Invalid scores may result from the use of brief intelligence screening tests or group tests…” (p. 37).

The respective messages conveyed by Axelrod (2002), Strauss et al. (2006), and the authors of the DSM-5 (APA, 2013) are clear and easy to understand: brief, abbreviated-battery IQ tests like the WASI are not equivalent to Wechsler (or other) full-battery IQ tests. If examiners refer to abbreviated-battery IQ scores as “IQ” scores—which happens all the time in clinical, educational, and forensic practice—concern arises about whether the “IQ” score accurately represents what IQ scores were, at least according to Wechsler, originally meant to convey: “Wechsler (1944) defined intelligence as the ‘capacity of the individual to act purposely, to think rationally, and to deal effectively with his environment’” (Wechsler Adult Intelligence Scale [WAIS-IV] Manual, 2008, p. 3).

To further illustrate why abbreviated-battery IQ tests are not equal to full-battery IQ tests, it is helpful to consider their respective psychometric features. Namely, the standard errors of measurement (SEMs) for abbreviated-battery IQ tests are considerably larger than those for full-battery IQ tests, which is important because we use SEMs to create confidence intervals around our IQ scores. The larger the SEM, the wider the confidence interval around the IQ score, which puts examiners in the undesirable position of having less confidence in their findings.

Accordingly, an estimate of one’s IQ, accompanied by roomy confidence intervals—say, from 74 to 92—is like being 95% confident that you will have a pleasant day or get hit by a truck while crossing the street. Or, to use a weather analogy, it is like being 95% confident that the temperature on your day off will be somewhere between 12 degrees and 63 degrees Fahrenheit. With large SEMs, the IQ test score is not all that helpful, which defeats the purpose of using such tests in the first place, a fundamental but important topic nicely discussed by Dahlstrom (1993): “[The] fallibility in the judgments made by humans about fellow humans is one of the primary reasons that psychological tests have been developed and applied in ever-increasing numbers over the past century” (p. 393).
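The arithmetic behind these roomy intervals is simple. As a minimal sketch (the SEM values below are hypothetical, chosen only to show how a larger SEM widens the interval; published manuals report their own SEMs and typically center intervals on the estimated true score rather than the obtained score):

```python
def ci95(observed_score: float, sem: float) -> tuple[float, float]:
    """Simple 95% confidence interval around an observed standard score:
    observed_score +/- 1.96 * SEM."""
    margin = 1.96 * sem
    return (observed_score - margin, observed_score + margin)

# Hypothetical SEMs for illustration only (not published values):
# a full battery with SEM ~ 2.5 versus an abbreviated battery with SEM ~ 4.5.
print(ci95(83, 2.5))  # roughly 78 to 88
print(ci95(83, 4.5))  # roughly 74 to 92, like the roomy interval above
```

With the larger SEM, the same observed score of 83 is consistent with anything from borderline to low-average functioning, which is precisely the interpretive problem described above.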

The point to take away from all this is that abbreviated-battery IQ tests, with their larger SEMs, result in “IQ” scores that, at one end of the continuum, are worthless, and, at the other end of the continuum, remain misleading.

Problems with Nonverbal IQ Tests

What about “nonverbal” IQ tests? Like abbreviated- and full-battery IQ tests, not all nonverbal IQ tests are the same. Some are extremely short (e.g., the Test of Nonverbal Intelligence [Brown, Sherbenou, & Johnsen, 2010]; the Leiter scale [Roid, Miller, & Koch, 2013]); others involve receptive language skills (e.g., the Comprehensive Test of Nonverbal Intelligence [Pearson, 2003]); some tap into different domains of nonverbal test content (e.g., the Universal Nonverbal Intelligence Test; McCallum, 2003); and the list goes on. In the final analysis, there is reason to question the value of nonverbal IQ tests, as articulated by Ortiz (2011):

…nonverbal testing is rather a misnomer; it is probably better characterized as language-reduced assessment. This is because no matter the test, its use in any evaluation requires that the examiner and examinee be able to communicate with each other… Nonverbal testing may reduce the language barrier, but it does not eliminate it. (p. 307)

…nonverbal tests are often used to derive a score that will serve as an indicator of an individual’s general intelligence. Such a practice, especially in the context of [Specific Learning Disorder] evaluation, is problematic…the majority of referrals for SLD evaluation are based on problems in language arts, particularly reading. This means that in terms of evaluating the cognitive deficits most likely to [be] responsible for reading difficulties, [an] assessment for SLD would need to include testing for those abilities most related to reading, including auditory processing and crystallized knowledge. (pp. 307-308)

So not only are “nonverbal” IQ tests not exactly nonverbal, but they purposely avoid measuring a domain of mental functioning that is, more often than not, extremely important. Consider a child with a history of speech delays, active dysarthria, a feeble sense about the rules of grammar, and an impoverished vocabulary. Would it not be extremely helpful to chart this—probably barren—neurocognitive territory? Is this not exactly what we want to understand?

To summarize, abbreviated-battery IQ and nonverbal IQ tests are not yielding IQ scores that can realistically be swapped out with full-battery IQ test scores. So, not all IQ test scores are equal, especially if we remain loyal to Wechsler’s aforementioned original purpose of IQ testing: “[to measure the] capacity of the individual to act purposely, to think rationally, and to deal effectively with his environment.” Therefore, if we are going to make statements about “IQ,” the full-battery, full-scale IQ is always going to outperform the abbreviated-battery composite IQ score or the nonverbal IQ composite, in the same way that spending a year as a foreign exchange student in France is better than taking a high school French class.

Problems with Full-Scale IQ Scores

That said, full-battery IQ scores have so many ingredients that relying on an overall, full-battery IQ score can create interpretive problems if we are hoping to isolate the nature of the examinee’s cognitive strengths and weaknesses. So, if isolation of cognitive processes is the only goal, full-scale, full-battery IQ scores are going to be of limited value because they average everything into a single number, an estimate of general intelligence (g). Taking this perspective to the extreme, some neuropsychologists have even asserted that all IQ testing is without any purpose: “In sum, ‘IQ’ as a score is inherently meaningless and not infrequently misleading as well. ‘IQ’—whether concept, score, or catchword—has outlived whatever usefulness it may have once had and should be discarded” (Lezak, Howieson, & Loring, 2004, p. 22). Even if one agrees that full-battery, full-scale IQ scores fail to pinpoint the individual’s exact mental deficits, does this mean that IQ testing should—as some are suggesting—be abandoned entirely?

Our position in this paper is that full-battery IQ tests, if used correctly, remain indispensable neuropsychological tools, particularly when diagnosing ID. Why? For various reasons, including these: general intelligence predicts job performance and overall psychological well-being, which is not true of more pure measures of cognitive functioning (Coalson & Raiford, 2008); IQ testing informs us about an individual’s ability to function in their job (Hunt & Madhyastha, 2012; Kuncel, Ones, & Sackett, 2010; Schmidt, 2014); IQ scores tell us what to expect about a person’s mental health, physical health, and the likelihood that they will engage in healthy behavior (Johnson, Corley, Starr, & Deary, 2011; Wrulich et al., 2014); IQ scores inform us about how an individual will do in academic settings (Deary & Johnson, 2010; Deary, Strand, Smith, & Fernandes, 2007; Johnson, Deary, & Iacono, 2009; Kaufman, Reynolds, Liu, Kaufman, & McGrew, 2012; Nelson, Canivez, & Watkins, 2013); full-scale IQs have been found to be the best predictor of success in psychotherapy (Reynolds, Castillo, & Horton Jr., 2008; Travers, 1977; Luborsky, Auerbach, Chandler, Cohen, & Bachrach, 1971; Lezak, 1995); IQ testing is a necessary part of Atkins-related forensic cases; and so on. Stated otherwise, the full-battery, full-scale IQ score is extremely useful as soon as we set foot outside of our laboratories and offices, with hopes that our findings will generalize to real people in real situations in a way that laypersons can understand. Stated more technically, the predictive and ecological validity of full-battery, full-scale IQ test scores cannot be beat.

IQ and Neuropsychological Testing are More Similar Than Not

We have leaders in the field of neuropsychology (Lezak et al., 2004) condemning the use of IQ tests because the scores are not pure enough measures of the underlying neurobiology, but this rigid perspective implies that neuropsychologists are somehow above using tests with adulterated content validity. In other words, if the test does not have direct neurobiological underpinnings, then it is of no use to the neuropsychologist.

While we see and appreciate the distinction between methods that more directly assay neurobiological machinery (e.g., finger-tapping tests, occipitofrontal circumference, 2D:4D ratios, resting heart rate, list-learning tests, neurological soft-sign batteries) and those that do not (viz., a full-battery IQ score), we also believe it a mistake to interpret this distinction as meaning that neuropsychologists should not be administering IQ tests. Why? Because IQ and neuropsychological tests have much in common with each other, which becomes increasingly evident when we pan out and see our work from a distance.

To illustrate these commonalities, consider basic psychometrics, which encapsulates concepts like the standard deviation, the aforementioned SEM, all the different flavors of reliability and validity, and so forth. When appraising the value of an IQ test, we use psychometric properties. The same is true of neuropsychological tests. Stated plainly, an interchangeable set of scientific concepts—known as psychometrics—is applied to understand, if not define, the value of both IQ and neuropsychological tests, which makes IQ and neuropsychological tests at least close cousins, if not full siblings.

Perhaps not surprisingly, the psychometric properties of IQ and many neuropsychological tests do not diverge. For example, a quick glance at the manuals for the WAIS-IV (2008), the Wechsler Intelligence Scale for Children-5th Edition (WISC-V; 2014), and the Stanford-Binet Intelligence Scales-5th Edition (SB-5; Roid, 2003) makes it evident that these full-battery IQ tests have reliability estimates in the same general ballpark (usually at or above .80). Referencing the manuals for the Dean-Woodcock Sensory Motor Battery (Dean & Woodcock, 2003), the Quick Neurological Screening Test (Mutti, Spalding, & Sterling, 2012), the NEPSY-II, and the NAB, we also encounter reliability estimates in the high (.80-.89) to very high (.90+) range. Put differently, not only are IQ and neuropsychological tests characterized by their psychometric features, but the actual values are much more similar than not.

Why might this be? It turns out that some of the exact same tests used in a full-battery IQ test are also used in neuropsychological batteries. For example, the Wechsler Memory Scale-4th Edition (WMS-IV) includes a test called Symbol Span, which is nearly identical to the Picture Span subtest in the WISC-V. Similarly, the Spatial Span subtest from the third version of the WMS was also available in the fourth edition of the WISC and is still found in the Measurement and Treatment Research to Improve Cognition in Schizophrenia Consensus Cognitive Battery and the SB-5. Likewise, digit span subtests reside within the Wechsler full-battery IQ tests, but also in the NAB Attention Module and the Test of Memory and Learning (Reynolds & Voress, 2007). Other examples of overlap between IQ and neuropsychological tests are easy to find, the take-home message being that part of the reason IQ and neuropsychological tests agree is that identical tests are being used. Often, the only thing distinguishing the tests is the name attached to them.

Neuropsychological and IQ tests are also similar insofar as they are both vulnerable to the same threats to score validity. By this we are talking about anything from examinee dissimulation (e.g., malingering), to the Flynn Effect (i.e., that outdated test norms inflate test scores), to sensory deficits (e.g., poor vision or hearing), to inadequate effort output from the examinee, to medication effects (e.g., scoring better because tested while taking a stimulant), to response expectancy (e.g., scoring worse because of inaccurate beliefs about illness or injury), and the list continues on. In short, how people score on neuropsychological tests is influenced by many of the same factors that bias IQ test scores.
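Of these threats, the Flynn Effect is the most mechanically simple to illustrate. A hedged sketch, assuming the roughly 0.3-points-per-year rate of norm inflation often cited in the Flynn Effect literature (the rate, and all example values, are illustrative assumptions rather than settled policy):

```python
def flynn_adjusted(obtained_score: float, norm_year: int, test_year: int,
                   points_per_year: float = 0.3) -> float:
    """Deflate an obtained IQ score for norm obsolescence.

    Assumes scores inflate by ~points_per_year for every year the norms
    have aged (an assumption drawn from the Flynn Effect literature).
    """
    years_stale = test_year - norm_year
    return obtained_score - points_per_year * years_stale

# Hypothetical case: a score of 75 obtained in 2012 on a test normed in 1997.
print(flynn_adjusted(75, 1997, 2012))  # 75 - 0.3 * 15 = 70.5
```

In an ID evaluation, an adjustment of this size can move a score across the conventional cutoff region, which is why ignoring norm obsolescence risks both false negatives and false positives.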


Professional Blind Spots

To the extent that neuropsychologists exclusively view troubling human behavior (e.g., forgetfulness, truncal ataxia, inattention, apraxia, cognitive inflexibility, dysarthria, impulsive aggression) through an overly conservative, purist neuropsychological lens, we are likely to alienate ourselves from laypersons, including the members of society (viz., judges, educators, parents, legislators) who are most able to apply our valuable perspective to real-world problems in practical, helpful ways.

This gets us to what Pinker (2014) calls “the curse of knowledge”, defined as “difficulty in imagining what it is like for someone else not to know something that we know” (p. 59). Pinker continues in his discussion of the curse of knowledge:

The inability to set aside something that you know but that someone else does not know is such a pervasive affliction of the human mind that psychologists keep discovering related versions of it and giving it new names. There is egocentrism… There’s hindsight bias… There’s false consensus… There’s illusory transparency… [There’s] a lack of theory of mind… (pp. 59-60)

Related to the curse of knowledge is what Pinker calls “professional narcissism,” which occurs when experts “…lose sight of whom they are writing for, and narcissistically describe the obsessions of their guild rather than what the audience really wants to know” (p. 40). Are there important differences between IQ and neuropsychological testing? Of course there are. But when we zoom out and view our work from afar, those differences are eclipsed by the similarities, making the message of this paper and that of the neuropsychological purist much more similar than not. Arguably, we and the neuropsychological purist are dogs from the same kennel.

If neuropsychology narcissistically expends all of its enthusiasm on nuances with limited practical implications, life might pass the field by, meaning that neuropsychology ends up excluded from the legislative and policy-making tête-à-têtes concerning the proper diagnosis of ID. And, probably of much more importance, society does not benefit from the perspective of neuropsychology as it pertains to the diagnosis of ID.

To exemplify, consider the role often played by the educational diagnostician in the diagnosis of ID within the public schools. Do they include tests of performance validity? No. Do they correct for the Flynn Effect? No. Do they control for practice effects? No. Do they use neuropsychological tools to measure constructs at the core of the ID diagnosis? No. Do they often fail to correctly identify students with ID?

The answer to this last question is perhaps best appreciated with respect to a handful of well-known legal cases involving capital murder defendants: Green v. Johnson (1998); Atkins v. Virginia (2002); and Walker v. True (2007). In short, none of the defendants in these cases were identified by the public schools as having ID (then called MR), and yet they were all later found to handily satisfy the diagnostic criteria for MR/ID when put under the microscope by forensic experts, in a context where the evaluating expert’s methods were being scrutinized by attorneys, judges, the community, the press, and others. Put differently, when diagnosing ID became of crucial importance—literally a matter of life or death—the forensic experts in these influential legal cases did not agree with the educational diagnosticians on the ID diagnosis. Why might this be?

According to Gresham (2009), it is not unusual for students with ID to not receive the ID label from the public schools, “particularly for African Americans for whom the issue of overrepresentation in special education programs for the mentally retarded has been an issue since the 1970s” (p. 95). This social/political pressure to not find intellectual deficits in students is further understood with reference to the history of intellectual disabilities in the United States:

Shortly after the ‘declassification’ of persons with borderline intellectual disability, a large swell in those identified with a specific learning disability began to appear…between 1976 and 1992, the number of children identified with learning disabilities in the schools increased by 198%. At the same time, there was a 41% decrease in diagnoses of intellectual disabilities. Since then, the ranks of children and adults with learning disabilities have continued to grow. Today, learning disabilities have become the highest incidence designation for special education, with more than 3 million students nationally. (Ferrari, 2009, pp. 386-387)

Patterns like this raise important questions about whether educational diagnosticians should be diagnosing ID in the first place. Because of their limited training and skill set—not to mention the social/political pressure within public schools to not find problems—one has to question whether it was a good idea to bestow this diagnostic power on educational diagnosticians. As long as there is political pressure for educational diagnosticians to not identify too many students with costly neurodevelopmental disorders, the chances of many educational diagnosticians getting things right are going to remain slim.

Psychiatry is another field interested in the diagnosis of ID, as demonstrated by its publication of the DSM-5. However, there is also reason to be very guarded about the physician’s capacity to accurately identify those with ID, probably more so than is the case with the educational diagnostician, since the latter group has at least been trained to administer IQ tests. Can physicians get close to an accurate ID diagnosis, using only their clinical interview skills? Maybe. But what if they end up on the wrong side of close?

To illustrate the concerns about psychiatrists and other physicians diagnosing ID, consider the early work of Binet and Simon (1907), who set out to invent a diagnostic tool (the IQ test) that would correctly classify children lacking in academic potential. In conducting their research, these IQ-test pioneers discovered that physicians could not be relied upon to correctly classify these children:

We have made a methodological comparison between the admission certificates filled out…by the doctors…[and] We have compared several hundreds of these certificates, and we think we may say without exaggeration that they looked as if they had been drawn by chance out of a sack. (p. 76)

Assembling all the pieces of this puzzle, it is hard to ignore the reality that neuropsychologists are optimally prepared to diagnose ID, more so than educational diagnosticians, psychiatrists, or other physicians. Not appreciating this truth may be the professional blind spot of those other fields. For the field of neuropsychology, the blind spot may be its failure to recognize and embrace its important role in the diagnosis of ID.

Conclusions

Because neuropsychological methods are ideal for characterizing many intellectual and adaptive functioning deficits, it has become nearly impossible to ignore the neuropsychological elephant in the living room when it comes time to explain appreciable deficits in intellectual wherewithal. So while IQ tests were not initially developed to measure dysfunction of the central nervous system (Sbordone, 2008), neuropsychological undertones—if not overtones—are increasingly evident in newer revisions of full-battery intelligence tests (Raiford & Holdnack, 2014). Meanwhile, there is no shortage of neuropsychological tools for better understanding examinees’ intellectual deficits and real-world struggles. We have social cognition tests; tests of the cognitive processes driving academic skill development; tests of independent living; tests of adaptive functioning; tests of cognitive flexibility; tests of attention; memory tests; tests to identify neurological soft-signs; tests of academic skill; tests of performance validity; and the list goes on.

Unfortunately, however, many engaged in the diagnosis of ID do not properly use, or even know about, these neuropsychological methods. Meanwhile, neuropsychologists, the very group of mental health professionals best trained to apply these methods toward a diagnosis of ID, often shun the use of IQ tests, effectively paving the way for others (e.g., educational diagnosticians, garden-variety psychologists, psychiatrists) to influence how and when this powerful diagnosis is made, or not made. It is our position that neuropsychologists, more than any other group of mental health professionals, are optimally prepared to diagnose ID and that the field of neuropsychology, rather than rearranging deck chairs on the Titanic by overemphasizing professional nuances with limited practical relevance, should embrace this important responsibility so that the public may benefit.


References

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Arlington, VA: American Psychiatric Publishing.

Atkins v. Virginia, 536 U.S. 304, 122 S. Ct. 2242, 153 L. Ed. 2d 335 (2002).

Axelrod, B. N. (2002). Validity of the Wechsler abbreviated scale of intelligence and other very short forms of estimating intellectual functioning. Assessment, 9(1), 17-23.

Binet, A., & Simon, T. (1907). Les enfants anormaux [Abnormal children]. Paris: Armand Colin.

Brown, L., Sherbenou, R. J., & Johnsen, S. K. (2010). TONI-4, Test of Nonverbal Intelligence. Austin, TX: Pro-Ed.

Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56(2), 81-105.

Coalson, D. L. & Raiford, S. E. (2008). Wechsler adult intelligence scale-fourth edition (WAIS-IV): Technical and interpretive manual. San Antonio, TX: Pearson.

Cullum, C., Munro, M., Weiner, F., & Saine, F. (2009). Texas Functional Living Scale. San Antonio, TX: PsychCorp.

Dahlstrom, W. G. (1993). Tests: Small samples, large consequences. American Psychologist, 48(4), 393-399.

Dean, R. S. (2003). Dean-Woodcock Sensory Motor Battery. Itasca, IL: Riverside Publishing.

Deary, I. J., Strand, S., Smith, P., & Fernandes, C. (2007). Intelligence and educational achievement. Intelligence, 35(1), 13-21.

Deary, I. J., Penke, L., & Johnson, W. (2010). The neuroscience of human intelligence differences. Nature Reviews Neuroscience, 11(3), 201-211.

Ferrari, M. (2009). Borderline intellectual functioning and the intellectual disability construct. Intellectual and Developmental Disabilities, 47(5), 386-389.

Gartstein, M. A., Bridgett, D. J., Dishion, T. J., & Kaufman, N. K. (2009). Depressed mood and maternal report of child behavior problems: Another look at the depression–distortion hypothesis. Journal of Applied Developmental Psychology, 30(2), 149-160.

Green v. Johnson, 160 F.3d 1029 (5th Cir. 1998).

Gresham, F. M. (2009). Interpretation of intelligence test scores in Atkins cases: Conceptual and psychometric issues. Applied Neuropsychology, 16(2), 91-97.

Holdnack, J. A., & Drozdick, L. W. (2009). Advanced clinical solutions for WAIS-IV and WMS-IV: Clinical and interpretive manual. San Antonio, TX: Pearson.

Hunt, E., & Madhyastha, T. M. (2012). Cognitive demands of the workplace. Journal of Neuroscience, Psychology, and Economics, 5(1), 18-37.

Johnson, W., Deary, I. J., & Iacono, W. G. (2009). Genetic and environmental transactions underlying educational attainment. Intelligence, 37(5), 466-478.

Johnson, W., Corley, J., Starr, J. M., & Deary, I. J. (2011). Psychological and physical health at age 70 in the Lothian Birth Cohort 1936: links with early life IQ, SES, and current cognitive function and neighborhood environment. Health Psychology, 30(1), 1-11.

Kaufman, S. B., Reynolds, M. R., Liu, X., Kaufman, A. S., & McGrew, K. S. (2012). Are cognitive g and academic achievement g one and the same g? An exploration on the Woodcock–Johnson and Kaufman tests. Intelligence, 40(2), 123-138.

Korkman, M., Kirk, U., & Kemp, S. (2007). NEPSY-II: Clinical and interpretive manual. San Antonio, TX: The Psychological Corporation.

Kuncel, N. R., Ones, D. S., & Sackett, P. R. (2010). Individual differences as predictors of work, educational, and broad life outcomes. Personality and Individual Differences, 49, 331-336.

Lezak, M.D. (1995). Neuropsychological assessment (3rd ed.). New York: Oxford University Press.

Lezak, M. D., Howieson, D. B., Loring, D. W., Hannay, H. J., & Fischer, J. S. (2004). Neuropsychological assessment (4th ed.). New York, NY: Oxford University Press.

Linkenhoker, D., & McCarron, L. (1993). Adaptive behavior: The street survival skills questionnaire. Dallas, TX: McCarron-Dial Systems.

Loeb, P. A. (2003). Independent living scales. Psychological Assessment Resources, Incorporated.

Luborsky, L., Auerbach, A. H., Chandler, M., Cohen, J., & Bachrach, H. (1971). Factors influencing the outcome of psychotherapy: A review of quantitative research. Psychological Bulletin, 75, 145-185.

McCallum, R. S. (2003). The universal nonverbal intelligence test. In Handbook of nonverbal assessment (pp. 87-111). New York: Springer US.

Mutti, M. C., Spalding, N. V., & Sterling, H. M. (2012). QNST-3: Quick Neurological Screening Test. Novato, CA: Academic Therapy Publications.

Nelson, J. M., Canivez, G. L., & Watkins, M. W. (2013). Structural and incremental validity of the Wechsler Adult Intelligence Scale–Fourth Edition with a clinical sample. Psychological Assessment, 25(2), 618-630.

Ortiz, S. O. (2009). Separating cultural and linguistic differences (CLD) from specific learning disability (SLD) in the evaluation of diverse students. In D. P. Flanagan & V. C. Alfonso (Eds.), Essentials of specific learning disability identification (pp. 299-325). Hoboken, NJ: Wiley.

Pearson, N. (2003). Comprehensive Test of Nonverbal Intelligence. In Handbook of nonverbal assessment (pp. 141-161). New York: Springer US.

Pinker, S. (2014). The Sense of Style: The Thinking Person’s Guide to Writing in the 21st Century. New York, NY: Penguin.

Raiford, S.E. & Holdnack, J.A. (2014). Wechsler intelligence scale for children-fifth edition (WISC-V): Technical and interpretive manual. Bloomington, MN: Pearson.

Reynolds, C. R., Castillo, C. L., & Horton Jr., A. M. (2008). Neuropsychology and intelligence: An overview. In Horton, A. M. & Wedding, D. (Eds.), The Neuropsychology Handbook (3rd ed.; pp. 69-86). New York: Springer.

Roid, G. H., Miller, L. J., & Koch, C. (2013). Leiter International Performance Scale. Wood Dale, IL: Stoelting.

Sbordone, R. J. (2008). Ecological validity of neuropsychological testing: critical issues. In Horton, A. M. & Wedding, D. (Eds.), The Neuropsychology Handbook (3rd ed.; pp. 367-394). New York: Springer.

Schmidt, F. L. (2014). A general theoretical integrative model of individual differences in interests, abilities, personality traits, and academic and occupational achievement: A commentary on four recent articles. Perspectives on Psychological Science, 9, 211-218.

Strauss, E., Sherman, E. M., & Spreen, O. (2006). A compendium of neuropsychological tests: Administration, norms, and commentary. New York, NY: Oxford University Press.

Travers, M. W. (1977). Essentials of Learning. (4th ed.). New York, NY: MacMillan.

Reynolds, C. R., & Voress, J. (2007). Test of Memory and Learning (TOMAL-2). Austin, TX: Pro-Ed.

Walker v. True, 399 F.3d 327 (4th Cir. 2005).

Wechsler, D. (1944). The measurement of adult intelligence. Baltimore, MD: Williams & Wilkins.

Wechsler, D., Coalson, D. L., & Raiford, S. E. (2008). Wechsler adult intelligence scale-fourth edition (WAIS-IV). San Antonio, TX: Pearson.

Wechsler, D., Raiford, S. E., & Holdnack, J. A. (2014). Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V) manual. San Antonio, TX: Pearson.

White, T., & Stern, R. A. (2003). NAB, neuropsychological assessment battery: psychometric and technical manual. Lutz, FL: Psychological Assessment Resources.

Wrulich, M., Brunner, M., Stadler, G., Schalke, D., Keller, U., & Martin, R. (2014). Forty years on: Childhood intelligence predicts health in middle adulthood. Health Psychology, 33(3), 292-296.
