See the Error, See the Individual — and See the Intervention

January 30, 2017

One size does not fit all in dealing with educational assessment.
The following article synthesizes scholarship in the latest issue of the Journal of Psychoeducational Assessment. The special issue was guest edited by Kristina C. Breaux of Pearson Clinical Assessment; Melissa A. Bray and Melissa M. Root, both at the University of Connecticut; and Alan S. Kaufman of Yale University.

***

Too often an assessment of academic achievement focuses only on standard scores and eligibility determinations; however, a standard score does not provide information about level of mastery for specific skills nor does it reflect the specific processes that mediated performance at the item level. Students with learning disabilities and those who do not benefit as expected from evidence-based instruction need comprehensive evaluations that provide data to inform a change in instruction that will better meet each student’s learning needs. Drilling down to the item level via error analysis is critical to this process. Unfortunately, no research has been conducted to date on the error analysis data obtained from diagnostic achievement tests.

The primary focus of this special issue of Journal of Psychoeducational Assessment is to understand what students’ errors on standardized tests of academic achievement tell us about teaching and learning, and to use this knowledge to inform the assessment process and development of educational interventions. The goal is to provide a first step in remedying that gap in the literature, based on data obtained during the standardization and validation of the Kaufman Test of Educational Achievement—Third Edition, known as KTEA-3.

There has been a movement in school psychology and special education to focus on Response to Intervention, or RTI, which usually means eliminating cognitive tests from the diagnostic process.  Many who favor RTI methodology disdain identifying students’ cognitive strengths and weaknesses, believing that an individual’s particular pattern has little to do with selecting appropriate interventions.  The contributors to this special issue believe the opposite—that the understanding of a person’s strong and weak areas of functioning will help psychologists and educators identify the best interventions for each person and not rely on the notion that one size fits all. In this special issue, the authors of the diverse empirical studies focus on patterns of errors on tests of academic achievement to help clarify the link to intervention.

The exploratory factor analysis of errors conducted by Rebecca O’Brien et al. revealed the error factor structure on which most other articles in this special issue were based. The resulting factor structure has implications for understanding the relationships between students’ errors in word recognition, decoding, spelling, and math. For example, the composition of error factors differed for the basic reading and spelling subtests — even though they shared most of the same error categories. Hence, students with a skill weakness in a particular area may make different patterns of errors in word reading, pseudoword decoding, and spelling.

Similarly, the article by Ryan C. Hatcher and colleagues revealed a trend of dissimilar errors across parallel subtests (listening-reading and speaking-writing), suggesting that errors in one language system may not necessarily translate into more errors on a similar task that originates in a different modality. These results highlight the separability of the listening, speaking, reading and writing language systems. Errors must be analyzed within and across parallel language systems to determine the extent of a student’s skill weaknesses. In addition, the instructional approach must respond to each language system’s unique developmental and skill trajectory.

Some of the articles in this issue examined whether students with specific cognitive profiles associated with learning problems differ in the kinds of errors they make on tests of reading, writing, math, spelling, and oral language. For example, Taylor Koriakin, Erica White, and colleagues found that students with high crystallized ability (knowledge) but low processing speed and memory were stronger on basic and complex math problem solving than students with the opposite cognitive patterns. Additionally, Xiaochen Liu and colleagues found that the high knowledge—low speed/memory group outperformed other cognitive profile groups on almost all error factors in the areas of phonological processing, word reading, decoding, reading comprehension, and written expression. These findings reinforce the notion that cognitive patterns of strengths and weaknesses have clear-cut implications for targeted intervention in specific skill areas.

A subset of articles in this issue investigated whether special populations differ in their patterns of errors on achievement tests. Matthew S. Pagirsky and colleagues, for example, found that children with ADHD and comorbid reading difficulties made more errors across reading, math, and writing subtests than those with ADHD alone or a control group, especially on error factors requiring phoneme awareness, language skills, and inhibition. In addition, Maria Avitia, Emily DeBiase and colleagues found more similarities than differences between the error patterns of students with reading versus math disorders. With a focus on students with giftedness and learning disabilities, Karen L. Ottone-Cross and colleagues found that gifted students with learning disabilities (gifted LD) scored similarly to the gifted group on many higher-level skills, whereas they scored similarly to students with learning disabilities (SLD) on several lower-level skills. For example, the gifted LD sample demonstrated similar strengths to the gifted sample on the Written Expression and complex Math Concepts & Applications skills, which have a higher-level processing demand. However, the gifted LD sample was statistically different from the SLD group on all subtests and factors – with the exception of two tasks with lower-level processing demands: decoding nonsense words with silent letters and computing addition math problems.

All of the articles with clinical groups demonstrate the importance of looking beyond diagnostic categories when planning instruction, focusing instead on the types of errors students make and their specific instructional needs.

The research described within this special issue is the first of its kind to look within individual profiles and consider the kinds of errors students make within and across academic domains to reveal valuable insights for improving assessment and instruction. These articles stem from a belief that assessment must lead to instruction, and error analysis is at the heart of understanding how students learn and need to be taught.

The key feature of the special issue tying all the separate articles together is the set of invited commentaries. We asked luminaries in the field to integrate the array of articles and to apply the results of the studies to diagnosis and treatment. They did not disappoint. These leaders in school psychology, neuropsychology, and special education helped to integrate the diverse studies and make their findings come alive. Nancy Mather and Barbara Wendling explain how a careful analysis of errors is key to planning the most appropriate instructional interventions. They noted that, “A careful error analysis goes beyond the quantitative results, and focuses upon the qualitative information that may be derived from reviewing an individual’s performance on the specific items of the various subtests.”

Reflecting on the past, present, and future applications of error analysis, George McCloskey explains how to use process error analysis to shed light on the most frequently overlooked mental capacity—executive functions. Alyssa Montgomery, Ron Dumont, and John Willis reinforce the importance of examiners looking beyond standard scores when analyzing results and considering the evidential validity as well as the predictive and practical utility of the research findings. Dawn Flanagan, Jennifer Mascolo, and Vincent Alfonso explain the utility of KTEA-3 error analysis for the diagnosis of specific learning disabilities based, in part, on a consideration of the findings from various studies in this special issue. To demonstrate that process, Flanagan, Mascolo, and Alfonso discuss error analysis within the context of a pattern of strengths and weaknesses approach and provide an illustrative case example.


Kristina C. Breaux is a senior research director with Pearson Clinical Assessment, where she led the development of the WIAT®-III and worked in close collaboration with Alan and Nadeen Kaufman to develop the KTEA™-3 and KTEA™-3 Brief. She is coauthor of Essentials of WIAT®-III and KTEA™-II Assessment.
