Differences among subgroups in concurrent and predictive validity of timed oral reading fluency measures and informal reading inventories on performance on computer adaptive assessments of reading
|Institution:||Northern Illinois University|
|Keywords:||Educational psychology; Reading instruction|
|Full text PDF:||http://pqdtopen.proquest.com/#viewpdf?dispub=10123576|
Recent legislation such as No Child Left Behind and the Performance Evaluation Reform Act (PERA) places increasing pressure on teachers and schools to demonstrate accountability for instructional time through improved test scores. As a result, students complete an increasing variety of assessments each school year intended to measure academic growth over time. These assessments frequently take the form of curriculum-based measures (CBMs) of oral reading fluency, informal reading inventories (IRIs), computer adaptive tests (CATs) of reading, and annual high-stakes assessments of reading ability. While a review of the literature reveals a well-established relationship among CBMs, IRIs, and high-stakes assessments, less empirical evidence exists concerning the relationships among, and the technical adequacy of, CBMs, IRIs, and CATs of reading ability. Specifically, the current study examined the concurrent and predictive validity of CBMs and IRIs with respect to computer adaptive assessments among students in second through fifth grade. Further, given the increasing diversity of student populations, the current study sought to examine the extent to which a student's status as an English Learner or as a student with an Individualized Education Plan (IEP) moderates the relationships between oral reading fluency measures and CATs and between IRIs and CATs.