Volume 26, Number 6
November/December 2010

The Promise of New State Tests

Two consortia plan better tests, but will they lead to better instruction?

All but six states have signed on to plans for big changes in student tests.

Beginning in 2014, students in nearly every state will take assessments on computers that will measure their ability to solve complex problems in reading and mathematics. The results will indicate whether they are on track for college and career readiness and will be compared across state lines. And teachers will have access to a wide range of tools to help them prepare students to meet challenging standards.

That is the vision of two consortia of states that last summer won a total of $330 million from the U.S. Department of Education to begin developing new assessment systems to replace existing state tests. At this point, 44 states (and the District of Columbia) have signed on to one or both of the consortia; each must make a final choice of one, or neither, by the time the assessments are pilot-tested in the 2013-14 school year.

However, in order to realize their promise, the two consortia must work through some difficult technical and educational issues. For instance, they must determine how to design a sequence of assessment components to be administered throughout the course of the year as well as how to implement automated scoring of assessments, says Scott Marion, the associate director of the National Center for the Improvement of Educational Assessment.

“There is the promise of things to be different; they are not different yet,” he says. “If people don’t think [the new features] are ready for prime time in four years, that will limit what states will be willing to do. States will revert back to what they know. The assessment system will look like a typical assessment system, except administered by computer.”

Nevertheless, he and others remain at least cautiously optimistic that the problems can be worked out and that the new assessments will encourage educators to teach higher levels of knowledge and skills. “This is a moment of great opportunity for the country and for children,” says Joan L. Herman, the director of the National Center for Research on Evaluation, Standards, and Student Testing at the University of California, Los Angeles.

The grants for the assessment systems were part of the federal Race to the Top program. The Department of Education launched the competition in April, calling for consortia of states to develop assessments that would measure the Common Core State Standards in English language arts and mathematics, which some 40 states and the District of Columbia have adopted. Two consortia submitted bids, and both won awards. The Smarter Balanced Assessment Consortium (SBAC), led by Washington State, consists of 30 states; the Partnership for Assessment of Readiness for College and Careers (PARCC), led by Florida, consists of 26 states. (The total adds up to more than 50 because, at this point, states can join both consortia without committing to administer either test. Several states, including Alaska, Texas, and Virginia, are part of neither.)

Although there are some differences between the two consortia’s plans, they share many features that represent significant departures from current practice. For example, both groups plan to administer assessments primarily on computers, and both would make extensive use of open-ended items rather than rely almost exclusively on multiple-choice questions, as many current tests do. In addition, both groups plan to develop materials for teachers, such as curriculum maps that show how the material on the assessments can be taught over the course of the year, and items that can be used formatively in classrooms.

One of the most innovative ideas in both plans is the proposal to administer some assessment tasks during the school year, in addition to an end-of-year assessment. The PARCC proposal calls for three interim tasks, given at three-month intervals, that are intended to measure topics closer to when students actually study them and to provide feedback to students and teachers during the year. The SBAC plan calls for a single extended project administered near the end of the year, with optional interim assessments.

By administering tasks during the year, the consortia could engage students in higher-level problem solving, such as writing extended research papers or conducting detailed science experiments, says Herman. “This gets us beyond thinking of assessment as a single annual test,” she says.

But developing such measures poses substantial challenges, not the least of which is what to include on the interim tasks, notes Marion. The consortia’s assessment developers, who include representatives of the participating states, must decide whether these mid-year tasks will measure some of the standards now included on end-of-year tests or whether they will measure mid-term mastery of the standards that will be assessed again at the end of the year, he says.

“If a particular standard [in mathematics, for example] is assessed by a performance task in October, do you go back and assess it in February or May?” Marion asks. “Or do you say, ‘I’ve assessed that standard, and now I’ll focus on other standards.’ That implies that you don’t think kids will develop their performance on that standard from October to May. That seems silly. Why are kids in school?”

Another dramatic change from current practice is that the assessments will be common across states. The states within each consortium will administer the same assessments, and the two consortia have agreed to work toward producing results that can be compared across consortia.

That means that the expectations for student performance will be common across states, something that is not the case today, says Herman. “You can move from one state to the next and have consistent standards,” she says. “That levels the playing field.”

The results can also drive improvements by showing, on a common metric, how schools and districts perform relative to their counterparts in other states, suggests Edward Roeber, a professor of education at Michigan State University and a former state testing director. “Normative data can be useful,” he says. “Now, high-performing districts can sit on their laurels. [But] if you compare them to other high-performing districts in other states, they may not look so good.”

However, Roeber cautions that the potential power of the new assessments could be squandered unless teachers have the knowledge and skills to use assessment data effectively and to teach in ways that lead to higher performance. Although the consortia plan to develop instructional tools, teachers will also need considerable professional development to learn how to teach higher-level skills and knowledge, something few of them currently do, he says.

Without this kind of support, he says, “We’ll come up with all sorts of comparative data, none of which will help kids learn. We’ll have a niftier, more expensive state testing program.”

Robert Rothman is a senior fellow at the Alliance for Excellent Education and a frequent contributor to the Harvard Education Letter.