Volume 27, Number 2
March/April 2011

Seven Misconceptions About Value-Added Measures

 

In August 2010, the Los Angeles Times took the controversial—and unprecedented—step of calculating value-added scores for thousands of Los Angeles Unified School District teachers and publishing individual teachers’ scores along with their names. More recently, the United Federation of Teachers has sued (so far unsuccessfully) to prevent the New York City Department of Education from releasing data that would allow media to publish the same information on New York City teachers. These efforts to publicize individual value-added scores illustrate exactly why educators distrust the ability of policy makers to design appropriate accountability policies and of the media to accurately portray school performance.

Professional athletes, realtors, and workers in a handful of other occupations have their individual performance measures made publicly available. But for teachers, as for people in most kinds of jobs, disclosing this information doesn’t make much sense. Why not?

Most obviously, these measures capture only teachers’ contributions to students’ standardized test scores, which are at best loosely related to other important student outcomes, like creativity and engagement. Value-added measures also contain considerable error. Finally, value-added measures grade teachers on a bell curve, so no matter how good the entire pool of teachers is, half—by definition—will always be below average. Thus, half the parents looking at the scores of their children’s teachers are bound to be disappointed. Publicizing this information will do more to wreak havoc and raise complaints than to help students.

Confusion and Mistrust
Over the past several years, I have been invited to give dozens of presentations on value-added measures, including to educators in Los Angeles in the immediate aftermath of the unfortunate Los Angeles Times website release. In listening to educators, I have discovered several persistent misconceptions about value-added measures in education, some raised mainly by opponents of value-added and others by supporters. Almost all of them have an element of truth but are also tinged with misunderstanding. As the temperature has risen in the debate over value-added and accountability, so have the confusion and mistrust. Reasoned discussion and productive solutions require that we begin by clarifying some of these misconceptions.

Misconception 1: We cannot evaluate educators based on value-added because teaching is complicated. Teaching is certainly complicated, but this is as much an argument in favor of value-added as it is against. There are many ways to be a good teacher. Measures like value-added that focus on student outcomes—as long as they are used in well-designed accountability systems—can allow each teacher to be effective in reaching accountability goals in his or her own way.

Misconception 2: Value-added scores are inaccurate because they are based on poorly designed tests. Most standardized tests are indeed flawed, but this is not a problem created or worsened by value-added. What is needed are more sophisticated tests that capture richer content, such as those used in the International Baccalaureate program. With their open-ended, constructed-response questions, these types of tests may make some kinds of statistical calculations more difficult, but since what gets tested gets taught, this is a sacrifice easily worth making. We can still use value-added methods with these richer assessments.

Misconception 3: The value-added approach is not fair to students. Despite the arbitrary nature of proficiency standards, some still see them as an important bar that all students should reach. Viewed this way, value-added might seem unfair to low-performing students because it shifts the focus of the accountability system away from proficiency.

While I am concerned about meeting the needs of the lowest-performing students, it is not clear that current law really helps these learners. Proficiency-focused systems place their attention on students nearest the standard, the “bubble kids.” In states where the bar is set very low, the bubble kids are also arguably the lowest performers, but this is not the case in states with higher proficiency bars. In those states, students who are the farthest behind may be the least likely to get attention.

Value-added could maintain, or even enhance, the attention given to the achievement of low-performing students. For instance, value-added measures could be weighted so that the achievement of the lowest performers counts more than that of other students, as the sketch below illustrates. In addition, performance measures could combine snapshots of student outcomes with assessments of student growth.
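To make the weighting idea concrete, here is a minimal sketch in Python. The prediction equation, the percentile cutoff, and the double weight are illustrative assumptions of mine, not features of any actual district’s value-added model.

```python
# Hypothetical weighted value-added: each student's gain is the
# difference between the actual score and a model's prediction, and
# students who started below the 25th percentile count double.

def predicted_score(prior_score: float) -> float:
    """Predict this year's score from last year's. A real model would
    also adjust for demographics, peers, and measurement error."""
    return 10.0 + 0.95 * prior_score  # assumed coefficients


def weighted_value_added(students: list) -> float:
    """Weighted average of (actual - predicted) gains."""
    total = 0.0
    weight_sum = 0.0
    for s in students:
        gain = s["score"] - predicted_score(s["prior_score"])
        weight = 2.0 if s["prior_percentile"] < 25 else 1.0  # assumed weights
        total += weight * gain
        weight_sum += weight
    return total / weight_sum


students = [
    {"prior_score": 300.0, "score": 310.0, "prior_percentile": 12},
    {"prior_score": 400.0, "score": 395.0, "prior_percentile": 55},
    {"prior_score": 450.0, "score": 448.0, "prior_percentile": 80},
]
print(weighted_value_added(students))  # low performer's gain counts twice
```

How heavily to weight the lowest performers is a pure policy choice; the statistics accommodate whatever priority a system places on them.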

Misconception 4: Value-added measures are not useful because they are summative rather than formative. Value-added measures provide summative assessments of teacher performance—they indicate whether teachers are high performers in terms of one important student outcome. But value-added is often criticized for not providing information about how educators can improve. This is a legitimate point, but no single measure can fulfill both formative and summative functions very well. For this reason, any use of value-added, especially for individual teachers, should be coupled with observational information from school principals or peer assessors that includes specific information about areas of weakness.

Formative and summative evaluations are also complementary. Having a formative evaluation with no summative evaluation means there is a path to improvement but no incentive to follow that path. A summative evaluation without a formative one provides an incentive but no path. We need both.

Misconception 5: Value-added represents another step in the process of “industrializing” education, making it more traditional and less progressive. It is easy to see how someone might get this impression. The term comes from industrial manufacturing, where “value-added” is understood to be the difference between the value of inputs and the value of outputs.
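In education, the analogous calculation is, roughly, the difference between what students actually score and what a statistical model predicts they would score given their prior achievement. In simplified notation (mine, not a formula from this article):

$$\mathrm{VA}_j = \frac{1}{n_j} \sum_{i \in j} \left( y_i - \hat{y}_i \right)$$

where $y_i$ is student $i$’s end-of-year score (the output), $\hat{y}_i$ is the score predicted from prior test results (the inputs), and the average runs over the $n_j$ students in classroom or school $j$.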

But to conclude that value-added industrializes schooling is misleading. First, the hallmark of industrialization is not so much the standardization of results as the standardization of processes. In education, however, if policy makers concentrate on results, they can reduce the rules that constrain educational practice. In this sense, expanded use of value-added could actually reduce the industrialization of education.

Whether education is traditional or progressive depends more on the design of standards and assessments than on the use of value-added rather than snapshot measures. Progressive education is closely tied to inquiry-based learning, and nothing prevents tests of that kind of learning from being used to create value-added measures.

Misconception 6: Because we know so little about the effects of value-added, we cannot risk our kids’ futures by experimenting with it. No one wants to put children at risk. But people differ in how much risk they are willing to accept, depending on how many problems they see in the existing school system. In a genuine crisis, almost any change has good odds of making things better, which lessens the risk of experimenting. I think the evidence is fairly persuasive that while our performance and accountability system might not be in crisis, it is at least seriously flawed.

The larger problem with this risk argument is that, if taken to its logical conclusion, it would prevent all changes whatsoever. If we cannot experiment, then we cannot discover productive new approaches.

Misconception 7: Value-added is a magic bullet that will transform education all by itself. While the enthusiasm for value-added is understandable and to some degree well-founded, there is no direct evidence that these measures alone will improve teaching and learning. As the history of No Child Left Behind shows, school-level, test-based accountability can certainly change teaching, but not always for the better and not necessarily in ways that positively influence learning. Also, past efforts to hold individual teachers accountable, especially through merit pay, have been short-lived.

There are also reasons to be skeptical of value-added measures—the imprecision created by random error and the problems with the design of student tests, to name two. A failure to recognize these limitations could easily lead to what I call the “air bag problem”—the tendency for innovation to lead to overconfidence, just as the invention of air bags may have encouraged drivers to go faster than is safe. The Los Angeles Times’s publication of individual teachers’ value-added scores was clearly a case of driving the measures too fast. This is hardly a policy that will lead to improvement.

But the magic bullet argument is also a bit of a straw man. Nothing is a magic bullet. The real question is, can value-added approaches improve education? I think the answer is yes, as long as the policies are well designed and carefully implemented.

A Productive Middle Ground
I believe we can find a more productive middle ground, one that uses value-added measures as one part of a system of performance measures and accountability that improves not only test scores but teaching and learning.

Let me explicitly acknowledge two of my opinions, both of which I believe are widely supported and backed up by evidence. First, I think we can improve the way we hold educators accountable. Moreover, I think it is possible to do so in a way that most key stakeholders—including educators—would approve of. There is simply a great deal of self-inflicted mistrust on all sides of today’s education and accountability debates that has made improvement difficult to accomplish.

Second, while there has been little careful experimentation to date, I believe that value-added measures of performance can ultimately improve teaching and learning—not just increase test scores but improve the practice of teaching and encourage the genuine learning that the vast majority of parents, students, policy makers, and educators want to see. This does not mean I advocate, say, using teacher value-added measures for merit pay. In reality, I think the jury is still out on that particular idea. But accountability isn’t just about compensation, nor is it necessarily about a focus on individual teachers. I have long advocated accountability based on school value-added measures. There are many reasons to think that replacing school-level snapshots with school value-added measures would produce noticeably better results.

It is increasingly likely that value-added measures will soon be required by the federal government for evaluating whole schools. If these changes are to have any positive impact, the policy makers in charge of designing accountability need to understand the tool they are working with, and educators, as the subjects of that accountability, need to understand the meaning of the measures intended to capture their performance and guide their careers. Nobody will respond well to performance-based accountability if it is neither trusted nor understood. My hope is that a better understanding of value-added will be a first step toward better design of educational accountability systems.

Douglas N. Harris is associate professor of educational policy and public affairs at the University of Wisconsin–Madison. This article is adapted from Value-Added Measures in Education: What Every Educator Needs to Know (Harvard Education Press, 2011).