Friday, March 4, 2011

Just the facts: value-added measures

“Value-added” measures

“Value-added” measurement compares a student’s actual test score growth over time with the growth projected for that student, after attempting to adjust for poverty and other factors known to affect achievement. Look at all of a teacher’s students’ scores, the theory goes, and you’ll see how effective that teacher is.
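To make the idea concrete, here is a minimal, purely illustrative sketch of the logic in Python. The data, variable names, and five-teacher setup are invented for demonstration; real systems such as Tennessee’s use far more elaborate statistical models, but the basic move is the same: predict each student’s score from prior achievement and background factors, then credit or blame the teacher for the average gap between actual and predicted scores.

```python
# Illustrative sketch only: a bare-bones "value-added" style calculation.
# Every number and variable name here is made up for demonstration.
import numpy as np

rng = np.random.default_rng(0)

n_students = 200
prior_score = rng.normal(50, 10, n_students)   # last year's test score
low_income = rng.integers(0, 2, n_students)    # 1 = low-income student
teacher = rng.integers(0, 5, n_students)       # which of 5 teachers

# Simulated current-year scores, driven by prior score, poverty, and noise.
# Note: no actual "teacher effect" is built into this toy data at all.
score = 5 + 0.9 * prior_score - 3 * low_income + rng.normal(0, 8, n_students)

# Step 1: predict each student's score from prior score and poverty
# with ordinary least squares.
X = np.column_stack([np.ones(n_students), prior_score, low_income])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
predicted = X @ coef

# Step 2: a teacher's "value-added" is the average amount by which that
# teacher's students beat (or miss) their predicted scores.
residual = score - predicted
for t in range(5):
    print(f"Teacher {t}: value-added = {residual[teacher == t].mean():+.2f}")
```

Even in this toy example, where no teacher is actually better or worse than any other, the five averages come out different simply because of noise, which hints at why the real scores bounce around from year to year.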

But life is more complicated than a value-added algorithm. In real life, some teachers get students who are harder to reach for all sorts of reasons. They may have an extra share of problems with language, motivation, disabilities, or classroom discipline. And each year, the students change. So let’s ask the research. Can you tell a star teacher from an ineffective one by looking at their value-added scores?

In a word, no.

For one thing, value-added scores swing wildly from year to year. If your score puts you near the bottom this year, chances are you’ll be a lot higher next year. This year’s value-added score predicts next year’s score only moderately better than a roll of the dice.

The Gates Foundation is a major backer of using value-added scores to evaluate teachers, but an independent analysis of data from Gates-funded research casts doubt on the validity of those scores. That research found that 40 percent of teachers who landed in the bottom quartile based on their students’ state test scores placed in the top half when a different test was used.

Another inconvenient finding: Although value-added scores are supposed to adjust for factors like poverty, they apparently don’t. One study found the same teachers got better value-added scores when they taught more academically advanced students, fewer English-language learners, and fewer low-income students.

So what are value-added scores good for? Sadly, they became a prime teacher-bashing weapon last summer when the Los Angeles Times published teachers’ names and their value-added scores as calculated by the newspaper.

There was less media buzz about the parade of eminent test experts who warned that these scores don’t come close to describing a teacher’s effectiveness.

Ten of the most prominent leaders of the scientific community reviewed all the evidence and concluded that nobody should make important decisions on the basis of value-added scores because they “do not adequately take into account the extra challenges of teaching at-risk students, even though they are intended to do that.” The experts specifically criticized plans by some states to give value-added scores up to 50 percent weight in evaluating teachers. Relying so heavily on these scores, they said, “could create disincentives for teachers to take on the neediest students.”

That’s no way to close achievement gaps.

In human terms

Tennessee is the birthplace of “value-added” scores. The system has been in use there since 1993. How is it working?

After years of excellent value-added scores, middle school math teacher Angie Jordan got the bad news last fall that her scores were in the lowest category. She had a lot of company: Value-added scores slumped all across the state. But that wasn’t much consolation. “I was in tears,” she says. “I was thinking, ‘Why did I work so hard? I couldn’t have done worse if I had just shown videos.’”

Why did her scores take a dive? New curriculum? New standards? A glitch in the scoring? She can’t find out because both the test and the value-added calculations are secret.

http://www.nea.org/home/42390.htm
