Thursday, January 13, 2011

New analysis challenges Gates study

From the Washington Post's Answer Sheet

By Valerie Strauss

Last month, a Gates Foundation study was released and held up as evidence of the validity of “value-added” measures, which evaluate teacher effectiveness using students’ standardized test scores. But a new analysis of that report concludes that its substance doesn’t support its conclusions.

The report, “Learning About Teaching: Initial Findings from the Measures of Effective Teaching Project,” was written by Bill & Melinda Gates Foundation officials Thomas J. Kane and Steven Cantrell.

They used data from six major urban school districts to examine correlations between student survey responses and value-added scores computed both from state tests and from higher-order tests of conceptual understanding. Kane and Cantrell concluded that the evidence suggests value-added measures can be constructed to be valid; others described the report as strong evidence in support of this approach.

But Jesse Rothstein, an economics professor at the University of California at Berkeley, reviewed the Kane-Cantrell report and said that the analyses in it served to “undermine rather than validate” value-added-based measures of teacher evaluation.

The review by Rothstein, who in 2009-10 served as senior economist for the Council of Economic Advisers and as chief economist at the U.S. Department of Labor, is being published today by the National Education Policy Center, housed at the University of Colorado at Boulder School of Education.

The MET report uses data from six major urban school districts to, among other things, compare two different value-added scores for teachers: one computed from official state tests, and another from a test designed to measure higher-order, conceptual understanding. Because neither test maps perfectly to the curriculum, substantially divergent results from the two would suggest that neither is likely capturing a teacher’s true effectiveness across the whole intended curriculum.

By contrast, if value-added scores from the two tests line up closely with each other, that would increase our confidence that a third test, aligned with the full curriculum teachers are meant to cover, would also yield similar results.

The MET report considered this exact issue and concluded that “Teachers with high value-added on state tests tend to promote deeper conceptual understanding as well.” But what does “tend to” really mean?

Rothstein’s reanalysis of the MET report’s results found that over 40 percent of the teachers whose state exam scores place them in the bottom quarter of effectiveness are in the top half on the alternative assessment.

“In other words,” he said in a statement, “teacher evaluations based on observed state test outcomes are only slightly better than coin tosses at identifying teachers whose students perform unusually well or badly on assessments of conceptual understanding. This result, underplayed in the MET report, reinforces a number of serious concerns that have been raised about the use of VAMs for teacher evaluations.”

Put another way, “many teachers whose value-added for one test is low are in fact quite effective when judged by the other,” indicating “that a teacher’s value-added for state tests does a poor job of identifying teachers who are effective in a broader sense,” Rothstein wrote.

“A teacher who focuses on important, demanding skills and knowledge that are not tested may be misidentified as ineffective, while a fairly weak teacher who narrows her focus to the state test may be erroneously praised as effective.”

If those value-added results were to be used for teacher retention decisions, students would be deprived of some of their most effective teachers, Rothstein concluded.
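To make the coin-toss comparison concrete, here is a minimal simulation sketch. It is not from the MET report or Rothstein’s review; the correlation value is an assumption chosen to roughly reproduce the crossover rate quoted above. If the two value-added measures were statistically independent, 50 percent of teachers in the bottom quarter on the state test would land in the top half on the conceptual test; even a modest positive correlation only pulls that figure down to about 40 percent.

```python
# Illustrative sketch only: simulates two noisy value-added scores per teacher,
# drawn from a bivariate normal. The correlation rho = 0.2 is an assumed value
# chosen to roughly match the ~40% crossover rate described in the article.
import numpy as np

rng = np.random.default_rng(0)
n_teachers = 100_000
rho = 0.2  # assumed correlation between state-test VA and conceptual-test VA

# Paired (state-test VA, conceptual-test VA) scores, each standard normal.
cov = [[1.0, rho], [rho, 1.0]]
state_va, concept_va = rng.multivariate_normal([0.0, 0.0], cov, n_teachers).T

# Teachers in the bottom quarter on the state-test measure...
bottom_quarter = state_va <= np.quantile(state_va, 0.25)
# ...who nonetheless land in the top half on the conceptual measure.
top_half = concept_va >= np.median(concept_va)

crossover = top_half[bottom_quarter].mean()
print(f"Bottom-quartile (state) teachers in top half (conceptual): {crossover:.1%}")
# With rho = 0.2 this prints roughly 40%; independence (rho = 0) gives 50%.
```

In this toy setup, knowing that a teacher sits in the bottom quarter on state-test value-added shifts the odds only modestly relative to chance, which is the substance of Rothstein’s objection.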

http://voices.washingtonpost.com/answer-sheet/research/new-analysis-challenges-gates-.html?wprss=answer-sheet
