Sunday, February 26, 2012

Florida's new teacher evaluation formula. There is no research that really, truly shows this is proven and valid to a teacher's performance, but hey, let's go with it, right?

From the Gainesville Sun

by Jackie Alexander

Teachers and school administrators are working under new evaluation requirements spurred by the Student Success Act passed by the Legislature last year, but one piece of the puzzle has district officials worried: the use of a “value-added” model.

It could be boom or bust, said Steven Stark, Alachua County Public Schools director of research and assessment.

“This year,” he said, “we're shooting in the dark.”

Teacher evaluations as designated by Senate Bill 736 are to contain three components, including principal observation, lesson study and a value-added score, which relies on student test data results.

Each component is worth 100 points, but weighted differently.

Principals' observations of teachers are now graded on a five-point scale from unsatisfactory to highly effective, and principals are required to complete two observations of each beginning teacher. Those observations count for 40 percent of a teacher's overall evaluation.

Lesson studies require teachers to collaborate and create a lesson while one teacher in the group carries out the lesson. The group then critiques the effectiveness of the lesson, which amounts to 20 percent of the total evaluation.

It's the last component, which accounts for 40 percent of a teacher's total evaluation, that has teachers worried.
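To make the weighting above concrete, here is a minimal sketch. The component names, their 100-point scales, and the 40/20/40 split come from the article; the combining arithmetic and function names are assumptions for illustration, not the state's published formula.

```python
# Weights described in the article: 40% principal observation,
# 20% lesson study, 40% value-added (VAM) score.
WEIGHTS = {
    "principal_observation": 0.40,
    "lesson_study": 0.20,
    "value_added": 0.40,
}

def overall_score(components: dict) -> float:
    """Combine three 100-point component scores into a weighted total."""
    return sum(components[name] * weight for name, weight in WEIGHTS.items())

# Example: strong observations and lesson study can still be dragged
# down by a weak VAM score, which is why the last component worries teachers.
print(overall_score({
    "principal_observation": 90,
    "lesson_study": 75,
    "value_added": 50,
}))  # 90*0.4 + 75*0.2 + 50*0.4 = 71.0
```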

How teachers fare on their evaluations means more under Florida's Student Success Act. Two years in a row of unsatisfactory performance, and you're out. Perform well and raises ensue.

Value-added models, as explained by Stark during a School Board workshop, intend to measure if a teacher added any value or knowledge to a student throughout the course of the school year.

A teacher's value-added model score, or VAM score, is based on the teacher's own score as well as half of the school's VAM score.

The calculation looks at where a student begins on the educational spectrum, predicts how they will perform at the end of the year and then compares the prediction to the actual test results.
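The calculation described above can be sketched as follows. This is a toy illustration of the predict-then-compare idea only: the prediction itself comes from the state's statistical model, and the scores, function names, and simple averaging here are assumptions, not the actual formula.

```python
from statistics import mean

def value_added(predicted: float, actual: float) -> float:
    """Gap between a student's actual score and the model's prediction.
    Positive means the student outperformed the prediction."""
    return actual - predicted

def teacher_vam(score_pairs: list[tuple[float, float]]) -> float:
    """Average the prediction gaps across a teacher's students
    (a simplifying assumption for illustration)."""
    return mean(value_added(p, a) for p, a in score_pairs)

# Three hypothetical (predicted, actual) FCAT score pairs:
# students beat the prediction by 15, miss by 5, and beat it by 12.
print(teacher_vam([(310, 325), (290, 285), (300, 312)]))  # (15 - 5 + 12) / 3
```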

The two biggest factors in determining the predicted score, Stark said, are the student's scores on the two previous years' FCATs.

With the FCAT having changed over the past two years, coupled with the change in scores needed to pass the test, school administrators are skeptical.

Kanapaha Middle School Principal Jenny Wise said teachers are concerned.

“Because the FCAT changes from year to year to year, I don't think they (teachers) have a warm fuzzy about what their VAM will look like,” she said. “Everything we've done to prepare students to do well (on the FCAT) has changed now.”

Last year students took the FCAT 2.0, touted as a more difficult test requiring deeper understanding of course material as well as more questions to gauge critical thinking.

Education experts have said that the FCAT wasn't designed to assess how well teachers perform, and that the test is barely a barometer of student performance, since its scores have been designed to ensure a certain number of students achieve rather than to measure what they achieve.

The model also factors in dozens of other elements, including student attendance, intervention methods and various learning disabilities.

Those variables will be calculated by American Institutes for Research, which works with the Los Angeles Unified School District and New York City schools.

Teachers won't know how they fared on the model until 2013, officials said.

Teachers who don't teach FCAT-tested subjects and grades will receive the schoolwide reading VAM score for their evaluation, Stark said.

Local teachers union President Karen McCann said education experts and researchers are still divided as to whether the value-added model indicates how well a teacher teaches.

“There is no research that really, truly shows this is proven and valid to a teacher's performance,” she said.

A recent Harvard University study argued that using VAM scores in teacher evaluations could lead to teaching to the test or cheating. According to an investigative report by the state of Georgia, it was the pressure of a data-driven environment that led to a widespread cheating scandal in the Atlanta School System.

Howard Bishop Middle School Principal Mike Gamble said he spoke with a researcher who worked on the model to gain more understanding.

“I think this is a step in the right direction, and it's an attempt to be more fair as opposed to just the growth on the FCAT,” he said. “It's far from perfect obviously.”

http://www.gainesville.com/article/20120225/articles/120229643
