Tuesday, April 24, 2012

Pearson: 250 Million. Kids, teachers: Zero.

From the Examiner by Jennie Smith

$250 million. That is what the state of Florida (read: you, me, and every other adult you know) is paying NCS Pearson to administer and score the FCAT through the end of 2013.

$250 million could buy a lot. Should buy a lot. But in the case of Pearson contracts, Florida seems to be getting ripped off.

It's that time of year again, feared and dreaded by students and educators alike in the Age of Accountability: FCAT Season. Students can be retained in third grade if they don't pass their FCAT. They will be denied high school graduation if they don't pass their FCAT. For the first time ever, 50% of teachers' evaluations will be based on student FCAT scores (often not in their subject, and in many cases not from their own students). And in the Age of Accountability, Evaluations Mean Something. Two negative evaluations out of three (or two negative in a row) mean the unemployment line. As of 2014, those same scores will also determine pay--not just a bonus on top of base pay, but a permanent raise or none. As they have been for many years now, schools will be assigned a letter grade based on student FCAT performance--only now the test is harder and the "cut" scores (the scores considered passing) are higher, meaning hundreds of schools in Dade County alone will see their rating drop by as much as two letter grades.

It's about time, say the proponents, including the Tallahassee lawmakers who passed SB 736 last year (mandating 50% of teachers' evaluations be based on student test scores) and the Florida Department of Education, which supported drastic changes to the FCAT cut scores and school accountability system (such as including special education students' and English language learners' FCAT scores in the calculations of the school's grade, and considering special education students graduating with a special diploma "dropouts"). Standards must be raised. We must not get complacent. We should make sure teachers are doing their job. We should make sure students have mastered the skills and knowledge necessary to move on to the next grade or to college.

Except...

The FCAT has been subject to its fair share of criticism for years, from bloggers to parents to the Office of the Inspector General of the U.S. Department of Education. But this year, as test administration changes from paper and pencil to computer, there are a whole new set of concerns.

This is a secure test...(ssssh, don't tell your friends what's on it).

Education reporter Laura Isensee revealed in the Miami Herald on Friday, April 20, that the tenth-grade FCAT reading tests use the same passages and questions throughout a "testing window" as long as two weeks--necessary because the test has moved online and most high schools do not have enough computers to administer it to all tenth graders on the same day. (The article notes that the Florida Department of Education confirmed this in vague terms, and it has been confirmed by word of mouth within my own school.)

Why does this matter? After all, students are not allowed to take any papers or notes out of the testing room, and have to turn in their cell phones and other electronic devices before the test begins. They could not possibly memorize all the questions and answers on the test and then share them with their friends, could they?

Perhaps not. But the FCAT reading consists of just a few passages with 26 questions, and many of the passages used can be found outside the testing lab. When students leave the test, they can tell their friends (or post on Twitter) the titles of the passages on the test. Seeing the passages ahead of time would give a great advantage to students taking the test in the second or third group.

They could also report back to classmates which vocabulary words they will be asked about, or what background knowledge they will be expected to have--giving those students the opportunity to look it up before testing.

The Florida Department of Education has made students sign a pledge this year not to talk about the content of the test or share test information with anyone.

Well, in that case...whew! Good thing the test is secure. I feel very confident now that the test is an accurate assessment of every child's reading skills and a good basis for my evaluation!

End-of-Course Exams...brought to you by Pearson, Inc. (Who else?)

SB 736, the "teacher tenure bill" that passed in the 2011 session, mandates a standardized test for every subject taught by every school in every district in the state by 2014 (without providing a penny of funding for the creation, piloting, implementation and scoring of those same tests, of course). These are the infamous new "end-of-course exams," better known as EOCs, and NCS Pearson has the lucrative contract.

Naturally, these new EOCs are all computer-based as well...contributing to the two-month testing calendar at many high schools lacking sufficient computers.

The testing windows for the EOCs are just as long as those for the FCAT reading.

That means the possibility of cheating by reporting content is just as strong--making the tests just as invalid, and consequently the teacher evaluations based on those tests (by 2014, every teacher's evaluation must be based on the EOC in his or her content area) just as invalid.

While students may not memorize exact questions from the test, they could (and will) very easily report back to their peers which concepts were tested--thereby giving later test-takers the advantage of "brushing up" on those concepts and skipping the ones not covered on the test.

The test questions are harder. (And some of them are misleading or flat-out wrong, too!)

Robert Krampf, a science blogger, checked out the FLDOE's FCAT Science Test Item Specifications as he made FCAT practice questions to help students review for the test. He found that some definitions listed in the specifications were flat-out wrong: for instance, the definition of a predator as "an organism that obtains nutrients from other organisms," and germination as "the process by which plants grow from seeds to spores or from seeds to buds."

He also found multiple-choice questions where some of the "wrong answers" were scientifically correct answers.

The response of the FLDOE? Fifth grade students would not be expected to know enough science beyond the benchmarks to know that the scientifically correct answers were actually, well, correct.

"[w]e need to keep in mind what level of understanding 5th graders are expected to know according to the benchmarks. We cannot assume they would receive instruction beyond what the benchmark states."

In other words, a student with a really great science teacher who has taught beyond the benchmarks in the curriculum, or a very bright, passionate student with a thirst for science who pursued more knowledge on his own, would be penalized for choosing a "wrong" answer even though it was actually right.

Reading passages on the test have been found to be confusing and misleading.

Enter the story of the "sleeveless pineapple": a nonsensical children's story, used on Pearson's standardized high-stakes tests across several states, that the state of New York threw out after re-examining it and deeming it confusing and misleading.

While all of the questions are being removed from children's scores in New York, the passage and questions have already been used on countless other tests across the country--along with all the high-stakes consequences that accompany them.

Problems with the test questions cannot be reported by teachers, as they are not allowed to see the test.

The following scenario was reported by one teacher (and similar stories have been echoed throughout the state): as students take their algebra end-of-course exam--one of the EOCs discussed above, being phased in to gradually replace the FCAT--a student who has worked out a problem does not see his answer among the multiple-choice options, and raises his hand. The teacher sees that the student has worked the problem correctly, and that the correct answer is indeed missing from the answer choices.

The teacher can do nothing about this situation, because by law he is not allowed to see the test or assist students taking the test in any way.

Thus, wrong questions with wrong answers pass by in silence, and can determine whether students advance to the next grade level, get credit for a class, or graduate from high school.

And the results of those students will determine which teachers keep or lose their jobs, and which teachers are eligible or not for a raise.

If bubble-tests weren't bad enough, now the writing tests are scored by computers, too.

As a teacher, I have always had a big problem with multiple choice tests. They do not encourage students to think creatively or in depth, but rather to narrow down choices using process of elimination, thus accepting answers given to them by someone else. For my own tests, I prefer open-ended short answer questions and essays. Sometimes I have even been enlightened by my own students' insights, as they have brought perspectives I had not even considered, and justified them so well that I cannot help but agree with their point by the end.

Some of those same students do not perform as well as one would expect intelligent children to score on standardized tests, precisely because they overthink the questions and answer choices, and start going into all the "but what if"s for each question.

The only problem with open-ended questions and essays, as any teacher can tell you--particularly those of us with over 200 students, thanks to Florida's revamped class size amendment--is that they take time to grade. I can grade an entire class of multiple-choice or fill-in-the-blank tests in the time it takes me to grade one student's paragraph. (That includes giving detailed feedback--but then, in the Age of Accountability, there is no place for constructive feedback; it is enough to say an answer is right or wrong, and leave it at that.)

Well, Pearson and friends have found a solution to that dilemma, too. It was not enough to have arguably incompetent "readers" grade at least 30 essays an hour--now there is a computer program that can score 16,000 essays in 20 seconds. Enter the Robo-Reader.

Not surprisingly, skeptics have found that you can trick the Robo-Reader into awarding high scores based simply on wording and sentence length--it cannot judge content or determine the truth of what one writes. Nor does it detect plagiarism.

I have played around with online essay scorers, purposely writing nonsense in an elegant way, and have received high scores.
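That weakness is easy to demonstrate with a toy model. The sketch below is entirely my own illustration--Pearson's actual scoring software is proprietary and far more elaborate--but it shows the basic failure mode: a scorer that rewards only surface features (word count, sentence length, "fancy" vocabulary) will hand elegant nonsense a top score, because it never examines meaning at all.

```python
# Toy essay scorer (my own illustration, NOT any vendor's real algorithm).
# It looks only at surface features, so well-dressed nonsense scores high.

def naive_score(essay: str) -> int:
    """Return a 1-6 score from surface features alone; content is never read."""
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    words = essay.split()
    avg_sentence_len = len(words) / max(len(sentences), 1)
    long_words = sum(1 for w in words if len(w) >= 7)

    score = 1
    if len(words) >= 50:                            # "developed" essay
        score += 1
    if avg_sentence_len >= 12:                      # "sophisticated" syntax
        score += 2
    if long_words / max(len(words), 1) >= 0.2:      # "advanced" vocabulary
        score += 2
    return min(score, 6)

# Grammatical, polysyllabic, and completely meaningless:
nonsense = (
    "Notwithstanding perspicacious elephants, the quintessential harmonica "
    "nevertheless calculates luminous spaghetti throughout innumerable "
    "metaphysical Tuesdays, because circuitous refrigerators frequently "
    "illuminate the magnanimous trajectory of bewildered algebra. " * 4
)

print(naive_score(nonsense))        # top score for pure gibberish
print(naive_score("The cat sat."))  # a true sentence scores at the bottom
```

Any metric computed purely from word and sentence statistics can be gamed this way, which is exactly what my experiments with online essay scorers suggest they are doing.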

This is the future...and these scores, too, could determine the fate of children and teachers.

For the honor, Pearson will take your money, thank you very much.

In the Age of Accountability, the only ones not accountable for anything, apparently, are the testing giants who have lobbied hard for the "accountability" legislation. Then again, they are accountable to someone...it's just not children or teachers.

They are accountable only to their shareholders.

Continue reading on Examiner.com http://www.examiner.com/article/pearson-250-million-kids-teachers-zero#ixzz1t0pO6gNP
