Evaluating teachers: Can the governor's reform panel get it right?

The education reform panel created by Gov. Nathan Deal is tackling many education challenges, but perhaps none thornier than how to evaluate and pay teachers.

The panel is considering abandoning the traditional model that rewards longevity and advanced degrees and instead linking teacher raises to student test scores. Such value-added measures have produced "ineffective" ratings even for teachers whose students attain high scores, including Sheri G. Lederman, a New York teacher suing over a low rating that she, supported by national testing experts, maintains is the outcome of an unproven and unreliable formula.

NEA President Lily Eskelsen Garcia (Credit: Maureen Downey)

“It is not only unproven, it’s proven to be corrupting,” said Lily Eskelsen Garcia, president of the 3 million-member National Education Association, who was in Atlanta Tuesday for a town hall event with teachers. “If you use test data for something it was never designed to do, then you will corrupt the data, corrupt what you are trying to assess and measure. All the research says warning, warning, don’t do it.”

Having taught in a homeless shelter in Salt Lake City, Eskelsen Garcia said, “If you were going to base my evaluation on my student test scores — students who, by definition, are a transient community — I would last about 15 minutes, and I was Utah Teacher of the Year. As someone who loves being evaluated, you could not truly judge the skills I had working with students, whether gifted students in the suburbs or kids in the homeless shelter, by looking at test scores.”

Nor can a teacher’s effectiveness be captured in a brief classroom observation, she said. “I got five out of five on all my teacher evaluations because I had fabulous bulletin boards. You were walking into Disney World walking into my room. I like a lot of colorful things, but that, too, has nothing to do with my effectiveness at teaching.”

Georgia lawmakers have derided teacher evaluations as meaningless because nearly everyone earns a satisfactory rating. But meaningful evaluation systems are apparently rare in any field. One national survey found 87 percent of employees and managers felt performance reviews were neither useful nor effective. A 2012 survey noted that 98 percent of human resources managers did not think annual reviews were helpful. Best practices now recommend constant feedback, analysis and refinement rather than a calendar-dictated, checklist-driven review.

Few teacher evaluation models are regarded as both reliable and capable of giving teachers beneficial feedback. One cited by many education leaders, including Eskelsen Garcia, is the Professional Growth System in Montgomery County, Md.

Developed by the teachers' union, school board and district, the Professional Growth System involves induction and mentoring of new teachers, ongoing job-embedded professional development and Peer Assistance and Review, or PAR, for teachers struggling to meet professional standards. In place for 15 years, the system is credited with contributing to the district's rising academic achievement and relatively low teacher turnover.

The multi-layered approach allays teacher fears that their assessment rests on the judgment of a single administrator or a single set of test scores. In Montgomery County, master teachers evaluate teachers found to be underperforming. If those teachers continue to struggle despite mentoring and support, a panel of teachers and principals decides whether termination is justified. Hundreds of teachers have been fired or have chosen to leave rather than go through the PAR program.

“It is not a dog-and-pony show of flashy bulletin boards; it's not where you sit on the edge of your seat and hope your special ed kids hit a certain cut score on a test or you might lose your job,” said Eskelsen Garcia.

Deal's education reform panel seems to understand the complexities. At the most recent meeting, members paid close attention when interim Fulton Superintendent Kenneth Zeff warned them, “If evaluations don’t work, we have a problem.”