Many of my friends are teachers. In New Mexico, we’ve all received our annual performance reviews, based on observations of the Danielson teaching characteristics, classroom surveys, and in some cases, value added modeling of test results or “VAM” scores. Some were given their evaluations just days ago, in the final hours of the school year.
Talk of those evaluations has dominated end-of-year parties, walks, and phone conversations, because most of my friends are disheartened by the results, disillusioned about the validity of the whole system, or both.
Some of the results of those performance reviews:
- An English teacher rated highly effective. She has taught seniors for many years, so she has no test results as part of her evaluation.
- An art teacher rated highly effective. Comparing his evaluation to that of his fellow art teacher, who received a lesser score, he found that the other teacher had higher test scores, and that they had comparable observation ratings from a principal who does not seem to know or care about art.
- A multi-endorsed teacher rated effective. Despite his “acceptable” rating, he was dissatisfied to find his score depressed because many high test scores from past years were not included in his evaluation. If they had been, he, like some other teachers in his building, might have been given the opportunity to move up the licensure and pay scale.
- A third-grade teacher rated minimally effective. While her observation scores were admirable, the VAM sample of 5 students’ test results was very low, dragging her overall score down. This, despite a year when she usually arrived at school at 6 am and co-planned on Sundays with the teacher next door, who was rated effective.
- A fourth grade teacher rated ineffective. For years, she has generally been regarded as among the best elementary school teachers in my district.
I won’t tell you what my evaluation mark was, except to say that it pleased me. Based on my friends’ experiences, it is evident that it would have been lower if I had spent all my time teaching in the classroom instead of providing professional development for others, and it would have been depressed further if it had included VAM scores. My wife also told me she thought my principal was softer than hers. She’s probably right.
Given these cautionary tales, we should be careful about how much we allow our evaluations to affect us. We should not judge ourselves based on small samples of data that do not reflect the scope of our work. We should also not compare ourselves to each other with a system using tests of different levels of quality, samples of different and questionable size, and evaluators who may or may not understand our disciplines.
So what do we do with all this information? We should treat ourselves as we treat our students! We should admonish ourselves that, regardless of rating, like every student, we can and will grow. We should find a grain or spoon or cupful of salt to take these evaluations with (even if they are favorable), and find something in them to learn from. We should examine what we feel about this evaluation system and its results, and then let it go so it doesn’t consume our precious summer rest.
And then we should place a bookmark in our hearts for the fall, to remember when we return that we teachers are important, smart, influential, and strong in number, and then band together to improve this system so it actually helps us be better at what we do.
- Letter from the President: Teacher Evaluations - June 16, 2015