Getting Evaluation Right

The verdict is in … traditional approaches to teacher evaluation aren’t working. See the recent RAND report assessing the multi-year, multimillion-dollar Gates effort, which found: “the initiative did not achieve its goals for student achievement or graduation, particularly for LIM (low-income minority) students.”

In Here’s How Not to Improve Public Schools, Cathy O’Neil argues that the Gates initiative did more than “not achieve its goals” … it actually “unfairly ruined careers, driving teachers out of the profession amid a nationwide shortage. And its flawed use of metrics has undermined science.”

And in a recent Forbes opinion piece, Peter Greene states it simply: “Creating a teacher evaluation system is hard—really hard.”

But there is a way to get evaluation right … for the past several years, I have been working with schools that are intentionally designing systems that build collective efficacy. Join me at the IB Conference in Vienna this October to learn how to get evaluation right, or contact me at Tigris Solutions.

Evaluation Systems Need Fixing

From a recent Edweek article:

“It’s clear to most educators that the current crop of teacher-evaluation systems is flawed, overwrought, and sometimes just plain broken …”

Consider IDEO’s findings about traditional annual reviews:

“No one likes annual reviews: They’re structured, overly formal, and they make it difficult to get real feedback that you can act upon.”

And consider a recent RAND study, in which:

“Only 31 percent of teachers reported that they have sufficient time to collaborate with other teachers.”

Rethink evaluation by exploring new approaches that work by building collective efficacy. Come to my pre-conference session on Opening Classroom Doors at the IB Global Conference in October. Or attend my session on Observers as Learners at Learning Forward this December. Or better yet, contact me at Tigris Solutions. There are better ways to enhance professional practice!

Collaboration = Amplification

Over the past six months, I’ve had the opportunity to co-facilitate a truly great professional learning program. It’s a partnership between the NJEA (the state teachers’ association) and the NJPSA (the state principals’ association) to offer a series of collaborative opportunities for teachers and administrators to work together to refine evaluation practices. Too often, evaluation systems pit educators against each other: teachers vs. principals. When true collaboration occurs, the system is refined, made productive, and ultimately reaches the intended goal: improving instruction for students.

We’re building on the idea shared by Randy Nelson: that collaboration is not just souped-up cooperation, but something altogether different. True collaboration amplifies the abilities of those involved, resulting in a better product than individuals can accomplish alone.

Last Monday, the first cohort came together to consider their current practices, unpack their expectations and beliefs, and commit to changes both teachers and supervisors can make to improve the system. They will meet again in December to review their work and continue planning. The second cohort is scheduled to meet at the end of October to begin its journey.


Based on the success of the first session, additional cohorts will be added for next year. If you are an NJ educator, you’ll want to check this out and consider signing up a team from your district: Collaborating to Strengthen Your Educator Evaluation System.

Funny, It Doesn’t Feel Like June

Sure, it’s cold outside and the end of the year feels far, far away. But believe it or not, it’s time to be thinking about your summative evaluation.

No matter which teacher practice evaluation model your district is using, there is a standard or domain that deals with professional responsibilities (Standard 1 in McREL, Domain 4 in Danielson, Domains 3 and 4 in Marzano, Standard 6 in Stronge, Standard F in Marshall).

This is the “backstage” work of teaching — very little of it can be seen when visiting a classroom to conduct walk-throughs or observations. This is an evaluation area dealing with participation in the professional community, leading and collaborating, and practicing in an ethical manner. For many years, teachers have been evaluated on these criteria in a binary fashion: satisfactory or not. Now part of the teacher evaluation process, professional responsibilities criteria must be examined and rated on (minimally) a four-level rubric.

Most of the evaluation models are rather generic when it comes to describing a teacher’s professional responsibilities. In school districts where the rubrics have not been further developed to provide concrete local exemplars of effective and highly effective practice, both supervisors and teachers are understandably perplexed about what constitutes enough data for analysis and exactly what those data represent.

This has resulted in something I like to call “Shopping Bag Syndrome.” Just prior to the summative evaluation conference, teachers frantically grab documents that represent their professional practice throughout the school year. They haul reams of documentation into their evaluation conference. It’s time to change this practice: the issue is not quantity, but quality.

Consider the Metropolitan Museum of Art in New York City. It houses millions of works of art spanning all of human history. It can’t possibly display everything! Instead, the curators choose a small percentage of the available artwork to exhibit in order to tell a specific story.

If you gathered every piece of paper or digital artifact that represents the work you do all year, it might fill a museum gallery as well! But this is both impractical and burdensome. Instead, you should curate — just as the Metropolitan Museum does — and carefully select a few items that represent the high quality of your professional responsibilities throughout the year.

If a supervisor tells you it’s not enough “stuff,” ask for a specific description of what is needed. If it’s just about collecting paper, you can easily do that. But how does that translate into effective and highly effective practice? Insist on clear exemplars for each of the professional responsibilities.

Why think about this in February? To avoid frantically scrambling through your files in June. This work is too important to be reduced to dumping reams of paper in shopping bags. Allow plenty of time to curate and gather a few exceptional examples of your practice that demonstrate your highly effective professionalism.

This article appeared in the February issue of the NJEA Review.

 

The tension between teacher evaluation and professional growth

In Evaluating America’s Teachers, W. James Popham writes:

Formative teacher evaluation describes evaluation activities directed toward the improvement of the teacher’s ongoing instruction … summative teacher evaluation refers to the appraisal of a teacher …

… a teacher who needs to improve must honestly identify those personal deficit areas that need improvement. Weaknesses can’t be remedied until they’ve been identified, and who knows better what teachers’ shortcomings are than those teachers themselves? But if the teacher is interacting with an evaluator whose mission, even partially, may be to excise that teacher from the teacher’s job, do you really think most teachers are going to candidly identify their own perceived shortcomings for such an evaluator? Not a chance!

Popham is suggesting a major overhaul to the way we are doing things.

When is a Teacher NOT a Teacher?

When she is a counselor, or a speech therapist, or a librarian, or a coach, or on the child study team … you get the point.

There are many education professionals who work in our schools to support students but don’t “teach” in the traditional sense of interacting with classrooms filled with students. In most districts, they are considered “teachers” as part of their employment contract. However, their jobs are not really the same: most of them don’t interact with large groups of students in a classroom setting.

Still, their jobs are critically important. And according to teacher evaluation regulations, their job performance must be evaluated using the district’s selected evaluation tool. For many, this is the epitome of trying to force a square peg into a round hole.

Some of the evaluation models in use across the state have job-specific rubrics to accommodate the accountability requirement. For example, the Danielson Framework for Teaching also provides Frameworks for Instructional Specialists, Library/Media Specialists, School Nurses, School Counselors, School Psychologists, and Therapeutic Specialists. These can be found in the 2007 publication of Enhancing Professional Practice: A Framework for Teaching (chapter 5). They are also available from districts using Teachscape as a data collection tool. One final suggestion is to contact the Danielson Group and request job-specific rubrics.

The Stronge Teacher Performance Evaluation System also provides a separate performance system for Educational Specialists (e.g., counselor, instructional coach, librarian, school nurse, school psychologist, school social worker, and selected other positions). Districts using the Stronge model can request those systems by contacting www.strongeandassociates.com.

Marzano districts can contact Learning Sciences International for the Non-Classroom Instructional Support Member Evaluation Form. These are standard issue for any district purchasing materials and software from Learning Sciences.

McREL users are not so fortunate; there are no rubrics for educational services teaching staff. At this point, they typically use their existing instruments. However, individual districts in NJ have created their own rubrics to use in the McREL format. Teachers in McREL districts should contact EIRC and request examples that have been created.

For those in Marshall districts, Kim Marshall suggests contacting a Massachusetts school district that has developed “tweaked” Marshall rubrics for 11 other job descriptions. Email Lisa Freedman (LFreedman@Westwood.k12.ma.us), who will share the rubrics that have been created.

No matter the model, it’s important to consider that these important jobs (nurses, counselors, coaches, librarians, therapists, child study teams, and more) look very different from district to district. The job descriptions may vary, even within one district (consider the difference between high school and elementary library/media specialists). Therefore, all criteria and rubrics must be considered contextually; educational professionals in “not-a-teacher” jobs must take a careful look at the evaluative criteria to see if they actually reflect the work. If not, they should recommend the rubrics be revised to more accurately describe their responsibilities and to clearly indicate the difference between effective and highly effective practice.

This work is simply too important to keep pushing a square peg into a round hole.

Note: This post originally appeared in the December issue of the NJEA Review.

Take Ownership of Your Observations

This article appears in the November issue of the NJEA Review.

Sweaty palms, tongue-tied, worried about every gesture, every word … that’s often how teachers feel during a classroom observation. Sure, it’s easy to say, “Don’t worry—just go about your normal teaching routine!” But we all know that the stakes are high. We all want to put our best foot forward, especially when the boss is watching.

And that’s really the problem—the idea that someone is watching, scrutinizing our every move. In reality, observations done by a skilled practitioner can produce extremely useful data that can help teachers improve in this incredibly demanding profession.

To turn the observation from an inspection into a useful data collection opportunity, both teachers and observers have to make sure that a few key things happen:

  • Teachers should try to relax and continue doing the good job they always do. Putting on a “show” for the sake of an observation is often a recipe for disaster. Even if you think you can pull it off, there’s bound to be one student who announces, “But we never do it this way!”
  • Observers need to continually hone their craft and become skilled at data collection. This requires broadening one’s perspective when in a classroom, tuning in to the nuances of a teacher’s language and instructional moves, listening to student conversations, and noting as much as possible.


Once an observation is complete, the data should be shared. Teachers, if you aren’t receiving the data, ask for it. That’s helpful information, because we all know that in the heat of the moment, it’s hard to remember exactly what happened during a lesson. Good, objective data can be a valuable tool and should be used as just that.

Once you receive the observation data, analyze its objectivity.

  • Is the data set a record of statements and/or actions made by teachers and students?
  • Is the data set objective in terms of its description, or is judgment or bias creeping in?
  • Is the data set complete or were important aspects of the lesson not captured?

Remember that the data collection job is not easy. Because it’s impossible for an observer to catch everything, teachers should contribute to the data set. For example, skilled teachers formatively assess in unobtrusive ways. They make adjustments to their lessons based on student responses, and they don’t often announce that they are departing from their plans. It is challenging for an observer to capture these subtle and skilled instructional moves, so be sure to share that information in an effort to round out the data set.

Look for the word “engagement” within the observation data. Noting that students are engaged is not an objective statement. It’s a judgment call, more than likely based on some observed student actions. Teachers should discuss this with observers and request a more thorough data collection that indicates how the observer arrived at that conclusion. Sharing student work often provides a more accurate depiction of student engagement levels.

I can tell you not to worry—but I know you probably will. The observation process produces anxiety, no matter our level of experience. However, we can try to lessen that anxiety by taking ownership of the process. Participate fully, know the rubrics, contribute data and lead the conversation about instructional practice.

Want answers about teacher evaluation? Ask!

 This column appears in the October 2014 issue of the NJEA Review.

Ah, October. The smell of pumpkins, fall foliage, a chill in the air … and a critical deadline that affects all teachers in New Jersey. According to AchieveNJ regulations, a district must annually notify all teaching staff members of the adopted evaluation policies and procedures no later than Oct. 1. If your superintendent hasn’t provided this information to your staff, it’s time to start asking questions.

Your District Evaluation Advisory Committee (DEAC) should have made a series of recommendations to the district superintendent regarding the design of a district’s teacher evaluation system. These decisions go well beyond the selection of a model (Danielson, Marshall, Marzano, McREL, Stronge, etc.). The district should compile all of the policies and procedures related to how it will implement the evaluation process for all staff members so that everyone knows what to expect from their observations and can begin to prepare now for summative evaluations.

For example, do you know the planned timeline of your observations for the year? The regulations require a minimum of one observation per semester (and three for the year if you are a non-tenured teacher). But there are other important timing considerations. Will your announced observation come first (so that you can take advantage of a pre-conference) and your unannounced come later in the year? Will there be a planned gap between observations (giving you an opportunity to reflect on your practice and consider opportunities for growth)? Timelines are important.

Do you know how your district is approaching the behind-the-scenes work of teaching (such as Instructional Planning and Professional Responsibilities)? These cannot be observed (for the most part) during classroom instruction — so how will they be assessed and when? What constitutes exemplary practice in, for example, record-keeping? Teachers need to be aware of these expectations and build a portfolio throughout the school year so they’re not scrambling in May to locate evidence in time for a summative evaluation meeting.

Most critically … what is your district’s approach to creating a summative score based on your yearly observations? Many schools are using software packages that default to a straight averaging method — one that is not conducive to highlighting teachers’ strengths or need for remediation. Will the district use a conjunctive formula (typically associated with the Marzano system)? Or a holistic approach (more often used with the Stronge model)? Perhaps there is a growth-oriented approach (only using the ratings from the strongest observation) or a modality focus. This is one of the more critical DEAC considerations and system design decisions that must be made and communicated to every teacher by October 1. Teachers must know how they will be assessed during their summative evaluation meeting; those conversations should never be a surprise.
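To see why the formula matters, here is a minimal sketch (in Python, using purely hypothetical ratings; real conjunctive and holistic rules are more elaborate than this) of how the same three observation ratings can yield very different summative results depending on the approach a district adopts:

```python
# Hypothetical example: three observation ratings for one teacher
# on a four-level rubric (1 = ineffective ... 4 = highly effective).
ratings = [2, 3, 4]

# Straight average -- the default in many software packages.
average_score = sum(ratings) / len(ratings)

# Growth-oriented approach -- count only the strongest observation.
best_score = max(ratings)

# A simple conjunctive rule (simplified here) -- a summative level is
# earned only if every observation meets it, so the weakest governs.
conjunctive_score = min(ratings)

print(average_score, best_score, conjunctive_score)  # 3.0 4 2
```

Under the averaging default this teacher lands at 3.0, a growth-oriented rule credits the 4, and a strict conjunctive rule holds the score to 2 — which is exactly why the DEAC’s choice of formula is worth asking about.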

Teachers and their supervisors all need to be on the same page. So if the October 1 deadline has come and gone, be sure to ask: What are the district policies and procedures regarding evaluation? A healthy system is one that keeps everyone informed.

Let’s hear it for slow and steady …

NJ policymakers are finally noticing that the runaway train of teacher evaluations needs some brakes applied. After several intensive sessions last week, an agreement was reached on key issues related to standardized testing and its use in evaluating teachers. A commission has been established to take a look at the entire standardized testing environment. At the same time, the weight of student growth objectives in a teacher’s evaluation rating has been reduced to 20 percent.

More on the story here.

Will other states take notice?