Getting Evaluation Right

The verdict is in … traditional approaches to teacher evaluation aren’t working. See the recent Rand report assessing the multi-year, multimillion-dollar Gates effort, which found: “the initiative did not achieve its goals for student achievement or graduation, particularly for LIM students.”

In Here’s How Not to Improve Public Schools, Cathy O’Neil argues that the Gates initiative did more than “not achieve its goals” … it actually “unfairly ruined careers, driving teachers out of the profession amid a nationwide shortage. And its flawed use of metrics has undermined science.”

And in a recent Forbes opinion piece, Peter Greene states it simply: “Creating a teacher evaluation system is hard—really hard.”

But there is a way to get evaluation right … for the past several years I have been working with schools that are intentionally designing systems that build collective efficacy. Join me at the IB Conference in Vienna this October to learn how to get evaluation right, or contact me at Tigris Solutions.

Evaluation Systems Need Fixing

From a recent Edweek article:

“It’s clear to most educators that the current crop of teacher-evaluation systems is flawed, overwrought, and sometimes just plain broken …”

Consider IDEO’s findings about traditional annual reviews:

“No one likes annual reviews: They’re structured, overly formal, and they make it difficult to get real feedback that you can act upon.”

And from a recent Rand study:

“Only 31 percent of teachers reported that they have sufficient time to collaborate with other teachers.”

Rethink evaluation by exploring new approaches that work by building collective efficacy. Come to my pre-conference session on Opening Classroom Doors at the IB Global Conference in October. Or attend my session on Observers as Learners at Learning Forward this December. Or better yet, contact me at Tigris Solutions. There are better ways to enhance professional practice!

Collaboration = Amplification

Over the past six months, I’ve had the opportunity to co-facilitate a truly great professional learning program. It’s a partnership between the NJEA (teachers’ association) and NJPSA (principals’ association) to offer a series of collaborative opportunities for teachers and administrators to work together to refine evaluation practices. Too often, evaluation systems pit educators against each other: teachers vs. principals. When true collaboration occurs, the system is refined, made productive, and ultimately reaches the intended goal: improving instruction for students.

We’re building on the idea shared by Randy Nelson: that collaboration is not just souped-up cooperation, but something altogether different. True collaboration amplifies the abilities of those involved, resulting in a better product than individuals can accomplish alone.

Last Monday, the first cohort came together to consider their current practices, unpack their expectations and beliefs, and commit to changes both teachers and supervisors can make to improve the system. They will meet again in December to review their work and continue planning. The second cohort is scheduled to meet at the end of October to begin their journey.


Based on the success of the first session, additional cohorts will be added for next year. If you are an NJ educator, you’ll want to check this out and consider signing up a team from your district: Collaborating to Strengthen Your Educator Evaluation System.

The tension between teacher evaluation and professional growth

In Evaluating America’s Teachers, W. James Popham writes:

Formative teacher evaluation describes evaluation activities directed toward the improvement of the teacher’s ongoing instruction … summative teacher evaluation refers to the appraisal of a teacher …

… a teacher who needs to improve must honestly identify those personal deficit areas that need improvement. Weaknesses can’t be remedied until they’ve been identified, and who knows better what teachers’ shortcomings are than those teachers themselves? But if the teacher is interacting with an evaluator whose mission, even partially, may be to excise that teacher from the teacher’s job, do you really think most teachers are going to candidly identify their own perceived shortcomings for such an evaluator? Not a chance!

All of which suggests a major overhaul to the way we are doing things.

Observations Deserve a Response

Cross-posted in the January issue of the NJEA Review:

At this point in the school year, teachers have probably been observed at least once. Whether the observation was announced or unannounced, it involved someone visiting the classroom, writing things down, and relating that data to the district’s teacher evaluation model.

Many districts have made significant efforts to make the process collaborative, inviting teacher input and ensuring strong, instructionally oriented conversations. For those who experienced a less than collaborative process, there are steps that can be taken to make one’s voice heard. It’s important to remember that teachers were actually present during these observations, and they might have something valuable to say about their own instruction.

In New Jersey, regulations require a post-observation meeting for every observation. They specifically state that the post-observation conference is for the purpose of:

  • Reviewing the data collected at the observation;
  • Connecting the data to the teacher practice instrument;
  • Connecting the data to the teacher’s individual professional development plan;
  • Collecting additional information needed for the evaluation of the teacher;
  • Offering areas to improve effectiveness.

Although it may be tempting to avoid the post-observation meeting, an electronic communication is a poor substitute for addressing all of these points. So how should teachers prepare?

First, review the data collected. Make sure that it is objective, free from bias and opinion. A list of judgmental statements, quotes from a rubric, or suggestions for improvement are not objective data. Rather, they are the observer’s impressions and judgments that were made on the spot. If the data received by a teacher appears biased and subjective, it becomes an important topic of conversation during the post-observation conference. Without good data, any discussion of instruction is inherently flawed.

Second, review the data set to analyze how it specifically ties to the criteria and rubrics of the teacher practice instrument. There should be enough data connected to each component or standard to make a judgment about the level of performance. There should be enough data to represent a teacher’s overall pattern of practice throughout the lesson, not simply small snippets of information that might be considered outliers.

Third, offer supplemental data that is relevant to the observation and tied directly to the data collected by the observer. For example, the lesson plan is an important artifact that should relate closely to the data collected around the implementation of the lesson. Student work is also highly relevant, and many observers do not have an opportunity to review the work products produced during or at the culmination of a lesson. Share any aspects of the lesson that did not go according to plan—where adjustments were made based on individual students’ needs and abilities, where flexibility was required to handle those unexpected issues that are a regular part of the school day, or where instructional changes were made based on the formative assessments conducted during the lesson.

Fourth, plan on sharing impressions of the lesson, the data collected, and where it falls on the rubric. Consider those aspects of instruction that appear strong, and consider one or two areas that might benefit from a shift in practice. It is important to remember that the observation focus should not necessarily be on things that need “fixing.” Teaching is a complicated business—even when a lesson is very good, it might still benefit from some modifications.

Finally, summarize the observation data, the supplemental data, and the language of the teacher practice instrument. Writing a well-informed response to every observation, whether it was positive or not, ensures that teacher voices are heard.

 

When is a Teacher NOT a Teacher?

When she is a counselor, or a speech therapist, or a librarian, or a coach, or on the child study team … you get the point.

There are many education professionals who work in our schools to support students but don’t “teach” in the traditional sense, interacting with classrooms filled with students. In most districts they are considered “teachers” as part of their employment contract. Their jobs, however, are not really the same. Most of them don’t interact with large groups of students in a classroom setting.

Nevertheless, their jobs are critically important. And according to teacher evaluation regulations, their job performance must be evaluated using the district’s selected evaluation tool. For many, this is the epitome of trying to force a square peg into a round hole.

Some of the evaluation models in use across the state have job-specific rubrics to accommodate the accountability requirement. For example, the Danielson Framework for Teaching also provides Frameworks for Instructional Specialists, Library/Media Specialists, School Nurses, School Counselors, School Psychologists, and Therapeutic Specialists. These can be found in the 2007 publication of Enhancing Professional Practice: A Framework for Teaching (chapter 5). They are also available from districts using Teachscape as a data collection tool. One final suggestion is to contact the Danielson Group and request job-specific rubrics.

The Stronge Teacher Performance Evaluation System also provides a separate performance system for Educational Specialists (e.g., counselor, instructional coach, librarian, school nurse, school psychologist, school social worker, and selected other positions). Districts using the Stronge model can request those systems by contacting www.strongeandassociates.com.

Marzano districts can contact Learning Sciences International for the Non-Classroom Instructional Support Member Evaluation Form. These are standard issue for any district purchasing materials and software from Learning Sciences.

McREL users are not so fortunate; there are no rubrics for educational services teaching staff. At this point, they typically use their existing instruments. However, individual districts in NJ have created their own rubrics to use in the McREL format. Teachers in McREL districts should contact EIRC and request examples that have been created.

For those in Marshall districts, Kim Marshall suggests contacting a Massachusetts school district that has developed “tweaked” Marshall rubrics for 11 other job descriptions. Email Lisa Freedman (LFreedman@Westwood.k12.ma.us), who will share the rubrics that have been created.

No matter the model, it’s important to consider that these important jobs (nurses, counselors, coaches, librarians, therapists, child-study teams, and more) look very different from district to district. The job descriptions may vary, even within one district (consider the difference between high school and elementary library/media specialists). Therefore, all criteria and rubrics must be considered contextually; those educational professionals in “not-a-teacher” jobs must take a careful look at the evaluative criteria to see if they actually reflect their work. If not, teachers should recommend the rubrics be revised to more accurately describe their responsibilities—and clearly indicate the difference between effective and highly effective practice.

This work is simply too important to keep pushing a square peg into a round hole.

Note: This post originally appeared in the December issue of the NJEA Review.

Value-added modeling is very, very tricky

From NPR Ed, A Botched Study Raises Bigger Questions:

Both student growth measures and value-added models are being adopted in most states. Education secretary Arne Duncan is a fan. He wrote on his blog in September, “No school or teacher should look bad because they took on kids with greater challenges. Growth is what matters.” Joanne Weiss, Duncan’s former chief of staff, told me last month, “If you focus on growth you can see which schools are improving rapidly and shouldn’t be categorized as failures.”

But there’s a problem. The math behind value-added modeling is very, very tricky. The American Statistical Association, earlier this year, issued a public statement urging caution in the use of value-added models, especially in high-stakes conditions. Among the objections:

• Value-added models are complex. They require “high-level statistical expertise” to do correctly;
• They are based only on standardized test scores, which are a limited source of information about everything that happens in a school;
• They measure correlation, not causation. So they don’t necessarily tell you if a student’s improvement or decline is due to a school or teacher or to some other unknown factor;
• They are “unstable.” Small changes to the tests or the assumptions used in the models can produce widely varying rankings.
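The “unstable” objection is easy to see in a small simulation. The sketch below is a hypothetical illustration of my own (not drawn from any of the studies cited here): it gives 20 equally effective teachers noisy classroom-average “growth” scores and ranks them in two consecutive “years.” The rankings shuffle even though nothing about the teachers changed.

```python
import random

random.seed(1)

def value_added_estimates(num_teachers=20, class_size=25, noise_sd=10.0):
    """Simulate one year of 'value-added' scores for equally effective teachers.

    Every teacher's true effect is zero; each estimate is the average
    growth of one class, where growth is pure measurement noise.
    """
    estimates = []
    for _ in range(num_teachers):
        class_growth = [random.gauss(0, noise_sd) for _ in range(class_size)]
        estimates.append(sum(class_growth) / class_size)
    return estimates

# Rank the same 20 identical teachers in two "years" that differ only in noise.
year1 = value_added_estimates()
year2 = value_added_estimates()
rank1 = sorted(range(20), key=lambda t: year1[t], reverse=True)
rank2 = sorted(range(20), key=lambda t: year2[t], reverse=True)

print("Top 5 teachers, year 1:", rank1[:5])
print("Top 5 teachers, year 2:", rank2[:5])
```

Real value-added models are far more elaborate than this toy, but the underlying issue is the same: when class-level noise is large relative to true differences in effectiveness, rankings bounce from year to year.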

Read the entire article here.

Using the FIT Teaching™ Framework for Successful Teacher Evaluations

The FIT Teaching™ framework, a coherent approach designed for schools and districts, ensures that high-quality teaching and learning occur in every classroom, every day. Based on the work of Douglas Fisher and Nancy Frey, FIT Teaching is both a tool for teachers to ensure success for every learner and a resource for supervisors to conduct successful observations and evaluations that support instructional growth. Fisher and Frey have developed a clear and thoughtful framework that, when consistently implemented, results in success for all. Using FIT Teaching, teachers can show continuous growth in a high-stakes evaluation process; more important, students are provided the opportunity to thrive and achieve.

Download a FIT Teaching white paper that describes the framework and aligns it to the five major teacher evaluation models (Danielson, Marshall, Marzano, McREL, Stronge). Access the paper from the ASCD website here.

Join me for a FIT Teaching workshop in sunny La Jolla, California on December 2-3, 2014. Click for information.

Ensuring Equitable Access to Effective Educators

The US Department of Education recently released a guidance document in an attempt to influence educator effectiveness reform across the US.

In the NY Times article U.S. to Focus on Equity in Assigning of Teachers:

… states must develop plans by next June that make sure that public schools comply with existing federal law requiring that “poor and minority children are not taught at higher rates than other children by inexperienced, unqualified or out-of-field teachers.”

…In an increasingly rare show of agreement with the Obama administration, Randi Weingarten, the president of the American Federation of Teachers, the country’s second largest teachers’ union, welcomed the guidance.

“We’re supporting this process because the rhetoric around this process has changed from ‘Just come up with the data and we will sanction you if the data doesn’t look right,’ ” Ms. Weingarten said in a telephone interview, “to ‘What’s the plan to attract and support and retain qualified and well-prepared teachers for the kids who need it most.’ ”

But other education advocates said they were concerned that the guidance could lack teeth. “The very real risk is that this just becomes a big compliance paperwork exercise,” said Daria Hall, K-12 policy director at the Education Trust, a nonprofit group that advocates for racial minority students and low-income children, “and nothing actually happens on behalf of kids.”

From Edweek‘s States Must Address Teaching Gaps:

Key takeaways:

  • At a minimum, state plans have to consider whether low-income and minority kids are being taught by inexperienced, ineffective, or unqualified teachers at a rate that’s higher than other students in the state. That’s not really a new or surprising requirement: It’s something that states were supposed to have been doing for the past 12 years under NCLB, which was signed into law in 2002.
  • States aren’t required to use any specific strategies to fix their equity gaps. They can consider things like targeted professional development, giving educators more time for collaboration, revamping teacher preparation at post-secondary institutions, and coming up with new compensation systems.
  • States have to consult broadly with stakeholders to get a sense of the problem and what steps should be taken to address it.
  • States also have to figure out the “root causes” of teacher distribution gaps, and then figure out a way to work with districts to address them. For instance, if a state decides that the “root cause” of inequitable teacher distribution is lack of support and professional development for teachers, it would have to find a way to work with institutions of higher education and other potential partners to get educators the help they need, by hiring mentors or coaches, for example. States can consider the “geographical” context of districts when making these decisions. (In other words, states may want to try a different set of interventions on rural schools as opposed to urban and suburban schools.)

Huffington Post (in What the White House is Doing to Make Sure Low-Income Students Get Good Teachers) adds:

 “The guidance released here — it’s honestly pretty fluffy, it’s just a non-binding plan,” Chad Aldeman, associate partner at the nonprofit Bellwether Education Partners, told The Huffington Post.

And the Washington Post, in Trying to Get Better Teachers into Nation’s Poor Classrooms, concludes:

Daniel A. Domenech, executive director of the American Association of School Administrators, said the move by the Obama administration is well-intentioned but will have little impact.

“Effective teachers tend to be attracted to districts that pay higher salaries and have what might be referred to as better working conditions,” he said. “This just ignores the whole question of poverty. There seem to be blinders on the part of our policymakers in that they refuse to acknowledge the impact of poverty on our educational system.”

Take Ownership of Your Observations

This article appears in the November issue of the NJEA Review.

Sweaty palms, tongue-tied, worried about every gesture, every word … that’s often how teachers feel during a classroom observation. Sure, it’s easy to say, “Don’t worry—just go about your normal teaching routine!” But we all know that the stakes are high. We all want to put our best foot forward, especially when the boss is watching.

And that’s really the problem—the idea that someone is watching, scrutinizing our every move. In reality, observations done by a skilled practitioner can produce extremely useful data that can help teachers improve in this incredibly demanding profession.

To turn the observation from an inspection into a useful data collection opportunity, both teachers and observers have to make sure that a few key things happen:

  • Teachers should try to relax and continue doing the good job they always do. Putting on a “show” for the sake of an observation is often a recipe for disaster. Even if you think you can pull it off, there’s bound to be one student who announces, “But we never do it this way!”
  • Observers need to continually hone their craft and become skilled at data collection. This requires broadening one’s perspective when in a classroom, tuning in to the nuances of a teacher’s language and instructional moves, listening to student conversations, and noting as much as possible.


Once an observation is complete, the data should be shared. Teachers, if you aren’t receiving the data, ask for it. That’s helpful information, because we all know that in the heat of the moment, it’s hard to remember exactly what happened during a lesson. Good, objective data can be a valuable tool and should be used as just that.

Once you receive the observation data, analyze its objectivity.

  • Is the data set a record of statements and/or actions made by teachers and students?
  • Is the data set objective in terms of its description, or is judgment or bias creeping in?
  • Is the data set complete or were important aspects of the lesson not captured?

Remember that the data collection job is not easy. Because it’s impossible for an observer to catch everything, teachers should contribute to the data set. For example, skilled teachers formatively assess in unobtrusive ways. They make adjustments to their lessons based on student responses, and they don’t often announce that they are departing from their plans. It is challenging for an observer to capture these subtle and skilled instructional moves, so be sure to share that information in an effort to round out the data set.

Look for the word “engagement” within the observation data. Noting that students are engaged is not an objective statement. It’s a judgment call, more than likely based on some observed student actions. Teachers should discuss this with observers and request a more thorough data collection that indicates how the observer arrived at that conclusion. Sharing student work often provides a more accurate depiction of student engagement levels.

I can tell you not to worry—but I know you probably will. The observation process produces anxiety, no matter our level of experience. However, we can try to lessen that anxiety by taking ownership of the process. Participate fully, know the rubrics, contribute data and lead the conversation about instructional practice.