When she is a counselor, or a speech therapist, or a librarian, or a coach, or on the child study team … you get the point.
Many education professionals work in our schools to support students but don’t “teach” in the traditional sense of leading a classroom filled with students. In most districts they are considered “teachers” under their employment contracts. However, their jobs are not really the same: most of them don’t interact with large groups of students in a classroom setting.
However, their jobs are critically important. And according to teacher evaluation regulations, their job performance must be evaluated using the district’s selected evaluation tool. For many, this is the epitome of trying to force a square peg into a round hole.
Some of the evaluation models in use across the state have job-specific rubrics to accommodate the accountability requirement. For example, the Danielson Framework for Teaching also provides Frameworks for Instructional Specialists, Library/Media Specialists, School Nurses, School Counselors, School Psychologists, and Therapeutic Specialists. These can be found in the 2007 publication of Enhancing Professional Practice: A Framework for Teaching (chapter 5). They are also available from districts using Teachscape as a data collection tool. One final suggestion is to contact the Danielson Group and request job-specific rubrics.
The Stronge Teacher Performance Evaluation System also provides a separate performance system for Educational Specialists (e.g., counselor, instructional coach, librarian, school nurse, school psychologist, school social worker, and selected other positions). Districts using the Stronge model can request those systems through www.strongeandassociates.com.
Marzano districts can contact Learning Sciences International for the Non-Classroom Instructional Support Member Evaluation Form. These are standard issue for any district purchasing materials and software from Learning Sciences.
McREL users are not so fortunate; there are no rubrics for educational services teaching staff. At this point, they typically use their existing instruments. However, individual districts in NJ have created their own rubrics to use in the McREL format. Teachers in McREL districts should contact EIRC and request examples that have been created.
For those in Marshall districts, Kim Marshall suggests contacting a Massachusetts school district that has developed “tweaked” Marshall rubrics for 11 other job descriptions. Email Lisa Freedman (LFreedman@Westwood.k12.ma.us), who will share the rubrics that have been created.
No matter the model, it’s important to consider that these important jobs (nurses, counselors, coaches, librarians, therapists, child study teams, and more) look very different from district to district. Job descriptions may vary even within one district; consider the difference between high school and elementary library/media specialists. Therefore, all criteria and rubrics must be considered contextually. Educational professionals in “not-a-teacher” jobs must take a careful look at the evaluative criteria to see if they actually reflect their work. If not, they should recommend that the rubrics be revised to more accurately describe their responsibilities and to clearly indicate the difference between effective and highly effective practice.
This work is simply too important to keep pushing a square peg into a round hole.
Nuances of Working with the Danielson Model for Teacher Evaluation
Thursday: 1 – 2:30 p.m. Room 303
Friday: 1 – 2:30 p.m. Room 303
The Danielson Framework for Teaching describes a set of knowledge and skills that can be used to help teachers achieve high standards of professional teaching practice. However, a cursory knowledge of the model is insufficient for success. This session will focus on the observable components, highlighting often confusing differences among them. In doing so, teachers will develop strategies for pushing their practice and striving for highly effective instruction.
Teacher Evaluation — Behind the Scenes Work (Professional Responsibilities)
Thursday: 3 – 4:30 p.m. Room 303
Friday: 9:30 – 11 a.m. Room 303
No matter which teacher practice evaluation instrument your district uses, they all have standards dealing with Professional Responsibilities (Stronge Standard 6, McREL Standard 1, Marshall Standard F, Marzano Domains 3 and 4, Danielson Domain 4). These are typically “unobservable,” as they describe teachers’ work outside of their interactions with students. This session will explore processes teachers can consider in gathering and analyzing data around their professional practice.
Ah, October. The smell of pumpkins, fall foliage, a chill in the air … and a critical deadline that affects all teachers in New Jersey. According to AchieveNJ regulations, a district must annually notify all teaching staff members of the adopted evaluation policies and procedures no later than Oct. 1. If your superintendent hasn’t provided this information to your staff, it’s time to start asking questions.
Your District Evaluation Advisory Committee (DEAC) should have made a series of recommendations to the district superintendent regarding the design of a district’s teacher evaluation system. These decisions go well beyond the selection of a model (Danielson, Marshall, Marzano, McREL, Stronge, etc.). The district should compile all of the policies and procedures related to how it will implement the evaluation process for all staff members so that everyone knows what to expect from their observations and can begin to prepare now for summative evaluations.
For example, do you know the planned timeline of your observations for the year? The regulations require a minimum of one observation per semester (and three for the year if you are a non-tenured teacher). But there are other important timing considerations. Will your announced observation come first (so that you can take advantage of a pre-conference) and your unannounced come later in the year? Will there be a planned gap between observations (giving you an opportunity to reflect on your practice and consider opportunities for growth)? Timelines are important.
Do you know how your district is approaching the behind-the-scenes work of teaching (such as Instructional Planning and Professional Responsibilities)? These cannot, for the most part, be observed during classroom instruction — so how will they be assessed, and when? What constitutes exemplary practice in, for example, record-keeping? Teachers need to be aware of these expectations and build a portfolio throughout the school year so they’re not scrambling in May to locate evidence in time for a summative evaluation meeting.
Most critically … what is your district’s approach to creating a summative score based on your yearly observations? Many schools are using software packages that default to a straight averaging method — one that is not conducive to highlighting teachers’ strengths or need for remediation. Will the district use a conjunctive formula (typically associated with the Marzano system)? Or a holistic approach (more often used with the Stronge model)? Perhaps there is a growth-oriented approach (only using the ratings from the strongest observation) or a modality focus. This is one of the more critical DEAC considerations and system design decisions that must be made and communicated to every teacher by Oct. 1. Teachers must know how they will be assessed during their summative evaluation meeting; those conversations should never be a surprise.
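To see why the scoring method matters, here is a minimal sketch using entirely hypothetical observation ratings (the 1–4 scale common to these instruments is assumed; your district’s actual scale, weights, and formula may differ). It simply compares a straight average against a growth-oriented approach that keeps only the strongest rating:

```python
# Hypothetical ratings from three observations across the year (1-4 scale assumed)
observations = [2.6, 3.1, 3.5]

# Straight averaging -- the common software default
average_score = sum(observations) / len(observations)

# Growth-oriented approach -- count only the strongest observation
growth_score = max(observations)

print(f"Straight average:   {average_score:.2f}")
print(f"Strongest rating:   {growth_score:.2f}")
```

With these sample numbers, the same teacher lands at about 3.07 under averaging but 3.50 under the growth-oriented approach — a difference that can move a rating category, which is exactly why the chosen method must be communicated before the deadline.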
Teachers and their supervisors all need to be on the same page. So if the October 1 deadline has come and gone, be sure to ask: What are the district policies and procedures regarding evaluation? A healthy system is one that keeps everyone informed.