Testing: Too much, too far, too fast

In the NY Times:

“This is the proverbial perfect storm of testing that has hit not only Florida but all the states,” said Alberto M. Carvalho, the influential superintendent of Miami-Dade County Schools, the fourth-largest district in the country, who was named the 2014 national superintendent of the year. “This is too much, too far, too fast, and it threatens the fabric of real accountability.”

Read the full article, States Listen as Parents Give Rampant Testing an F

How Much Testing Is Enough?

NPR Ed:

…the Council of Chief State School Officers and the Council of the Great City Schools announced the initial results of an attempt to quantify the current state of testing in America.

Their survey of large districts showed students taking an average of 113 standardized tests between pre-K and grade 12, with 11th grade the most tested.

Another recent study by the Center for American Progress looked at 14 school districts. It found that students in grades 3-8 take an average of 10 standardized assessments per year, with a high of 20. That doesn’t count tests required of smaller groups of students, like English-language learners.

What may be a little trickier is defining just which tests qualify as “unnecessary.” The CCSSO survey describes testing requirements that have seemingly multiplied on their own without human intervention, like hangers piling up in a closet.

They found at least 23 distinct purposes for tests, including state and federal accountability, grade promotion, English proficiency, program evaluation, teacher evaluation, diagnostics, end-of-year predictions, and fulfilling the requirements of specific grants.

They also found a lot of overlap, with some of these tests collecting nearly the same information.

Read the entire post here.

Value-added modeling is very, very tricky

From NPR Ed, A Botched Study Raises Bigger Questions:

Both student growth measures and value-added models are being adopted in most states. Education secretary Arne Duncan is a fan. He wrote on his blog in September, “No school or teacher should look bad because they took on kids with greater challenges. Growth is what matters.” Joanne Weiss, Duncan’s former chief of staff, told me last month, “If you focus on growth you can see which schools are improving rapidly and shouldn’t be categorized as failures.”

But there’s a problem. The math behind value-added modeling is very, very tricky. The American Statistical Association, earlier this year, issued a public statement urging caution in the use of value-added models, especially in high-stakes conditions. Among the objections:

• Value-added models are complex. They require “high-level statistical expertise” to do correctly;
• They are based only on standardized test scores, which are a limited source of information about everything that happens in a school;
• They measure correlation, not causation. So they don’t necessarily tell you if a student’s improvement or decline is due to a school or teacher or to some other unknown factor;
• They are “unstable.” Small changes to the tests or the assumptions used in the models can produce widely varying rankings.

Read the entire article here.
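
To make that last objection concrete, here is a toy simulation of my own (a minimal sketch in Python, not the math behind any actual state’s value-added model). It assumes two equally effective teachers, 25 students each, and tests with a plausible amount of measurement error, and it counts how often pure chance ranks one teacher below the other.

```python
# Toy illustration of the ASA's "instability" objection -- NOT any state's
# actual value-added model, just a sketch under simple assumptions.
import numpy as np

rng = np.random.default_rng(0)

def estimated_value_added(true_effect, class_size, test_noise_sd):
    """Estimate a teacher's 'value added' as the class's mean gain score."""
    prior = rng.normal(500, 80, class_size)                 # incoming achievement
    growth = true_effect + rng.normal(0, 20, class_size)    # real learning gain
    pretest = prior + rng.normal(0, test_noise_sd, class_size)            # noisy test
    posttest = prior + growth + rng.normal(0, test_noise_sd, class_size)  # noisy test
    return np.mean(posttest - pretest)

# Two teachers with the SAME true effect, so any ranking gap is pure noise.
trials = 1000
flips = 0
for _ in range(trials):
    teacher_a = estimated_value_added(true_effect=10, class_size=25, test_noise_sd=30)
    teacher_b = estimated_value_added(true_effect=10, class_size=25, test_noise_sd=30)
    flips += int(teacher_a < teacher_b)

print(f"Teacher A ranked below Teacher B in {flips / trials:.0%} of simulated years,")
print("even though both teachers are equally effective.")
```

Real value-added models adjust for far more than this sketch does, but the instability the ASA flags comes from the same place: noisy tests and relatively small numbers of students per teacher.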

Using the FIT Teaching™ Framework for Successful Teacher Evaluations

The FIT Teaching™ framework, a coherent approach designed for schools and districts, ensures that high-quality teaching and learning occur in every classroom, every day. Based on the work of Douglas Fisher and Nancy Frey, FIT Teaching is both a tool for teachers to ensure success for every learner and a resource for supervisors to conduct successful observations and evaluations that support instructional growth. Fisher and Frey have developed a clear and thoughtful framework that, when consistently implemented, results in success for all. Using FIT Teaching, teachers can show continuous growth in a high-stakes evaluation process; more important, students are provided the opportunity to thrive and achieve.

Download a FIT Teaching white paper that describes the framework and aligns it to the five major teacher evaluation models (Danielson, Marshall, Marzano, McREL, Stronge). Access the paper from the ASCD website here.

Join me for a FIT Teaching workshop in sunny La Jolla, California, on December 2-3, 2014. Click for information.

Ensuring Equitable Access to Effective Educators

The US Department of Education recently released a guidance document in an attempt to influence educator effectiveness reform nationwide.

In the NY Times article U.S. to Focus on Equity in Assigning of Teachers:

… states must develop plans by next June that make sure that public schools comply with existing federal law requiring that “poor and minority children are not taught at higher rates than other children by inexperienced, unqualified or out-of-field teachers.”

…In an increasingly rare show of agreement with the Obama administration, Randi Weingarten, the president of the American Federation of Teachers, the country’s second largest teachers’ union, welcomed the guidance.

“We’re supporting this process because the rhetoric around this process has changed from ‘Just come up with the data and we will sanction you if the data doesn’t look right,’ ” Ms. Weingarten said in a telephone interview, “to ‘What’s the plan to attract and support and retain qualified and well-prepared teachers for the kids who need it most.’ ”

But other education advocates said they were concerned that the guidance could lack teeth. “The very real risk is that this just becomes a big compliance paperwork exercise,” said Daria Hall, K-12 policy director at the Education Trust, a nonprofit group that advocates for racial minority students and low-income children, “and nothing actually happens on behalf of kids.”

From Edweek’s States Must Address Teaching Gaps:

 …Key takeaways:

  • At a minimum, state plans have to consider whether low-income and minority kids are being taught by inexperienced, ineffective, or unqualified teachers at a rate that’s higher than other students in the state. That’s not really a new or surprising requirement: It’s something that states were supposed to have been doing for the past 12 years under NCLB, which was signed into law in 2002.
  • States aren’t required to use any specific strategies to fix their equity gaps. They can consider things like targeted professional development, giving educators more time for collaboration, revamping teacher preparation at post-secondary institutions, and coming up with new compensation systems.
  • States have to consult broadly with stakeholders to get a sense of the problem and what steps should be taken to address it.
  • States also have to figure out the “root causes” of teacher distribution gaps, and then figure out a way to work with districts to address them. For instance, if a state decides that the “root cause” of inequitable teacher distribution is lack of support and professional development for teachers, it would have to find a way to work with institutions of higher education and other potential partners to get educators the help they need, by hiring mentors or coaches, for example. States can consider the “geographical” context of districts when making these decisions. (In other words, states may want to try a different set of interventions on rural schools as opposed to urban and suburban schools.)

Huffington Post (in What the White House is Doing to Make Sure Low-Income Students Get Good Teachers) adds:

 “The guidance released here — it’s honestly pretty fluffy, it’s just a non-binding plan,” Chad Aldeman, associate partner at the nonprofit Bellwether Education Partners, told The Huffington Post.

And the Washington Post, in Trying to Get Better Teachers into Nation’s Poor Classrooms, concludes:

Daniel A. Domenech, executive director of the American Association of School Administrators, said the move by the Obama administration is well-intentioned but will have little impact.

“Effective teachers tend to be attracted to districts that pay higher salaries and have what might be referred to as better working conditions,” he said. “This just ignores the whole question of poverty. There seem to be blinders on the part of our policymakers in that they refuse to acknowledge the impact of poverty on our educational system.”

Take Ownership of Your Observations

This article appears in the November issue of the NJEA Review.

Sweaty palms, tongue-tied, worried about every gesture, every word … that’s often how teachers feel during a classroom observation. Sure, it’s easy to say, “Don’t worry—just go about your normal teaching routine!” But we all know that the stakes are high. We all want to put our best foot forward, especially when the boss is watching.

And that’s really the problem—the idea that someone is watching, scrutinizing our every move. In reality, observations done by a skilled practitioner can produce extremely useful data that can help teachers improve in this incredibly demanding profession.

To turn the observation from an inspection into a useful data collection opportunity, both teachers and observers have to make sure that a few key things happen:

  • Teachers should try to relax and continue doing the good job they always do. Putting on a “show” for the sake of an observation is often a recipe for disaster. Even if you think you can pull it off, there’s bound to be one student who announces, “But we never do it this way!”
  • Observers need to continually hone their craft and become skilled at data collection. This requires broadening one’s perspective when in a classroom, tuning in to the nuances of a teacher’s language and instructional moves, listening to student conversations, and noting as much as possible.

Once an observation is complete, the data should be shared. Teachers, if you aren’t receiving the data, ask for it. That’s helpful information, because we all know that in the heat of the moment, it’s hard to remember exactly what happened during a lesson. Good, objective data can be a valuable tool and should be used as just that.

Once you receive the observation data, analyze its objectivity.

  • Is the data set a record of statements and/or actions made by teachers and students?
  • Is the data set objective in terms of its description, or is judgment or bias creeping in?
  • Is the data set complete, or were important aspects of the lesson not captured?

Remember that the data collection job is not easy. Because it’s impossible for an observer to catch everything, teachers should contribute to the data set. For example, skilled teachers formatively assess in unobtrusive ways. They make adjustments to their lessons based on student responses, and they don’t often announce that they are departing from their plans. It is challenging for an observer to capture these subtle and skilled instructional moves, so be sure to share that information in an effort to round out the data set.

Look for the word “engagement” within the observation data. Noting that students are engaged is not an objective statement. It’s a judgment call, more than likely based on some observed student actions. Teachers should discuss this with observers and request a more thorough data collection that indicates how the observer arrived at that conclusion. Sharing student work often provides a more accurate depiction of student engagement levels.

I can tell you not to worry—but I know you probably will. The observation process produces anxiety, no matter our level of experience. However, we can try to lessen that anxiety by taking ownership of the process. Participate fully, know the rubrics, contribute data and lead the conversation about instructional practice.

Join me at the NJEA Convention

Nuances of Working with the Danielson Model for Teacher Evaluation

Thursday: 1 – 2:30 p.m.  Room 303
Friday: 1 – 2:30 p.m.  Room 303

The Danielson Framework for Teaching describes a set of knowledge and skills that can be used to help teachers achieve high standards of professional teaching practice. However, a cursory knowledge of the model is insufficient for success. This session will focus on the observable components, highlighting the often confusing differences among them, so that teachers can develop strategies for pushing their practice and striving for highly effective instruction.

Teacher Evaluation — Behind the Scenes Work (Professional Responsibilities)

Thursday: 3 – 4:30 p.m.  Room 303
Friday: 9:30 – 11 a.m.  Room 303

No matter which teacher practice evaluation instrument your district uses, all of the models include standards dealing with Professional Responsibilities (Stronge Standard 6, McREL Standard 1, Marshall Standard F, Marzano Domains 3 and 4, Danielson Domain 4). These are typically “unobservable,” as they describe teachers’ work outside of their interactions with students. This session will explore processes teachers can use to gather and analyze data around their professional practice.

NJEA Convention site.