The best 2-minute explanation of the difference between weather and climate change from Neil deGrasse Tyson. Perfect for showing students (or adults, for that matter):
Brilliant website on interpreting data … and a great explanation here:
My favorite line: “statistical data can show correlations and then it’s up to us as rational thinkers to establish whether there’s actually a connection between those variables or whether it’s merely a coincidence”
Visit the website here.
In a large-scale analysis of new evaluation systems that use student test scores as one element in rating teachers, Morgan Polikoff (University of Southern California) and Andrew Porter (University of Pennsylvania) found little or no correlation between measures of teaching quality and teachers' value-added ratings.
Under Race to the Top, the number of states using teacher evaluation systems based in part on student test scores has increased dramatically over the past five years. Many of those states are using the systems to make high-stakes decisions about hiring, firing, and compensation.
According to Polikoff and Porter:
Low correlations raise questions about the validity of high-stakes (e.g., performance evaluation) or low-stakes (e.g., instructional improvement) inferences made on the basis of value-added assessment data … the results suggest challenges to the effective use of VAM data. At a minimum, these results suggest it may be fruitless for teachers to use state test VAMs to inform adjustments to their instruction. Furthermore, this interpretation raises the question—If VAMs are not meaningfully associated with either the content or quality of instruction, what are they measuring?
Before moving forward with new high-stakes teacher evaluation policies based on multiple-measures teacher evaluation systems, it is essential that the research community develops a better understanding of how state tests reflect differences in instructional content and quality.
…this study contributes to a growing literature suggesting state tests may not be up to the task of differentiating effective from ineffective (or aligned from misaligned) teaching.
At the very least, these findings indicate a need to slow these implementations down. At best, they suggest (what we’ve known all along): student test scores cannot be meaningfully used to evaluate teachers. Read the entire report here.
The largest archive of history on YouTube. Follow the 20th century and dive into the good and the bad times of the past. Explore more than 80,000 videos of filmed history.
Subscribe to British Pathé here.
(Cross posted at NJEA.org)
No matter which teacher practice evaluation instrument a district is using, now is the time of year when educators are taking a look at the standard or domain dealing with Professional Responsibilities (Standard 1 in McREL, Domain 4 in Danielson, Domains 3 and 4 in Marzano, and Standard 6 in Stronge).
Causing anxiety for supervisors and teachers alike, this is the “backstage” work of teaching — very little of it can be seen when one visits a classroom to conduct walk-throughs or observations. This is an evaluation area dealing with participating in the professional community, leading and collaborating, and practicing in an ethical manner. For many years, teachers have been evaluated on these criteria in a binary fashion: satisfactory or not. Now, state legislation requires the criteria to be examined and rated on (minimally) a 4-level rubric.
The problem is that most evaluation models are rather generic when it comes to describing a teacher's professional responsibilities. In schools where the rubrics have not been further developed to provide concrete local exemplars of effective and highly effective practice, both supervisors and teachers may be unsure about what constitutes enough data for analysis and exactly what those data represent.
Here are a few DOs and DON’Ts for both teachers and evaluators to keep in mind:
- DON’T make the Professional Responsibilities Standard all about collecting lots and lots of artifacts. This leads to “shopping bag” syndrome: given so little guidance, teachers throw massive numbers of documents into shopping bags to bring to their end-of-year conferences. Or worse, supervisors confuse highly effective practice with enormous quantities of paper.
- DO select several thoughtful and meaningful examples of professional responsibilities that represent a pattern of practice throughout the year and consider how they positively impact student learning experiences.
- DON’T make ratings of “highly effective” unattainable. Keep in mind that the rubrics for most models require extensive practices, demonstrations of leadership, and meaningful contributions in order to earn a highly effective rating.
- DO keep these standards in perspective. Both “effective” and “highly effective” professional practices result in positive learning environments for students.
It’s important to keep in mind that a tremendous number of teachers put forth mighty efforts on behalf of their students. When it comes time to evaluate their professionalism, those teachers’ efforts should be acknowledged and honored.
From Google gurus Eric Schmidt and Jared Cohen:
The most important pillar behind innovation and opportunity — education — will see tremendous positive change in the coming decades as rising connectivity reshapes traditional routines and offers new paths for learning. Most students will be highly technologically literate, as schools continue to integrate technology into lesson plans and, in some cases, replace traditional lessons with more interactive workshops. Education will be a more flexible experience, adapting itself to children’s learning styles and pace instead of the other way around. Kids will still go to physical schools, to socialize and be guided by teachers, but as much, if not more, learning will take place employing carefully designed educational tools …