The End of Average

Reading The End of Average by Todd Rose, a fascinating book that argues that standards and standardized assessments are radically outdated.

“Contemporary pundits, politicians, and activists continually suggest that our educational system is broken, when in reality the opposite is true. Over the past century, we have perfected our educational system so that it runs like a well-oiled Taylorist machine, squeezing out every possible drop of efficiency in the service of the goal its architecture was originally designed to fulfill: efficiently ranking students in order to assign them to their proper place in society…” (p. 56)

“How can a society predicated on the conviction that individuals can only be evaluated in reference to the average ever create the conditions for understanding and harnessing individuality?” (p. 58)

“… but once you free yourself from averagarian thinking, what previously seemed impossible will start to become intuitive, and then obvious.” (p. 72)

Try Some PISA Questions …

The Programme for International Student Assessment (PISA) is a triennial international survey which aims to evaluate education systems worldwide by testing the skills and knowledge of 15-year-old students. Try your hand at some of the questions here. Some of them might surprise you. Are we preparing our students for these kinds of tests?


Philadelphia steps in the right direction …

Calling upon the School District of Philadelphia and the School Reform Commission to analyze the financial and human impact of standardized testing, to identify strategies to minimize its use, and to request a waiver of the Keystone Exams from the Commonwealth of Pennsylvania in order to adopt assessments that better serve local needs and priorities.

Read the resolution on the City of Philadelphia City Council website.

Testing: Too much, too far, too fast

In the NY Times:

“This is the proverbial perfect storm of testing that has hit not only Florida but all the states,” said Alberto M. Carvalho, the influential superintendent of Miami-Dade County Schools, the fourth-largest district in the country, who was named the 2014 national superintendent of the year. “This is too much, too far, too fast, and it threatens the fabric of real accountability.”

Read States Listen as Parents Give Rampant Testing an F

How Much Testing Is Enough?

NPR Ed:

…the Council of Chief State School Officers and the Council of the Great City Schools announced the initial results of an attempt to quantify the current state of testing in America.

Their survey of large districts showed students taking an average of 113 standardized tests between pre-K and grade 12, with 11th grade the most tested.

Another recent study by the Center for American Progress looked at 14 school districts. It found that students in grades 3-8 take an average of 10 standardized assessments per year, and up to a high of 20. That doesn’t count tests required of smaller groups of students, like English-language learners.

What may be a little trickier is defining just which tests qualify as “unnecessary.” The CCSSO survey describes testing requirements that have seemingly multiplied on their own without human intervention, like hangers piling up in a closet.

They found at least 23 distinct purposes for tests, including state and federal accountability, grade promotions, English proficiency, program evaluation, teacher evaluation, diagnostics, end-of-year predictions, and fulfilling the requirements of specific grants.

They also found a lot of overlap, with some of these tests collecting nearly the same information.

Read the entire post here.

Value-added modeling is very, very tricky

From NPR Ed, A Botched Study Raises Bigger Questions:

Both student growth measures and value-added models are being adopted in most states. Education Secretary Arne Duncan is a fan. He wrote on his blog in September, “No school or teacher should look bad because they took on kids with greater challenges. Growth is what matters.” Joanne Weiss, Duncan’s former chief of staff, told me last month, “If you focus on growth you can see which schools are improving rapidly and shouldn’t be categorized as failures.”

But there’s a problem. The math behind value-added modeling is very, very tricky. The American Statistical Association, earlier this year, issued a public statement urging caution in the use of value-added models, especially in high-stakes conditions. Among the objections:

• Value-added models are complex. They require “high-level statistical expertise” to do correctly;
• They are based only on standardized test scores, which are a limited source of information about everything that happens in a school;
• They measure correlation, not causation. So they don’t necessarily tell you if a student’s improvement or decline is due to a school or teacher or to some other unknown factor;
• They are “unstable.” Small changes to the tests or the assumptions used in the models can produce widely varying rankings.

Read the entire article here.
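
That last objection, instability, is worth sitting with for a moment. The snippet below is a toy simulation, a bare-bones sketch rather than the actual models any state uses, and every number in it (teacher effects, class sizes, noise levels) is invented for illustration. It fits a simple regression of post-test scores on pre-test scores plus teacher indicators, ranks the teachers, and then repeats the whole exercise on a second simulated test administration.

```python
# Toy illustration of value-added "instability": a hypothetical sketch,
# not any state's actual model. All numbers are invented for demonstration.
import numpy as np

rng = np.random.default_rng(seed=1)

n_teachers = 10            # hypothetical teachers to be ranked
students_per_teacher = 25  # hypothetical class size
true_effects = rng.normal(0, 0.05, n_teachers)  # small, closely spaced "true" effects

def rank_teachers():
    """Simulate one test administration, fit a bare-bones value-added model
    (post-test ~ pre-test + teacher indicators), and return teachers ranked best-first."""
    teacher = np.repeat(np.arange(n_teachers), students_per_teacher)
    pretest = rng.normal(0, 1, teacher.size)
    noise = rng.normal(0, 0.5, teacher.size)          # measurement/classroom noise
    posttest = 0.8 * pretest + true_effects[teacher] + noise

    X = np.column_stack([pretest, np.eye(n_teachers)[teacher]])  # design matrix
    coefs, *_ = np.linalg.lstsq(X, posttest, rcond=None)
    estimated_effects = coefs[1:]                     # skip the pre-test coefficient
    return np.argsort(-estimated_effects)             # teacher indices, best first

print("Ranking from test administration 1:", rank_teachers())
print("Ranking from test administration 2:", rank_teachers())
# With teacher effects this small relative to the noise, the two orderings usually differ.
```

Because the simulated teacher effects are small relative to the test noise, the two rankings usually come out noticeably different, which is the heart of the concern about leaning on such rankings for high-stakes decisions.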

Are tests designed for students or for adult convenience?

Daniel Pink:

Are our education policies designed for the convenience of adults or for the education of our children? Take high-stakes testing—it’s easy, it’s cheap, and you get a number, which makes it really convenient for adults, whether they’re taxpayers or policymakers. But is heavy reliance on punitive standardized tests the best way to educate our children? Probably not.

Read the article in ASCD Educational Leadership here.

Seven Key Takeaways From FIT Teaching

Recognize that wrong answers came from somewhere. Dig deeper to find out where they came from. Teachers feel there’s never enough time to remediate when students struggle. But if we can learn to understand the difference between a mistake (when pointed out, a learner knows what to do next) and an error (when pointed out, the learner has no idea what to do next, thus requiring reteaching), then we can spend precious instructional time where it’s most needed. Did a student simply forget to capitalize the first word of the sentence? Or does that student truly not understand punctuation rules? Teachers can maximize their student interaction time when they spend some time analyzing student responses.

Ask students what they are learning, not what they are doing. This seemingly small change in questioning allows a shift in focus that can help teachers better gauge students’ understanding of content. Rather than asking students what they are doing—that is, asking them to explain a task—teachers should ask students what they are learning—that is, asking students to explain the purpose of a task and how they are learning from it. At Health Sciences High and Middle College, where Fisher and Frey teach, staff—and even visitors—regularly ask about learning rather than doing. Try this change in your classroom to see how it shifts the conversation and helps you to better determine your students’ levels of understanding.

Separate compliance from competence. This has huge ramifications for grading practices. Why do we grade every worksheet, homework assignment, or quiz that students turn in? If we grade everything, we are asking students to be compliant (that is, keep up with the work and you’ll get more points). If we focus on students’ mastery of concepts, however, we’ll send an important message: I’m here to help you learn and will only give you a grade when you appropriately demonstrate your competence in this subject.

Automate responses to recurring events. Principals are faced with daily demands that take away from the time they can spend in classrooms focusing on teachers and students. More and more, principals are asking the question, “How can I spend more time focusing on instruction in my building?” The answer is to analyze the systems within your school and create automated responses to recurring events. Buses arriving late? Have a response team ready with a standardized checklist of action items. Cafeteria won’t be open for lunch on time? Plan out the response in terms of personnel and schedule revisions. Planning for automaticity in a system means that principals can spend more time in classrooms focusing on the most important aspect of their job: instructional leadership.

Establish the purpose of a lesson. Determine what students should learn and why they should learn it. One of the most important ways we motivate learners is by establishing a good reason for the learning to take place. Without a clear goal, students rightly perceive their work in school as artificial, and this may lead to compliance or even defiance. If we instruct without a purpose—or fail to convey that purpose to our students—then we shouldn’t be surprised when they don’t meet the learning target.

Get kids to produce language, not just hear it. Encourage collaborative work using academic vocabulary. Teachers need to scaffold and assess language needs so students can better access content. The Common Core, in fact, asks students to use rich academic vocabulary. One of the key ways to support students in learning language and using academic vocabulary is to ask them to produce and practice language more often. Collaborative work time should be used to have students speak and use academic vocabulary. Fisher and Frey recommend that teachers aim to set aside 50 percent of a lesson for collaborative work. While this will not always happen, teachers should always keep in mind that collaborative work, where students are encouraged to communicate and interact, will help students build language skills and increase their use of academic vocabulary.

Remember that the gradual release of responsibility does not have to be linear. Many are familiar with the Gradual Release of Responsibility (GRR) framework, which articulates how responsibility should be turned over from the teacher to the student. Focus lessons (“I do”) and guided lessons (“we do”) place the responsibility on the instructor, while collaborative work (“you do it together”) and independent work (“you do it alone”) put most of the responsibility on the student. There is a common misconception, however, that this framework is linear—that is, that the different types of instruction have to go in order. In fact, good teachers use formative assessment to pick which element of GRR is needed for individual students and differentiate accordingly. Here is an example from Fisher and Frey’s YouTube channel that shows that the GRR framework does not have to be implemented linearly.

As all of these takeaways show, participants at the FIT Teaching Academy were able to dig into the FIT Teaching model and consider how it resonates with them. The Academy was a full three days of amazing conversations about current practices and how FIT Teaching can provide an integrated and streamlined approach to make teaching more responsive to student needs. Participants left energized, motivated, and encouraged.

Cross-posted at ASCD In-Service.