Value-added modeling is very, very tricky

From NPR Ed, “A Botched Study Raises Bigger Questions”:

Both student growth measures and value-added models are being adopted in most states. Education Secretary Arne Duncan is a fan. He wrote on his blog in September, “No school or teacher should look bad because they took on kids with greater challenges. Growth is what matters.” Joanne Weiss, Duncan’s former chief of staff, told me last month, “If you focus on growth you can see which schools are improving rapidly and shouldn’t be categorized as failures.”

But there’s a problem. The math behind value-added modeling is very, very tricky. The American Statistical Association, earlier this year, issued a public statement urging caution in the use of value-added models, especially in high-stakes conditions. Among the objections:

• Value-added models are complex, requiring “high-level statistical expertise” to do correctly;
• They are based only on standardized test scores, a limited source of information about everything that happens in a school;
• They measure correlation, not causation, so they don’t necessarily tell you whether a student’s improvement or decline is due to a school or teacher or to some other unknown factor;
• They are “unstable”: small changes to the tests or to the assumptions used in the models can produce widely varying rankings (the toy simulation after this list makes that concrete).
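
To see why the “unstable” objection bites, here is a minimal sketch of one common flavor of value-added estimate: regress students’ post-test scores on their pre-test scores, and rank teachers by their students’ mean residual. Everything here is made up for illustration (the class sizes, the noise levels, the size of the true teacher effects); it is not the model from the botched study or from any state’s actual system. Re-scoring the very same classrooms with a different draw of test-measurement noise reshuffles the rankings noticeably:

```python
# Toy value-added model: rank teachers by their students' mean residual
# from a pre-test -> post-test regression, then re-run with a different
# (equally valid) test form and see how much the rankings move.
# All parameters below are assumptions chosen for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_teachers, n_students = 50, 25                       # 25 students per class

ability = rng.normal(0, 1, (n_teachers, n_students))  # latent student ability
effect = rng.normal(0, 0.1, n_teachers)               # true teacher effects

def rank_teachers(noise_sd, seed):
    """Score a noisy pre-test and post-test, fit post ~ pre by least
    squares, and rank teachers by mean residual ('value added')."""
    r = np.random.default_rng(seed)
    pre = ability + r.normal(0, noise_sd, ability.shape)
    post = ability + effect[:, None] + r.normal(0, noise_sd, ability.shape)
    slope, intercept = np.polyfit(pre.ravel(), post.ravel(), 1)
    residuals = (post - (slope * pre + intercept)).mean(axis=1)
    return residuals.argsort().argsort()              # 0 = lowest-ranked

# Same classrooms, same true teacher effects -- only the test noise differs.
ranks_a = rank_teachers(noise_sd=0.5, seed=1)
ranks_b = rank_teachers(noise_sd=0.5, seed=2)
print("mean rank shift across 50 teachers:",
      np.abs(ranks_a - ranks_b).mean())
```

With the numbers assumed above, the measurement noise in a 25-student classroom average is about the same size as the true teacher effects, so a teacher’s rank can swing substantially between two equally valid test forms, exactly the instability the ASA is warning about.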

Read the entire article here.
