Essential Research Questions for School Boards


By Douglas B. Reeves
April 2012

Every board member has heard the claims “Research shows that ...” and “Studies prove that ...” But how do educational leaders and policymakers know if the research is sufficient to validate a policy decision?

Here are four essential questions every board member and educational leader should ask when confronted with research claims.

WHAT IS THE INDEPENDENT CONFIRMATION?

Even the best studies require independent confirmation, and leaders should consider five levels of independence when evaluating research.

First, there should be different researchers who are genuinely independent of one another. It is particularly compelling when I find evaluations from two or three researchers who are independent and actually competing with one another. Their incentive might be to find fault with one another’s work, but when their findings reinforce one another, you have solid evidence of independent confirmation.

Second, their studies must gather information independently. Some researchers use random samples while others use “samples of convenience” -- the schools, students, and teachers that happened to be available at the time. Both types of samples are subject to error, and the key is to find instances in which multiple samples confirm the same findings.

Third, researchers should use different methods. My prejudice favors quantitative analysis, but other researchers find great value in case studies, qualitative methods, meta-analysis (studies of studies), or even meta-analyses of meta-analyses. Researchers can quibble about the merits of their preferred methods, but leaders and policymakers are best served by considering the preponderance of the evidence -- just as a member of a jury might weigh different testimonies.

In a court case, you might have eyewitness testimony, statistical evidence, forensic evidence, and expert testimony. No single witness may carry the day; it is the synthesis of all the best available evidence that persuades a jury, and the same is true when weighing multiple sources of evidence from education researchers.

Fourth, consider evidence from different places. Be skeptical of research that fails to provide specifics about its subjects beyond saying, “This is an anonymous school system.” Was it urban, rural, or suburban? East Coast or West Coast? Ideally, studies that come to similar conclusions should draw their subjects from a variety of places.

Fifth, consider different student populations. If a research claim is true for schools with a 50 percent poverty level, what about schools with a 90 percent or a 10 percent poverty level? What about schools where 90 percent or 10 percent of students are English language learners? The more diverse the student populations in which a finding holds, the more likely it is that a claim about a specific teaching or leadership intervention reflects the intervention itself rather than the students’ demographic characteristics.

LABELS OR IMPLEMENTATION?

Too much research depends on labels, such as Professional Learning Communities (PLCs), Differentiated Instruction (DI), or Response to Intervention (RtI). When evaluating competing research claims, leaders and policymakers must distinguish such labels from actual implementation.

For example, four schools may claim to be “doing PLCs,” but they might be implementing professional learning communities at vastly different levels of depth. Some have only a meeting labeled “Professional Learning Community” that serves as a catch-all for every agenda in the school, while others focus their PLC activities on specific learning, teaching, and leadership goals.

If research focuses only on labels (a school either adopts a program or it doesn’t), it will lack the nuance necessary to inform policy decisions.

ADMISSION OF MISTAKES?

Beware the researcher who is always right. It is tempting for leaders and policymakers to be seduced by the Svengali researcher whose every move seems prescient. A far better test is to ask, “When have you been wrong?”

Every credible researcher -- bar none -- can point to examples of misguided hypotheses and misdirected conclusions. The best researchers will have published those failures and acknowledged them before peers and the public. The scientific maxim is that we learn more from error than from uncertainty.

Credible researchers make mistakes and admit them. Discreditable researchers claim perfection and are most fond of making predictions about the future that are immune from empirical testing.

RECORD OF REPLICATION?

Success stories are great, but the ones that are replicable have far more value. The research literature is littered with personal narratives of the heroic teacher or principal who succeeded against all odds. That’s heartwarming, but of little value to the board member who asks, “How do I know this will work for me?”

The only answer to that appropriate challenge is a record of replication -- evidence that the same idea has been applied in different places and under different circumstances with consistent success.

We know research is not perfect. That’s true not only for educational research, but for research in medicine, marketing, and economic policy. The best anyone can say is, “We have looked at many different research claims, and these are the best -- though still not perfect.”

Education leaders and policymakers would be well advised to abandon the search for perfection and instead follow research that meets these four criteria: independence, implementation, admission of error, and replication.
