If the Science of Learning is to effectively inform teaching and policy, researchers also need to point out ineffective approaches.

Discussions of education tend to place considerable emphasis on “evidence-based” or “evidence-informed” education. The idea is that educators and policymakers should choose approaches that empirical research has shown to improve student outcomes.

Such evidence is often, though not exclusively, collected by means of rigorous and carefully controlled experimental studies called randomized controlled trials (RCTs), in which students are randomly assigned to one of two groups. Each group uses a different curriculum, and the groups are compared by assessing the students’ performance before and after the curriculum is implemented. For comprehensive reviews of such trials, see the work of the US Department of Education’s What Works Clearinghouse and the UK’s Education Endowment Foundation.
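To make the logic of such a trial concrete, here is a minimal sketch in Python, using invented numbers; the group labels, effect size, and score distributions are assumptions for illustration, not data from any real trial:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate a two-arm trial: 100 students randomly assigned to a
# new curriculum or to business-as-usual (the control condition).
n = 100
assignment = rng.permutation(np.repeat(["new", "control"], n // 2))

# Pre-test scores are comparable across groups by construction;
# post-test scores add a hypothetical 3-point benefit for "new".
pre = rng.normal(70, 10, n)
post = pre + rng.normal(5, 8, n) + np.where(assignment == "new", 3, 0)

# Compare gain scores (post - pre) between the two groups.
gains = post - pre
t, p = stats.ttest_ind(gains[assignment == "new"],
                       gains[assignment == "control"])
print(f"mean gain (new):     {gains[assignment == 'new'].mean():.2f}")
print(f"mean gain (control): {gains[assignment == 'control'].mean():.2f}")
print(f"t = {t:.2f}, p = {p:.3f}")
```

Random assignment is what does the work here: because chance, not choice, determines who gets the new curriculum, any systematic difference in gains can be attributed to the curriculum rather than to pre-existing differences between the groups.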

However, far less attention is paid to educational trends that have been shown not to be effective. Studies are generally considered successful, and therefore worthy of publication and sharing with the wider public, if they demonstrate that a particular curriculum or educational intervention works better than the mainstream approach. Should we be looking only for things that move the needle positively when it comes to education? Do we care too much about “what works”? Are we biased against studies that demonstrate that a new educational approach does not, in fact, work for students?

“In order to improve education, it is important to carefully analyze and publicize educational approaches that prove to be ineffective.”

If you browse through the pages of most scientific journals, you will find far more positive results (showing that one intervention worked better than another) than negative results (showing that a new intervention was no more effective than “business as usual”). This phenomenon is known as “publication bias”: empirical reports of positive results are more likely to be published than reports of negative findings (no effect). As a result, negative findings risk being overlooked altogether.

However, we need to know not only what works, but also what does not. In order to improve education, it is important to carefully analyze and publicize educational approaches that prove to be ineffective. Often new, snappy concepts that appear easy and cost-effective to implement spread like wildfire in schools and rapidly find their way into professional development materials for educators. Once this happens, it is hard to change course. Because of publication bias, studies with small sample sizes and inflated effect sizes often set a trend, as the simulation sketched below illustrates. When larger studies fail to replicate their findings, the results of those subsequent studies are far less likely to reach teachers than the appealing initial study did.
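A small simulation makes this concrete. Suppose a modest true effect and many small studies, of which only the statistically significant ones get “published.” The specific numbers below (true effect of 0.2 standard deviations, 20 students per arm) are assumptions for illustration, but the inflation they produce is the general pattern:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

true_effect = 0.2      # modest true effect, in standard-deviation units
n_per_group = 20       # a small study: 20 students per arm
published = []

# Run 10,000 small studies; "publish" only the positive, significant ones.
for _ in range(10_000):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    t, p = stats.ttest_ind(treated, control)
    if p < 0.05 and t > 0:
        # Observed standardized effect (difference of means; SD is 1).
        published.append(treated.mean() - control.mean())

print(f"true effect:                {true_effect:.2f}")
print(f"mean published effect:      {np.mean(published):.2f}")
print(f"share of studies published: {len(published) / 10_000:.1%}")
```

Because a small study must observe a large difference to reach significance, the studies that clear the publication filter systematically overstate the true effect, which is exactly how an underpowered but eye-catching result can set a trend.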

“Often new, snappy concepts that appear easy and cost-effective to implement spread like wildfire in schools and rapidly find their way into professional development materials for educators.”

There are many recent examples of highly popular educational approaches that have been shown not to work or to produce only negligible effects. These include constructs such as Learning Styles, Working Memory Training, Mindset interventions, and Grit; the list goes on. (For more, see responses to a recent question I posed on Twitter.)

Not only should we stop using ineffective educational approaches; we also need to use the information we gather about them to prevent their proliferation and abuse by those seeking merely to make a profit. Knowing what not to do will “inoculate” educators against ineffective approaches, thereby reducing inefficient use of instructional time and promoting the search for proven alternatives. Working with effective communicators will ensure that educators are made aware of what works and what doesn’t. In addition, we need to ask questions about the size of any effects and recognize that some effects may be ambiguous rather than clear-cut. There is a tendency to cling to initial findings even in the face of strong conflicting evidence.
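One simple way to ask about the size of an effect is to compute a standardized effect size such as Cohen’s d. The sketch below uses invented post-test scores (a hypothetical one-point difference on a ten-point spread) to show how a result can be real yet practically negligible:

```python
import numpy as np

def cohens_d(treated, control):
    """Standardized mean difference (Cohen's d, pooled standard deviation)."""
    nt, nc = len(treated), len(control)
    pooled_var = ((nt - 1) * np.var(treated, ddof=1)
                  + (nc - 1) * np.var(control, ddof=1)) / (nt + nc - 2)
    return (np.mean(treated) - np.mean(control)) / np.sqrt(pooled_var)

# Invented scores: with 500 students per arm, a tiny effect can still
# reach statistical significance while remaining educationally trivial.
rng = np.random.default_rng(2)
treated = rng.normal(71, 10, 500)
control = rng.normal(70, 10, 500)
print(f"d = {cohens_d(treated, control):.2f}")  # around 0.1: detectable, but tiny
```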

“Knowing what not to do will ‘inoculate’ educators against ineffective approaches, thereby reducing inefficient use of instructional time and promoting the search for proven alternatives.”

Unsuccessful replications of prominent effects may be disappointing at first, but attending to and acting on such results will allow us to design more efficacious, evidence-informed approaches. Understanding what doesn’t work will tell us how not to spend instructional time and which products should not receive public funding.
