A Science-based EdTech Evidence Evaluation Routine

On the long journey from a classroom idea to a scalable app, what's the true measure of success? For EdTech startup founders, many of whom build solutions out of a desire to make a difference to children's learning, this question can be both perplexing and pivotal.

Picture this: you've meticulously crafted an app that teaches phonics, born from your own classroom experience of what children need. You've invested personal funds, tested the app in focus groups with children, and refined it through a series of A/B tests, and now you're gearing up to scale. But as you approach venture capitalists and government grant programs for support, you encounter a recurring question: "Where's the evidence that your app makes a positive impact on learning?" In other words, what are its actual effects in terms of established educational metrics?
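
To make "established educational metrics" concrete: funders often ask for a standardized effect size such as Cohen's d, which expresses a learning gain relative to the natural spread of scores. Here is a minimal sketch of that calculation; the post-test scores below are invented purely for illustration, not data from any real study.

```python
import numpy as np

# Hypothetical post-test scores (out of 100) for children who used the
# phonics app (treatment) versus a comparison group (control).
# These numbers are made up for illustration only.
treatment = np.array([72, 68, 75, 80, 71, 77, 69, 74])
control = np.array([65, 70, 62, 68, 66, 71, 64, 67])

# Cohen's d: the difference in group means divided by the pooled
# standard deviation. Conventionally, d around 0.2 reads as a small
# effect, around 0.5 as medium, and around 0.8 as large.
n1, n2 = len(treatment), len(control)
pooled_var = ((n1 - 1) * treatment.var(ddof=1) +
              (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
d = (treatment.mean() - control.mean()) / np.sqrt(pooled_var)
print(f"Cohen's d = {d:.2f}")
```

Of course, an estimate like this only carries evidential weight when it comes from a well-designed study with a credible comparison group, which is precisely what funders probe for.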

The post-COVID era has brought into focus a stark reality: many EdTech tools once hailed as breakthroughs have fallen short of their promises. Reports have documented negative impacts on learning outcomes and the mishandling of children's data. In response, educators and researchers are demanding rigorous, scientifically grounded testing before EdTech reaches the hands of children. Positive reviews from a few enthusiastic early adopters are no longer sufficient.

Beyond the Buzzwords

So, what does "evidence" truly mean? Is it about impact on teachers, parents, students, or the whole community? The complexity deepens when you ask which groups of children benefit the most. In such a dynamic landscape, a one-size-fits-all framework falls short. EdTech's pace of evolution demands a living, adaptable approach to evidence, one that keeps aligning with advances in both technology and science. This is the essence of the EdTech evidence movement: a vision for an EdTech industry that enhances all children's learning through better technology.

Yet, in practice, making this work hinges on incentives. When funders put money behind evidence evaluation, companies are motivated to pursue it. Government requirements, such as the ESSA evidence standards in the United States, likewise push companies to demonstrate their impact. Without such external pressure, it is left to companies themselves to value science-backed impact.

Our new paper introduces the "EVER routine" to support this process. EVER builds on founders' motivations and orientation towards evidence, as well as on existing evaluation frameworks. It intentionally maintains a broad scope, allowing EdTech founders, developers, investors, researchers, and auditors to adapt it to their needs.

Several stakeholders, among them governments, venture capitalists, research groups, international organizations, and clearinghouses, are forging their own frameworks, each setting its own evaluation criteria. The result could soon be an evidence industry cluttered with competing frameworks, rubrics, and criteria, causing confusion. EVER is not another rubric but a flexible routine, designed to stay in sync with these stakeholders, scientists included.

While EdTech products vary in the outcomes and markets they target, the core of EVER's evaluation routine remains consistent: how does your technology create a positive impact on children and the world, whatever evidence measures are used? To answer that question, EVER promotes a variety of methods, bridging the rigorous randomized controlled trials (RCTs) of the health sciences with human-centered qualitative assessments suited to EdTech. Startups can thus navigate international evidence demands, local variations, and funder requirements while using EVER as an umbrella framework that guides their evidence efforts.
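
To give a sense of the RCT end of that methodological bridge, here is a minimal sketch of the power calculation an RCT plan typically starts from, using the statsmodels library; the target effect size of d = 0.3 is an assumption chosen purely for illustration.

```python
from statsmodels.stats.power import TTestIndPower

# Sample size for a two-arm RCT: how many children per arm are needed
# to detect a hypothetical effect of d = 0.3 at a 5% significance
# level with 80% power? Substitute whatever effect size your pilot
# data or the prior literature suggests.
n_per_arm = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"About {n_per_arm:.0f} children per arm")  # roughly 175 per arm
```

Numbers like this also explain why credible impact evidence is expensive: detecting modest learning gains reliably requires hundreds of participants.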

EVER has been published openly in a leading learning sciences journal in the Nature Portfolio, offering users the confidence of a scientifically supported evaluation routine.