Abt Associates and partners helped federal education innovation grantees learn from their successes and failures by reporting findings for all of their original evaluation research questions.
In the last decade, Abt Associates has facilitated the design and execution of more than 200 evaluations examining the effectiveness of innovative programs in education.
This includes, notably, Investing in Innovation (i3), a signature U.S. Department of Education-funded competitive grant program created in 2010 that funds school districts’ and nonprofit organizations’ efforts to improve student outcomes.
To date, the i3 program has awarded more than $1.4 billion to 172 grantees to develop, validate, or scale up evidence-based education programs. In addition, each i3 grantee is required to fund an independent evaluation to document the implementation and outcomes of its project.
Abt Associates and its partners — Chesapeake Research Associates, ANALYTICA, Century Analytics, Measured Decisions Inc., and Westat — have helped the evaluators of i3-funded programs design and implement high-quality evaluations. We also summarized the evaluations’ strength of evidence and findings to learn which programs, practices, and approaches held the most promise for improving student outcomes.
Nearly All i3 Grantees Report All Findings
Generally, evaluators and funders place more value on positive findings – results that show a particular intervention is effective. But looking only at the positive findings doesn’t tell a program’s entire story, said Beth Boulay, leader of Abt’s i3 evaluation technical assistance and a principal associate for Social and Economic Policy at Abt.
“To get a better picture of each innovation, we asked the i3 grantees to specify key aspects of their research questions in advance of each study,” Boulay said. “And they responded. The grantees provided answers for 97 percent of the research questions posed.”
Boulay said the null and negative findings gave grantees significant food for thought.
For example, the Achievement Network (ANet) – an organization dedicated to increasing educational equity in America – found that its approach of using standards-based planning and interim assessments of students was not as effective in some schools as in others.
ANet dug deep into the evaluation data and identified “readiness conditions” that distinguished schools that benefited from ANet from those that did not. The lesson? Schools, just like students, are not all the same and require customized approaches.
“These findings are helpful to many organizations working to help teachers and schools improve outcomes for kids, and are a terrific example of what we can learn when we don’t just focus on positive findings,” Boulay said.