The federal government is increasingly looking for strategies to identify promising social programs for broad-scale rollout. Recently, there has been a movement toward requiring demonstrated effectiveness in a rigorous impact evaluation (usually one using random assignment) as a precondition for broader rollout of a program. However, such rigorous impact evaluations have long timelines and high costs. Furthermore, success rates for programs subjected to such rigorous impact evaluation are very low.
There appear to be two strategies for increasing the success rate: (i) screen out programs that are likely to fail rigorous impact evaluation; or (ii) improve programs so that they are more likely to pass rigorous impact evaluation. However, if only rigorous impact evaluation can establish whether a program is truly effective, how can we implement these strategies? A recent article in Evaluation Review, coauthored by Abt Associates Senior Fellow Jacob Alex Klerman and Diana Epstein of the American Institutes for Research, provides a constructive answer to this question. The article proposes an approach based on what the authors call a "falsifiable logic model," an extension of the conventional logic model commonly used in program development.