Policymakers and program implementers want real-time information about their programs to learn whether those programs are working. This information should be of high scientific quality and produced quickly, allowing for rapid program adjustments and improvements on a continuous basis. The best research methods can deliver this type of information.
Implementing a “randomized controlled trial” (or RCT) is the gold standard for understanding whether program adjustments are producing the desired effects. Just because two things seem related, we cannot assume that one caused the other; RCTs disentangle correlation from causation, and can do so rapidly if cleverly designed, in a matter of months rather than years.
For example, suppose some of the offices in a Temporary Assistance for Needy Families agency move from a vendor providing job search assistance to agency staff providing that service. After a short period of time, job placement rates go up. But at the same time, the local labor market improves and sanctions are applied to the backlog of clients who are out of compliance with their work requirements. Did job placements increase because of the in-house service delivery model, or because of these other factors? If the agency moves from vendor to in-house provision in its other offices, leadership may be surprised when they don’t get the same results. Only once it is too late to turn back do administrators recognize that they did not correctly identify what made the difference.
How RCTs Can Help Us Learn What Works
What could have been done to avoid this mistake? The agency could have conducted a short-term randomized trial, assigning some recipients to agency staff for job search assistance while leaving others with the vendor. This is similar to a clinical trial comparing medical treatments: vary the treatment randomly among patients while holding everything else constant.
It is vital not to introduce any other changes that could cause differences in job placement rates. It is also important that external conditions, such as the local labor market in the example above, be the same for the two groups, something random assignment ensures on average. Because nothing differs systematically between the two groups except the program approach each receives, subsequent differences in outcomes can confidently be attributed to that contrast.
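A toy simulation can make this logic concrete. The sketch below (all numbers, group sizes, and the 5-point treatment effect are hypothetical, invented purely for illustration) randomly splits a roster of recipients into two groups and compares placement rates. Because both groups face the same simulated labor market, any gap beyond chance reflects the service-delivery contrast:

```python
import random

random.seed(42)  # fix the seed so the illustration is reproducible

# Hypothetical roster of 1,000 recipients, identified only by number
recipients = list(range(1000))

# Random assignment: shuffle the roster, then split it in half
random.shuffle(recipients)
in_house_group = set(recipients[:500])  # served by agency staff
vendor_group = set(recipients[500:])    # served by the vendor

def placed(recipient_id):
    """Simulate whether a recipient finds a job."""
    base_rate = 0.30  # shared external conditions (the labor market)
    effect = 0.05 if recipient_id in in_house_group else 0.0  # assumed effect
    return random.random() < base_rate + effect

outcomes = {r: placed(r) for r in recipients}

rate_in_house = sum(outcomes[r] for r in in_house_group) / len(in_house_group)
rate_vendor = sum(outcomes[r] for r in vendor_group) / len(vendor_group)

print(f"In-house placement rate: {rate_in_house:.1%}")
print(f"Vendor placement rate:   {rate_vendor:.1%}")
print(f"Estimated effect:        {rate_in_house - rate_vendor:+.1%}")
```

The key design choice is that assignment depends only on the shuffle, never on anything about the recipient, which is what keeps the two groups comparable in expectation.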
All of this can happen quickly if three inputs undergird the approach:
- Advance thinking and planning for what innovation to test next, even as the previous test is just getting underway;
- Clear identification of one or two short-term indicators adequate to judge success; and
- Quick access to data on those indicators.
Once these principles become embedded features of how an organization innovates, new trials can be brought online regularly to accelerate learning and advance an agency’s mission. Systematic testing may add stress for busy program staff, and experimentation carries risks (e.g., you may not get the results you are hoping for). But programs that have gradually developed cultures supporting rapid cycle evaluation have been able to learn what works quickly and make continuous program improvements.
For more about using RCTs to learn quickly about program improvements, visit:
Rapid Cycle Evaluations
Read a related post:
The Need for Speed: How Rapid Cycle Evaluation Can Help Your Organization Excel