Governments and non-profits around the world are looking for ways to identify social programs that work and replicate them broadly. The question of how to assess these programs – and the latest methods for doing so – was at the heart of a day-long forum in Washington, D.C., hosted by Abt Associates and the Association for Public Policy Analysis and Management (APPAM).
The forum, held April 24, explored social experiments in practice and looked at how recent methodological advances can address real-world issues in the United States and abroad. Speakers examined how and when experimental evaluations can help program administrators and policy-makers determine whether a particular policy worked, which elements were effective, and how to ensure that evaluation results are relevant for policy and applicable in the right settings. The event drew attendees from government, academia and think tanks.
Naomi Goldstein, director of the Office of Planning, Research and Evaluation in the Administration for Children and Families at the Department of Health and Human Services, served as the event’s moderator and underscored the importance of experiments as a “central tool for learning about and improving social policies and programs.”
“A random-assignment experiment allows us to observe not only the outcomes for people who are exposed to a particular intervention or service, but also the outcomes that would have occurred in the absence of that intervention,” she said.
The first session, which included Howard Rolston and Jacob Klerman of Abt Associates and Tom Cook of Northwestern University, addressed why social experiments are a valuable tool to assess policies and the best time to undertake a randomized, rigorous impact evaluation. Rolston, who co-authored the book “Fighting for Reliable Evidence,” outlined lessons learned in overcoming barriers in nearly five decades of social experiments.
Abt Senior Fellow and Evaluation Review Editor Jacob Klerman examined when a program is ready to be evaluated and how a process evaluation can be used to confirm that a program is prepared for a rigorous impact evaluation. Cook responded to these two presentations with a discussion of alternative quasi-experimental evaluation designs, urging researchers to better understand the circumstances under which results from non-experimental studies can be reliable.
View the presentations from this session
The second session, which included Rob Olsen of Abt Associates, Elizabeth Stuart of Johns Hopkins University and Demetra Smith Nightingale, Chief Evaluation Officer for the U.S. Department of Labor, explored the limitations of multi-site social experiments in which sites are not nationally representative, a condition that restricts how useful localized studies can be for national policy. Olsen and Stuart discussed design and analytic approaches to address those challenges. Nightingale responded with ideas for how government funders can work with grantees to produce generalizable, policy-relevant evaluation results.
View Rob Olsen’s presentation
View Elizabeth Stuart’s presentation
Read an interview with Rob Olsen on limitations of social experiments conducted in non-randomly selected sites and ways to overcome those limitations
Iqbal Dhaliwal, deputy director of the Abdul Latif Jameel Poverty Action Lab (J-PAL), addressed challenges and solutions in conducting social experiments internationally, and shared lessons learned and anecdotes about how the realities of such experiments play out on the ground. He also compared international development evaluation practice with the common U.S. approach, highlighting ways in which researchers in both spheres might learn from each other.
In the afternoon panel, professor Larry Mead of New York University and Laura Peck and Stephen Bell of Abt Associates tackled the “how” of social experiments and ways to identify “impact drivers” — the various features of social program interventions that cause favorable impacts or make them greater in magnitude. Representing implementation, evaluation design and analytic perspectives, they examined how cutting-edge evaluation methods are helping researchers learn more about what works in social policy. Forum moderator Naomi Goldstein followed with remarks that highlighted the similarities in purpose of the presentations’ distinct perspectives, noting how government funders and policy-makers rely on this kind of evaluation research.
View a webcast of the discussion
Download the presentations
Read an interview with Stephen Bell and Laura Peck on getting better guidance from policy evaluation
In closing the forum, professors Larry Orr of Johns Hopkins University and Rebecca Maynard of the University of Pennsylvania discussed practical tips for conducting social experiments in a timely, cost-effective manner and for meeting future evaluation needs.
Reflecting on the forum, Mead said, “The room crackled with the sense that we were addressing crucial problems in policy research, and also in policymaking.”
Abt Associates is a mission-driven, global leader in research, evaluation and program implementation in the fields of health, social and environmental policy, and international development. Known for its rigorous approach to solving complex challenges, Abt Associates is regularly ranked as one of the top 20 global research firms and one of the top 40 international development innovators. The company has multiple offices in the U.S. and program offices in more than 40 countries.