Special Issue Provides Tools to Open Evaluation’s ‘Black Box’

How do you get more from your evaluation? In a special issue of New Directions for Evaluation, Abt’s Laura Peck brings together leading evaluation experts to answer key questions in program evaluation and evaluation research.

It’s not enough to know whether a program or policy works, says Abt Associates’ Laura Peck, PhD. Program administrators and policymakers also want to know what’s inside the so-called ‘black box’: just how does a program or policy make a difference, and by how much?

Now program administrators and evaluation students have methods to help them answer these questions.

Peck, a principal associate and scientist, brings together some of the leading scholars and practitioners in “Social Experiments in Practice: The What, Why, When, Where and How of Experimental Design and Analysis,” a special issue of New Directions for Evaluation. In the issue, Peck and fellow authors provide in-depth answers to tough evaluation questions across social science research, including when to conduct experiments.

Over eight chapters, the authors – including Abt’s Howard Rolston, Jacob Klerman, and Stephen Bell – discuss how evaluation design and analysis have changed over the last four decades and how those starting out in the field can carry out experiments that answer important policy questions.

Dispelling Myths of Experimental Research

One key perspective, Peck explains, is that common criticisms of experiments no longer hold. For example, the claim that experimental results have limited generalizability is now the subject of active research, with new design and analytic tools positioning experiments to perform better in this regard in the future.

“The control group as we know it is passé,” Peck said. “Program administrators have resisted using experimental evaluations because no one wants to deny people access to help. But the latest designs focus on program improvement. These designs are flexible tests that allow administrators to roll out changes to learn about improvements without denying access, instead testing alternative versions of programs to see what works best.”

Peck and colleagues outline these ideas and focus the chapters on concrete ways practitioners can use evaluation to inform government programs at local, state, and federal levels, as well as foundation and nonprofit initiatives. The goal, Peck said, is to enable people to get more from their experiments and improve generalizability of evaluation results.

“Experimental evaluations can illuminate the so-called ‘black box’ now, much more so than in the past,” Peck said. “This special issue brings the best advice forward to help the field learn what is working, why it is working, and how it is working.”

The special issue was published by Jossey-Bass and the American Evaluation Association.

Read the special issue.

