What Have We Learned about Housing Recovery After Disaster Strikes?

The 2017 and 2005 hurricane seasons have many similarities, so we are revisiting findings from evaluations we conducted after Hurricanes Katrina and Rita to summarize what worked in housing people who lost their homes in the storms.


Getting to Reliable Information Quickly: The Role of Random Assignment

Programs that have gradually developed cultures that support rapid cycle evaluations can learn what works quickly and implement continuous improvements.


Not So Fast: When Indeed Does a Social Program Need an Impact Evaluation?

A social program needs an impact evaluation when the program is ready for one.


The Need for Speed: How Rapid Cycle Evaluation Can Help Your Organization Excel

Piloting a new strategy for program administration or service delivery can be informative and support continuous quality improvement. But we should be careful to consider some key questions first.


If You Build It, They Will Come: New Evidence on Latino Families’ Use of Early Care and Education

High-quality pre-kindergarten programs can provide sizable benefits for young children and families. For Hispanic families, early education can have strong effects on children's school readiness, according to new research.


Performance Measurement? Proceed with Caution

A theme of the "New Management" is to better measure government performance and, in particular, the performance of individual workers. However, one must carefully weigh the costs and benefits of performance measurement.


Data without Design: Don’t Do It!

In a recent blog post, Jacob Klerman and I argued that simply having administrative data available to answer a question about the impact of a program or intervention is not enough; the data must be paired with a good research design. Here is an all-too-typical example of why relying on administrative data, even when it includes the primary outcome of interest, is insufficient when a participant's entry into a program cannot be explained. Since my purpose is general and not about the particular study, I've anonymized its description.


Sometimes, the Story is in the Subgroups

Program evaluators are often asked to determine whether a given program or policy had its intended effects. Getting answers to the question "did it work?" remains the primary goal of many evaluations. But recently, many policymakers and program leaders also want to know "for whom and under what circumstances did the program work?"


Want Better Evaluations? First Do This

The ultimate goal of policy analysis is to identify programs that work. Policymakers need to know: Does this program work? For whom does this program work? When does this program work? And would some variant of this program work better? To answer these questions, we need estimates of program impact; i.e., outcomes with the program relative to what outcomes would have been without the program. The "gold standard" approach to estimating impact is random assignment, but other methods are often appropriate.
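As a toy illustration of that definition (using simulated data, not figures from any study mentioned here), random assignment lets us estimate impact as a simple difference in mean outcomes between the treatment and control groups:

```python
import random
import statistics

random.seed(1)

# Hypothetical population: each person has a baseline outcome
# (e.g., earnings), and the program raises it by 5 units on average.
TRUE_EFFECT = 5
baselines = [random.gauss(50, 10) for _ in range(10_000)]

# Random assignment: a coin flip decides treatment vs. control,
# so the two groups are alike on average before the program.
treatment, control = [], []
for baseline in baselines:
    if random.random() < 0.5:
        treatment.append(baseline + TRUE_EFFECT)  # outcome with the program
    else:
        control.append(baseline)                  # outcome without the program

# The impact estimate: mean outcome with the program minus
# mean outcome without it.
impact = statistics.mean(treatment) - statistics.mean(control)
print(round(impact, 1))  # close to the true effect of 5
```

Because assignment is random, the control group's mean is a valid stand-in for what the treatment group's outcomes would have been without the program, which is exactly the counterfactual the impact definition requires.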


Administrative Data: When Is It Useful for Estimating Impact?

In early October, we had the pleasure of participating in a meeting sponsored by the U.S. Department of Health and Human Services entitled "The Promises and Challenges of Administrative Data in Social Policy Research." Consistent with the title of the meeting, the presentations emphasized both the promise of using administrative data for policy analysis and the real challenges of doing so: getting access to the data, understanding what it means, and verifying that it is sufficient for the intended purpose.
