Recent methodological advances in measuring the effectiveness of social programs were the subject of a day-long forum that I attended in Washington, D.C., in late April. The forum, sponsored by Abt Associates and the Association for Public Policy Analysis and Management (APPAM), highlighted new ideas on how to evaluate whether a particular policy or program works. Although housing was not explicitly the focus of the forum, Abt Associates researchers and other presenters offered insights applicable to housing research.
What follows are three of the ideas I heard and my thoughts on how they apply to evaluations of housing programs. Watch for a review of other ideas in Part 2 of this column in the next issue of At Home.
Idea: A growing share of program funding is allocated through competitive grants rather than by formula. This should make it possible to build evaluation into a program's initial implementation by requiring participation in the evaluation as a condition of receiving the grant. (Naomi Goldstein, Director of Evaluation at HHS's Administration for Children and Families, and Demetra Smith Nightingale, Chief Evaluation Officer of the U.S. Department of Labor)
Relevance to housing research: Like their counterparts at other agencies, HUD staff implementing new programs usually face enormous pressure to issue the Notice of Funding Availability (NOFA), select grantees, and get the funds spent. The evaluation then tries to catch up with a train that has already left the station. While these pressures are understandable, evaluation should be built into the program's initial NOFA so that grantees have a clear idea of what is expected of them and essential data can be collected from the outset. Some of the costs grantees incur to set up the evaluation and collect data should be built into grant budgets. Congress can help by adjusting the timeline to allow for the initial steps in designing the evaluation and by making participation in the evaluation an allowable, and expected, use of grant funds.
Idea: Evaluations that randomize participants into treatment and control groups are the only way to determine conclusively whether a program is effective, because they measure its results against what would have happened in the program's absence. But new programs frequently are not ready for such an evaluation. Randomized controlled trials often find that programs have no impact, leading evaluators to conclude, after the fact, that the program may not have been implemented as intended or may not have had a clear logic model. New programs should therefore go through a pilot stage during which they are tested against a set of hypotheses about the activities, outputs, and intermediate outcomes necessary for the program to have its intended effect. (Jacob Klerman, Abt Principal Associate and Senior Fellow)
Relevance to housing research: Piloting new program concepts can be especially challenging in the housing field, because flexibility and local control over program goals and designs are at the heart of many federal community development programs. A possible solution is not to try to evaluate a national "program" at all. Instead, evaluators could select a small set of local grantees at which to pilot new program approaches, then conduct an impact evaluation once those approaches have passed the test of a process evaluation and are otherwise mature enough.
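The forum discussion stayed at the level of policy, but the counterfactual logic behind randomization is easy to see in a small simulation. The sketch below is purely illustrative, with invented numbers; it shows how a randomized control group's mean outcome stands in for what would have happened in the program's absence:

```python
# Purely illustrative sketch of why randomization identifies impact:
# the control group's mean outcome serves as the counterfactual,
# "what would have happened in the absence of the program."
# All parameter values here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

n = 2000                              # program applicants
baseline = rng.normal(50, 10, n)      # outcome each person would have anyway
true_effect = 3.0                     # the program's real impact

# Randomly assign half the applicants to the treatment group.
treated = rng.permutation(n) < n // 2
outcome = baseline + true_effect * treated

# The experimental impact estimate: difference in mean outcomes.
impact = outcome[treated].mean() - outcome[~treated].mean()
se = np.sqrt(outcome[treated].var(ddof=1) / treated.sum()
             + outcome[~treated].var(ddof=1) / (~treated).sum())
print(f"estimated impact: {impact:.2f} (true: {true_effect}), SE: {se:.2f}")
```

Because assignment is random, the two groups differ only by chance and by the program itself, so the difference in means recovers the true effect within sampling error.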
Idea: Some programs do not lend themselves to randomizing individuals or households into treatment and control groups, either because the program is supposed to affect an entire community or because members of a control group cannot be isolated from the information or incentives directed to the treatment group. Randomizing sites rather than individuals is an increasingly popular and feasible alternative, often used in the education field. The approach requires a large number of sites, because the precision of the impact estimate depends on the number of sites rather than the number of individuals. (Howard Rolston, Abt Principal Associate)
Relevance to housing research: Randomization by site implies a fair amount of national control over a program model tested at many locations. That level of national control probably is not feasible for programs already implemented with substantial flexibility at the local level. It might, however, be feasible for a new grant program. The authority to subject grant applicants to a lottery, in which equally high-scoring applicants are chosen at random to implement different program models (or in which "controls" are funded only to participate in the evaluation), would need to be built into the legislation creating the program, overriding provisions of the HUD Reform Act that set narrow parameters for grant competitions.
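The point that site randomization requires many sites is, at bottom, a statistical one: when whole sites are assigned to treatment or control, the precision of the impact estimate is driven by the number of sites, not the number of people within them. The simulation sketch below, using invented parameters, compares two designs that serve the same 10,000 individuals:

```python
# Illustrative simulation (invented parameters) of why site-level
# randomization needs many sites: shocks shared by everyone at a site
# cannot be averaged away by adding more people to that site.
import numpy as np

rng = np.random.default_rng(1)

def impact_se(n_sites, people_per_site, site_sd=5.0, person_sd=10.0,
              true_effect=3.0, reps=2000):
    """Monte Carlo standard deviation of the estimated impact when whole
    sites are randomized to treatment or control."""
    estimates = []
    for _ in range(reps):
        site_effects = rng.normal(0, site_sd, n_sites)    # site-level shocks
        treated_sites = rng.permutation(n_sites) < n_sites // 2
        # Site-level mean outcomes: person-level noise averages down
        # within each site, but the site shock does not.
        site_means = (site_effects
                      + rng.normal(0, person_sd / np.sqrt(people_per_site),
                                   n_sites)
                      + true_effect * treated_sites)
        estimates.append(site_means[treated_sites].mean()
                         - site_means[~treated_sites].mean())
    return np.std(estimates)

# The same 10,000 individuals overall, split differently across sites:
print("10 sites x 1,000 people:", round(impact_se(10, 1000), 2))
print("100 sites x 100 people: ", round(impact_se(100, 100), 2))
```

With the same total sample, the 100-site design yields a standard error roughly one-third that of the 10-site design, which is why evaluators planning site-randomized studies count sites, not individuals, when assessing statistical power.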
Abt Associates is a mission-driven, global leader in research, evaluation, and program implementation in the fields of health, social and environmental policy, and international development. Known for its rigorous approach to solving complex challenges, Abt Associates is regularly ranked as one of the top 20 global research firms and one of the top 40 international development innovators. The company has multiple offices in the U.S. and program offices in more than 40 countries.