The attention paid to these issues—and the decision to embrace piloting and learning—will determine whether your organization soars in advancing its mission or stumbles from one mediocre solution to another.
At all levels of an organization, the process of executing the agency’s mission should include a continuous search for approaches or ideas that can advance its mission. For example, as program staff enroll eligible clients, conduct assessments, and deliver appropriate services, they should ask themselves: can we do this faster, easier, or more effectively? Effective organizations consciously and steadily tap the best thinking of their people to identify design opportunities with the most promise for program improvement. Effective programs also have new improvement ideas in development all the time, one idea overlapping another.
Contrary to intuition, good information about what works in one place or situation does not necessarily translate into reliable guidance on what will work everywhere. When a particular idea becomes pilot-ready, taking it to the right office or administrative unit for testing is critical to quick, accurate learning. In thinking about where to pilot, agencies should consider the tradeoff between (a) choosing work units that will be most conducive to the success of the new idea, and (b) choosing settings that typify the average units that will implement the innovation when it goes into widespread application.
Informative pilots need one of each of these types of settings, to seek both "proof of concept" in the strong setting and information about "generalizability" in the typical setting. Making both of these appraisals (described below) at once, not consecutively, will greatly streamline the learning and program improvement process.
To understand these two perspectives, suppose Unit A of your organization is the location with the greatest promise for mastering a new program or administrative reform. This may be the place where innovations receive the strongest creative input from implementation staff on the key ingredients for success and the ways around initial challenges. It is the kind of setting where management can learn whether an idea can succeed under the best circumstances. If positive results do not appear even there, management should discard the idea and move quickly to testing the next innovation. If success follows under the best of conditions, however, program administrators have "proven the concept" and can begin working on broader applications.
But suppose administrators also have the foresight to test the same initial idea in Unit B—a unit not especially well-equipped to succeed but more typical of the full sweep of an organization's offices and staff teams. Simultaneously piloting a change here will allow program administrators to learn more about what can go wrong. In response, administrators can begin to formulate ways to correct or avoid these problems once the reform is operating at scale.
Failure is very instructive, and a step toward success at a larger scale. When the reform is confined to a second test site, this learning comes at low cost—much lower than the cost of a full, agency-wide roll-out based only on successful results from one advantaged site.
Speed and accuracy are both important. Testing quickly will enable an organization to move rapidly to the next “great idea” if results are negative—or to universally adopt an innovation that tests favorably. But the test findings must be accurate for such a process to lead to continuous quality improvement.
A “with-versus-without” comparison of key outcome indicators will help determine the success of an idea. The best such approach—when an agency is ready for partial roll-out on a substantial scale—uses randomization to determine which work units undertake the innovation and which do not (i.e., the “treatment” and “control” groups). From that point, quickly obtaining measures of success for both groups allows management to see the difference (if any) and act rapidly based on that information.
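The randomized with-versus-without comparison described above can be sketched in a few lines of code. This is a minimal illustration, not a full evaluation design: the unit names, outcome values, and function names below are all hypothetical, and a real analysis would also assess statistical significance rather than report a raw difference in means.

```python
import random
import statistics

def assign_units(units, seed=0):
    """Randomly split work units into a 'treatment' group (which
    pilots the innovation) and a 'control' group (which does not)."""
    rng = random.Random(seed)  # fixed seed keeps the split reproducible
    shuffled = list(units)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def with_vs_without(outcomes, treatment, control):
    """Difference in the mean outcome indicator between the units
    that undertook the innovation and those that did not."""
    treated_mean = statistics.mean(outcomes[u] for u in treatment)
    control_mean = statistics.mean(outcomes[u] for u in control)
    return treated_mean - control_mean

# Hypothetical example: four work units with an outcome measure
# (say, cases successfully closed per month) observed after the pilot.
outcomes = {"Unit A": 10.0, "Unit B": 12.0, "Unit C": 8.0, "Unit D": 9.0}
treatment, control = assign_units(list(outcomes))
effect = with_vs_without(outcomes, treatment, control)
```

A positive `effect` would suggest the innovation is helping; a near-zero or negative one would signal that management should move on to the next idea.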
In short, program administrators who are interested in continuous program improvement are wise to:
Institutionalize an unending search for new approaches and ideas;
Implement pilot tests that provide both “proof of concept” and information on “generalizability;” and
Commit to producing speedy and reliable evidence.
Taken together, these practices can help an agency rapidly implement winning ideas and avoid failures. Over time, they will drive better program results and, most importantly, help solve the challenging family and community problems that social agencies address.
Piloting a new strategy for program administration or service delivery can be informative and support continuous quality improvement. But we should be careful to consider some key questions first.