Getting Meaningful Technical Assistance from Webinars: We Can Evaluate That

November 22, 2016
The field of technical assistance, which provides support to programs and organizations to improve the quality of their work, is changing rapidly. Many organizations that provide national and local technical assistance have moved toward the use of “virtual” technical assistance (TA) or program support. Much of that low-intensity support is delivered via webinars.

During most webinars, the TA provider cannot see the audience, and interaction with the audience is limited to poll responses or questions posed through a chat box. Providers therefore have little information about the audience's level of engagement (e.g., are participants listening intently, or is the webinar background sound while they multitask?) or about the relevance and usefulness of the information (e.g., are participants nodding in agreement, or raising their eyebrows in bewilderment?). This virtual format can leave the TA provider in the dark, not knowing whether the webinar resonated with the audience or achieved its outcomes. In this blog, I offer some suggestions for TA providers who are interested in evaluating webinar-based TA.

1. Move Beyond Outputs

Most webinar-based TA evaluations focus on relatively easy-to-measure outputs, such as the number of people registered and the number of attendees. Some go a step further and ask "customer satisfaction" questions, such as the extent to which participants enjoyed the presentation, found it worth their time, or thought it was high quality. While this information, particularly participation counts, is helpful, it is not sufficient in today's evidence-based world to justify continuing webinar-based TA. Instead, TA providers should develop strong immediate outcome measures whenever possible.

2. Develop Strong Immediate Outcome Measures

Having relevant, clear, and valid measures of immediate outcome change is vital to the rigor and accuracy of your evaluation. I recommend using "true" knowledge measures, like an academic test, rather than perceptions of knowledge change. For example, ask "What percent of low-income children reside in a two-parent family?" rather than "How much do you think you've learned about low-income children's families as a result of this webinar?" Additionally, ask questions that are difficult enough to reduce the chance of lucky guessing; I've found that true/false items are easily guessed.
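To see why true/false items are so guessable, consider a simple binomial model of chance performance. The sketch below (the item counts, pass threshold, and function name are illustrative, not part of any particular evaluation) compares the probability of a participant scoring well purely by guessing on true/false items versus four-option multiple-choice items:

```python
import math

def chance_of_passing(n_items: int, p_guess: float, pass_threshold: int) -> float:
    """Probability of getting at least `pass_threshold` of `n_items`
    correct by guessing alone, under a binomial model."""
    return sum(
        math.comb(n_items, k) * p_guess**k * (1 - p_guess) ** (n_items - k)
        for k in range(pass_threshold, n_items + 1)
    )

# Chance of scoring 7+ out of 10 by guessing:
tf = chance_of_passing(10, 0.50, 7)  # true/false: 50% guess rate per item
mc = chance_of_passing(10, 0.25, 7)  # four-option items: 25% guess rate
print(f"true/false: {tf:.1%}, four-option: {mc:.1%}")
```

Under these assumptions, a pure guesser "passes" a ten-item true/false test roughly one time in six, but almost never passes the four-option version, which is why harder-to-guess item formats protect the validity of a knowledge measure.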

Finally, make the measures answerable even if participants don’t know your specific jargon or program speak. In many human service programs, we use mnemonics or cute phrases to ease knowledge retention (think of “My Very Excellent Mother Just Served Us Nine Pizzas” to remember the order of planets in our solar system). Measures should test for the presence of the underlying knowledge and not for whether a participant knows what a “shortcut” stands for.

For example, a popular healthy relationship program recommends that couples have a “talking stick” when discussing sensitive topics to ensure that only one person speaks at a time. A good knowledge measure would capture whether participants understand the importance of couples taking turns talking and listening, rather than the meaning of the talking stick. This allows for knowledgeable individuals to provide the correct answer at pre-test, even if they don’t know the cute or catchy way a curriculum is teaching the concept.

3. Use Your Logic Model

In all likelihood, you will not be able to identify a comparison group for a webinar-TA evaluation; the evaluation will be a single treatment-group design. Given this, it is imperative that when planning the actual webinar content and the evaluation, you follow a strong logic model and define your learning objectives. What specifically do you want to achieve as a result of the webinar, and what is the path by which those achievements will lead to individual, organizational, or systems-level changes? Build your evaluation to capture those immediate outcome changes that are likely to lead to longer-term, meaningful change. Immediate outcomes likely to result from TA are usually categorized as changes in knowledge, attitudes, opinions, or behavioral intentions.
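In a single treatment-group design, immediate outcome change is typically summarized by comparing each participant's pre-test and post-test scores. The sketch below is one possible way to do that (the scores, function name, and the choice of a paired Cohen's d as the effect size are illustrative assumptions, not a prescribed method):

```python
from statistics import mean, stdev

def prepost_gain(pre: list[float], post: list[float]) -> dict:
    """Summarize pre/post change for a single-group design:
    mean gain plus a standardized effect size (mean of the
    paired differences divided by their standard deviation)."""
    diffs = [after - before for before, after in zip(pre, post)]
    mean_gain = mean(diffs)
    return {"mean_gain": mean_gain, "cohens_d": mean_gain / stdev(diffs)}

# Hypothetical knowledge scores (percent correct) for five participants:
pre_scores = [40, 55, 60, 50, 45]
post_scores = [70, 65, 80, 60, 75]
print(prepost_gain(pre_scores, post_scores))
```

Because there is no comparison group, a gain like this shows only that scores rose after the webinar, not that the webinar caused the rise; that is exactly why the logic model and well-defined learning objectives matter for interpreting it.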

4. Harness the Power of Your Webinar Platform

Achieving sufficient response rates through virtual evaluation methodologies is sometimes challenging. I recommend leveraging your webinar platform’s functions to collect data. Suggestions include:
  • Collecting mandatory pre-tests at webinar registration;
  • Requiring responses to pre-test items for admittance from the virtual “lobby” into the webinar;
  • Informing participants that slides, recordings and/or other resource materials will be emailed to post-test completers;
  • Allowing post-test completers to ask questions verbally rather than through a chat box; and
  • Letting participants’ project officers know which programs completed the evaluation.

Using these tips along with established evaluation best practices can help us understand when, on what topics, and under which circumstances low-intensity virtual TA is effective. Over time, such information will greatly strengthen the technical assistance field and guide the selection of methodologies that effectively and cost-efficiently address the goals of technical assistance.