Sound Evidence: An Expert Discussion on COVID-19 Research

In our Sound Evidence podcasts, Abt experts take a closer look at the work we’re doing to help people move from vulnerability to security. Sign up for monthly e-mail notifications here. Catch up with previous podcasts here.
In this new #SoundEvidence podcast, Abt Researchers Laura Edwards and Meredith Wesley delve step-by-step into the details of how we are implementing the CDC-funded RECOVER Study of SARS-CoV-2 incidence, first among healthcare personnel and first responders and then among other essential frontline workers. We surveyed participants about their symptoms and environment (e.g., if they used personal protective equipment) and asked them to self-administer nasal swabs weekly to capture both symptomatic and asymptomatic infections. When the vaccines came out, we added the task of tracking effectiveness—how good the vaccines are in the real world versus their efficacy in perfect clinical trial conditions.
Be sure to check out our other podcasts, in which Abt experts dig into the issues facing their field—and explore new ideas and solutions.
For more on this topic, listen to:
- Episode 22: Breathing Easier: Emission Data, Environmental Justice and Child Health
- Episode 20: Data and Racial Equity: How Do We Eliminate Bias from AI, Machine Learning, and More?
Read the Transcript
Meredith Wesley: Hi everyone. I'm Meredith Wesley, a senior analyst in the Abt Associates Division of Health and Environment. And I'm joined today by my colleague and friend, Laura Edwards. Laura?
Laura Edwards: Hi, my name is Laura Edwards and I'm an associate at Abt Associates. And today we're going to be talking about the CDC-funded RECOVER study. RECOVER stands for Research on the Epidemiology of SARS-CoV-2 in Essential Response Personnel. And I am the study lead; I've been in this role for a little over a year now. And Meredith also has been on the study since day one as the data lead for Abt.
Meredith: RECOVER is a network study in which we have six partners across the U.S. in six different geographic locations, and the study was originally intended to look at the incidence of SARS-CoV-2 infection among highly exposed populations, more specifically first responders and healthcare personnel.
So first responders are folks in the fire service, police departments, EMS, et cetera, and then a healthcare provider is any individual who provides hands-on medical care in a healthcare setting.
Laura: We started designing this study in collaboration with CDC in May of 2020 and began recruitment and enrollment of study participants at our six study sites at the end of Summer 2020. And right now this is anticipated to be a study that goes until Spring of 2022. So that makes it a cohort study with a longitudinal design. And we originally started with a target sample size of 2000 healthcare workers and first responders. But over time, that sample size has grown and our targets have grown along with it.
As Meredith mentioned, this study started off as an incidence study, trying to understand the rates of both symptomatic and asymptomatic infection with SARS-CoV-2 in healthcare workers and first responders.
Meredith: As Laura mentioned, RECOVER is a prospective cohort study, which means that once a participant is enrolled, we follow them over time. So since recruitment started in August of 2020, a participant may have enrolled in August of last year and may be followed all the way through the end of data collection at the beginning of 2022. Participants are first recruited and, of course, consent to the study and enroll. And there is a series of questionnaires that the participants take every several months, starting with an enrollment survey, where we collect information on demographics, occupation, and exposures—things like PPE use and a few other factors directly related to the exposures they might have in the workplace and the way that those exposures might change over time. Aside from the surveys, we also collect blood specimens from participants at several time points during the study, of course, to look at different immune markers and antibody responses at different points across time.
And then, perhaps most importantly, there is our study of incidence. We have a pretty robust text message-based illness surveillance platform. As part of that illness surveillance, we collect information each week on whether or not every participant is experiencing a set of COVID-like illness symptoms. We have a study case definition that includes eight COVID-like symptoms. And we also ask participants to self-collect a nasal specimen and ship it off once per week to our partner laboratory in Marshfield, Wisconsin, where the lab conducts PCR tests for SARS-CoV-2 on all of those samples. So participants are getting pretty real-time weekly tests for COVID during the duration of the study. And we use that data, specifically, to map out the number of new incident cases over the amount of time that participants are followed in the cohort.
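To make the incidence calculation described here concrete, below is a minimal sketch in Python. The participant records, follow-up times, and case counts are hypothetical illustrations, not RECOVER data.

```python
# Minimal sketch of an incidence-rate calculation from weekly surveillance data.
# Participant records and numbers are hypothetical, not RECOVER study data.

# Each record: (participant_id, weeks of follow-up, had an incident infection?)
cohort = [
    ("p001", 52, False),
    ("p002", 30, True),   # first positive PCR result in week 30
    ("p003", 52, False),
    ("p004", 18, True),   # first positive PCR result in week 18
]

new_cases = sum(1 for _, _, infected in cohort if infected)
person_weeks = sum(weeks for _, weeks, _ in cohort)

# Incidence rate = new cases per unit of follow-up time,
# expressed here per 1,000 person-weeks.
incidence_per_1000_pw = 1000 * new_cases / person_weeks
print(f"{new_cases} cases over {person_weeks} person-weeks "
      f"= {incidence_per_1000_pw:.1f} per 1,000 person-weeks")
```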
Laura: Our study is fairly unique in that, like Meredith said, we are asking each of our participants to self-collect and ship a nasal swab for testing every single week. And they're doing that on a pre-assigned day of the week. So if you're a Monday person, that means every single Monday for up to a year and a half, you're collecting this nasal swab and sending it in. And that's regardless of how you feel: if you feel ill, you collect that on Monday, or if you feel perfectly fine, you also collect it on Monday. And then if you feel ill at another time during the week, any other day but Monday in this example, you can also collect a nasal specimen and send it in. And so we're able to test every single participant in this network, every single week. And that helps us determine who has symptomatic COVID infection, but also those participants who never show symptoms or maybe test positive for SARS-CoV-2 before their symptoms even appear. And that's a pretty unique feature of this network. It's complicated and expensive to do but provides really valuable information in terms of incidence, or rates, of SARS-CoV-2 infection.
Because we had this platform with the weekly specimen collection going for several months, by the time that vaccines were starting to roll out at the end of 2020, CDC approached us and asked if we could transition our platform from studying just the incidence of SARS-CoV-2 to also studying the effectiveness of the vaccines. So there's a difference between vaccine efficacy and vaccine effectiveness. Vaccine efficacy is the data that came out in early November for the Pfizer and Moderna vaccines. That was the result of the clinical trials that took place mainly over the summer and into the early fall of 2020. Vaccine efficacy looks at how potent or effective these vaccines are under very controlled clinical trial settings.
That means that the cold chain was guaranteed for these vaccines, which have to be kept at very specific temperatures; the trials could guarantee that it was kept intact. And because these two vaccines in particular have a two-dose schedule, under the clinical trial settings the participants did get their dose one and dose two perfectly timed apart. But we know that that doesn't always happen in the real world. And so vaccine effectiveness looks at how helpful these vaccines are at preventing SARS-CoV-2 infection under real-world conditions. Our platform was one of the best cohort studies to study this because we were already collecting specimens every week on our 2,000 participants.
Meredith: I want to revisit a point that we had started talking about at the beginning of our chat about the initial population that we set out to recruit and enroll into the study, which, as I mentioned, was healthcare providers and first responders. Anyone who's been following along with the vaccine rollout here over the last couple of months will probably notice something very specific about those two categories of workers, which is that they were both toward the top of the list of prioritization for receiving the vaccine once it did become available, as decided or recommended by the ACIP [Advisory Committee on Immunization Practices] and then recommended, ultimately, by CDC. So this created a bit of an issue when we made the transition to studying vaccine effectiveness, because these groups were expected to get the vaccine very early.
So once we transitioned our protocol and poised ourselves to be able to answer this question about vaccine effectiveness, we did have to change our target population for this study in order to include a new group that would be a bit lower on that list of priority recipients for the vaccine. And that's because, as Laura and I began talking about, in order to study vaccine effectiveness, you need to have something to compare the vaccinated participants to. So you need, essentially, an unvaccinated control group in order to say, "This group got vaccinated. This is the number of infections that we observed during the period. And that is compared to the number of infections that we observed in this unvaccinated population followed during the same period." And so to add that component into the study, we pivoted in late fall of last year to include a third occupational category in the study design. And those are essential frontline workers.
Laura: Essential frontline workers are basically anyone whose job is critical to the functioning of the economy but who can't do their job from home. So it's people who, by the nature of their job, are exposed to SARS-CoV-2 because they are out in the world; they can't do their work remotely. And so that could be anything from grocery store workers to post office workers, custodians and janitors, and teachers who are doing in-person as opposed to remote learning. So it is a pretty broad category. And at the point in November 2020 when we decided to expand our study and include this third occupational category, we upped our sample size from 2,000 participants to almost 3,500 participants.
Meredith: When this idea was first proposed to the network, we had quite an extensive number of conversations about the best way to collect the vaccination data, since there is not currently a federal-level registry that tracks vaccination information, even for COVID. I think there have been some ideas thrown around that perhaps, especially after this last year, there would be interest in and utility in having a federal-level vaccination registry, but there currently is not one. The states had varying degrees of implementation in terms of vaccine registries before the COVID-19 pandemic. And then there has been, I think, a revived interest in building some of those platforms out. But at the time that we were looking to start collecting this information, which was immediately, in mid-December, as soon as the first vaccines went into arms across the country, we didn't have a one-stop shop where we could go and abstract this vaccination information.
And so we ended up taking a bit of a hybrid approach across the six study sites. At Abt, we created several different avenues for sites to collect and provide this information back into our study databases. Some of our study sites were specifically selected as sites because they have access to EMR [electronic medical records] data on many of the participants in the cohort. So these are people who, obviously, consent to have the study abstract certain information from their medical record. And in some cases, vaccination information would be readily available there. So an abstraction exercise would get us what we need in a very timely manner.
For sites that didn't have that option, or for participants who might not have provided access to their medical record to the study, we did have a few sites that were able to access occupational health records, again with, of course, the consent of the participant. Especially for healthcare providers, there were workplace-level vaccination campaigns going on, and so information would be available in those HR [human resources] records.
And then for sites that might not have had access to either of those options, we did provide a survey that went out via text message to all of the participants, giving them the opportunity to self-report their vaccination status to us. If, at the time they received the survey, they hadn't yet gotten the vaccine or reported that they weren't intending to get the vaccine, we used that information to drive the timing of the next iteration of that survey that they would receive. So every eight weeks or so, we were checking in with people because, of course, this is a big decision for a lot of people. And perhaps in December someone might not have been ready to take the vaccine, but when we check in again come February, maybe they've changed their mind or maybe they've gotten the first dose by that point.
So we did have the self-report option available. And as part of that, it's, I think, very widely known at this point that once you get the COVID vaccine, CDC had provided these vaccination cards, basically an individual record that was particularly useful for tracking the timing of your first dose and your second dose. It also had some information about lot number and the location where you received the vaccine, et cetera. And so as a double-check, or corroboration, I guess, of the self-reported vaccination status, we did ask participants if they had kept a copy of that vaccine card, and we gave them the opportunity to provide a copy to the study.
Laura: Each state has a slightly different registry and different timeliness of uploading and completing the data within their registry. And as you can imagine, when these vaccines first started rolling out in December, there were some kinks to work out in those processes. So it wasn't the most reliable data, especially during those early few weeks of rollout. It's significantly improved, and the process is a lot more streamlined and reliable now. And so we're able to also go back and verify vaccination within those state-level immunization registries.
Meredith: And part of our work here at Abt was to be receiving this information, the input from each of the sites, and then, before delivering an analytic data set, or the product, really, to CDC, it was our responsibility to go through and double-check everything: confirm dates of vaccination and confirm the products that participants received. We were collecting manufacturer-level information on the different products. And then we went through and made sense of these different sources, resolving any discrepancies, of course, with the very close help of all of our site partners, and ultimately put that information into a final data set that was ready for the vaccine effectiveness analyses conducted by CDC. So it was a fast and furious effort over the winter. And it was a really exciting time, I think, for our team to be involved from day one in vaccine rollout.
Laura: Yeah. It was definitely a fast and furious process for our team to collect and verify the data in mid- and late December and throughout the month of January, as we were figuring out all of our processes, which are quite streamlined now. But as we were trying to figure all of that out, as a reminder, our participants were still going through their weekly swabbing every single week, regardless of whether or not they got the vaccine. And so part of how you calculate vaccine effectiveness is to look at breakthrough infections, or people who have been vaccinated for SARS-CoV-2 and yet still have a positive nasal swab, still do get a positive test result indicating that they're infected. And so it's super, super important, as you can imagine, in doing these calculations to make sure that if you say, "Oh, this person is positive and they were vaccinated recently," you have incredibly accurate and trustworthy information about exactly what date they were vaccinated and which product they received, because all of this feeds into those VE [vaccine effectiveness] calculations.
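To illustrate the basic logic behind those VE calculations, here is a minimal sketch in Python of a simple, unadjusted estimate. The counts are made up for illustration, and the study's published analyses use person-time at risk and adjusted models rather than this bare ratio.

```python
# Minimal sketch of an unadjusted vaccine-effectiveness (VE) estimate:
# VE = 1 - (attack rate in vaccinated group / attack rate in unvaccinated group).
# All counts are illustrative only, not results from the RECOVER study.

vaccinated_infections, vaccinated_n = 3, 1000        # breakthrough infections
unvaccinated_infections, unvaccinated_n = 30, 1000   # comparison-group infections

attack_rate_vaccinated = vaccinated_infections / vaccinated_n
attack_rate_unvaccinated = unvaccinated_infections / unvaccinated_n

ve = 1 - (attack_rate_vaccinated / attack_rate_unvaccinated)
print(f"Estimated VE: {ve:.0%}")   # 90% with these illustrative numbers
```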
Meredith: Yeah. So we're excited to share that the preliminary vaccine effectiveness results were published in an MMWR [Morbidity and Mortality Weekly Report] that was released in the first week of April 2021. As part of that analysis, our results from the RECOVER network were combined with a similar cohort study out of the University of Arizona in order to have adequate sample size. And so we're super excited to be a part of that conversation about the COVID-19 vaccines, and we continue to look forward to a more complete and final vaccine effectiveness analysis that's expected from the study early this summer.