Lessons from Iowa: Twin Tales of Caution in the Digital Age

February 12, 2020

The opening days of February 2020 were expected to turn a spotlight on the state of Iowa as it hosted the first official voting for nominees for the 2020 presidential election. On Saturday, February 1, pollster Selzer & Company planned to release the final results of its poll of Iowans, which had proven highly accurate at projecting caucus winners since 1988. On Monday, February 3, Iowans would gather for the Democratic caucuses, with caucus leaders using a new smartphone-based app to make counting and reporting results for the statewide tally easier and quicker.

Due to unrelated back-to-back mishaps, however, neither of these data collection efforts went as planned. It wasn’t a malicious cybersecurity threat or hacking by a foreign nation that stymied these groups’ efforts. Rather, several basic missteps in data collection and in technology development, testing, and use were at fault, and these errors offer some critical lessons for all of us involved in the collection, analysis, and use of data and technologies.

Polling in Iowa
A basic tenet of good research is that if we cannot have a high degree of confidence in the methods we use, then we cannot have high confidence in the results we release. Well-conducted surveys are rigorously designed to minimize as many potential sources of bias as possible (eliminating all of them, sadly, is impossible). Everything from the questionnaire, the sample selection, the training of interviewers, and the protocols for contacting respondents to the reading of the questions is carefully developed, implemented, and monitored. In the Iowa poll, however, one factor apparently was not considered: an interviewer enlarging the font size on his or her screen to better view the questions. Unfortunately, this factor did come into play, apparently pushing at least one response option off the screen.

Would the omission of one answer to one question in a single interview necessarily skew a poll’s results? Not likely. But the larger issue was that the firm could not determine with confidence whether this had happened more than once, much less how many times. Unable to assure the quality of their data, they decided not to publish the poll. That was clearly a difficult decision for the polling company but, in my view, the correct one.

Seemingly innocuous issues, such as changing the font on a screen for better viewing, can and do arise in complex data collection efforts. Like many of our colleagues in the survey industry, Abt Associates prioritizes rigor in our methods to ensure the highest confidence in, and quality of, the findings we produce. We, like others, can learn from the Iowa poll experience. There are specific lessons: prevent interviewers from changing their screens in any way other than as initially programmed, support interviewers who have a visual impairment or need a larger font through dedicated stations and additional training, and track and monitor any changes made to the system. There is also a broader one: when collecting and working with data in the digital age, we need to be cognizant of even the smallest details, which can have outsized impacts.
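To make those specific lessons concrete, here is a minimal sketch, in Python, of how an interviewing client might constrain and log display changes. Everything here (the function names, the approved sizes, the crude layout check) is invented for illustration; real computer-assisted interviewing software would measure the rendered screen rather than estimate it.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("cai_client")

# Hypothetical set of font sizes the questionnaire layout has been
# verified against; anything else is rejected and logged for QC review.
APPROVED_FONT_SIZES = {12, 14, 16}

def options_fit(options: list[str], font_size: int,
                screen_height_px: int = 800) -> bool:
    """Rough check that every response option stays on screen at this size.

    Assumes each option needs about twice the font size in vertical
    pixels; a real client would measure the rendered layout instead.
    """
    return len(options) * font_size * 2 <= screen_height_px

def request_font_change(interviewer_id: str, new_size: int,
                        options: list[str]) -> bool:
    """Apply a font change only if it is approved and keeps every option
    visible; log each request so QC staff can audit display changes."""
    if new_size not in APPROVED_FONT_SIZES or not options_fit(options, new_size):
        log.warning("Rejected font size %d for interviewer %s",
                    new_size, interviewer_id)
        return False
    log.info("Font size set to %d for interviewer %s", new_size, interviewer_id)
    return True
```

The point is less the specific check than the audit trail: had every display change been logged, the pollster could have counted exactly how many interviews were affected instead of having to discard the poll.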

The Iowa Caucuses
Two days after the poll release was canceled, as the media, political campaigns, and public waited for the results of the Iowa Democratic caucuses, it was reported that an error in the app developed specifically to speed the relay of individual caucus results to the statewide tally was causing problems. An investigation determined that, while the app was apparently recording and storing data accurately, a coding error meant that only part of the data was being transmitted. Moreover, it appears that many of the local caucus leaders downloaded the app only on the day of the caucuses and thus had little familiarity with its features and use. As a result of these issues, the more traditional telephone backup system was overwhelmed, further preventing many local caucuses from reporting their numbers. Again, it appears that there was a threefold miss on what should be fundamentals of technology use:

  1. A critical error in a key aspect of the app that was not discovered through either code rechecking or substantial load testing of the application (a sketch of a check that could catch such a transmission error follows this list),
  2. Inadequate training of app users, and
  3. An unreliable backup system for a critical function.
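
As flagged in the first item, a reconciliation check is one way a partial-transmission error of this kind can be caught. Below is a minimal sketch, in Python, that sends a record count and checksum alongside the tallies so the receiver can verify completeness. The payload format and function names are hypothetical; this is not how the actual caucus app worked.

```python
import hashlib
import json

def package_results(precinct_id: str, tallies: dict[str, int]) -> dict:
    """Bundle caucus tallies with a count and a checksum so the receiver
    can detect whether anything was dropped in transit."""
    body = json.dumps(tallies, sort_keys=True).encode()
    return {
        "precinct": precinct_id,
        "tallies": tallies,
        "record_count": len(tallies),
        "checksum": hashlib.sha256(body).hexdigest(),
    }

def verify_results(payload: dict) -> bool:
    """Server-side check: recompute the count and checksum from the
    received tallies and compare them to what the client claimed."""
    body = json.dumps(payload["tallies"], sort_keys=True).encode()
    return (len(payload["tallies"]) == payload["record_count"]
            and hashlib.sha256(body).hexdigest() == payload["checksum"])
```

With a check like this in place, a transmission that silently drops part of the data fails verification at the receiving end rather than being tallied as complete.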


Putting in Place Precautions
Digital systems are complex things, but there are some key steps that should always be taken before technologies are rolled into “production” scenarios. At Abt, we are implementing a mandatory “code review” procedure, whereby critical code written by one developer is checked by a second. We also pre-test and ideally “load test” systems (simulating a large number of users or passing large amounts of test data through the system) to identify any potential issues; a sketch of such a test appears below. We provide clear and ample training for end users of the technologies we produce. This is admittedly easier with project interviewers than when technology is used by someone who is not part of a “project team” (such as on-site clinicians or program officers), but through in-app help, web-based or written materials, and help desk support, we strive to ensure that end users feel confident in a digital tool’s features and uses. Finally, we develop fully functional backup systems or protocols to ensure our failsafe mechanisms are effective.
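
As an illustration of the load-testing step, the sketch below pushes roughly as many simulated submissions as Iowa has precincts (about 1,700) through a stub endpoint and reports throughput and failures. The submit_results stub and its simulated failure rate are invented for illustration; a real test would exercise the actual reporting service in a staging environment.

```python
import concurrent.futures
import random
import time

def submit_results(precinct_id: str) -> bool:
    """Stand-in for the real submission call: simulates network latency
    and an occasional failure so the harness has something to report."""
    time.sleep(random.uniform(0.01, 0.05))
    return random.random() > 0.02  # ~2% simulated failure rate

def load_test(n_precincts: int = 1700, max_workers: int = 100) -> None:
    """Submit from many simulated precincts concurrently and report
    elapsed time and failure count."""
    start = time.perf_counter()
    precincts = (f"precinct-{i}" for i in range(n_precincts))
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(submit_results, precincts))
    elapsed = time.perf_counter() - start
    print(f"{n_precincts} submissions in {elapsed:.1f}s, "
          f"{results.count(False)} failures")

if __name__ == "__main__":
    load_test()
```

A run like this, repeated at increasing levels of concurrency, would surface outright errors as well as capacity bottlenecks before election night rather than during it.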

The key lesson of Iowa is that these types of issues can affect anyone involved in collecting data or developing digital tools to facilitate data collection, transmission, or use. The actual error may be small or seemingly innocuous, but it can lead to major consequences, such as rendering data unusable or eroding confidence in the results that are ultimately released. The digital age comes with perils both large and small; to succeed in this arena, we need to keep our eye on them all.

 