COVID-19 has forced the rapid adoption of digital healthcare – from telehealth to remote monitoring. Digital epidemiology is an area of technological adoption in medicine that can help inform the future of disease surveillance, but there are ethical questions about how this data is used.
Digital epidemiology is the use of data generated outside the public health system for disease surveillance. A new paper has highlighted how several countries have taken digital epidemiology to the next level in responding to COVID-19 in areas such as case detection, contact tracing and isolation.
The paper, published in the journal Science, explores the ethical concerns raised around the use of digital technologies and data privacy in health surveillance, and how the use of this data can inform the future of epidemiology.
The ethics of data-driven approaches to healthcare
The authors, Michelle Mello and C. Jason Wang, say that several data-driven epidemiological approaches that have been proposed or trialled for COVID-19 are justified if implemented through transparent processes that involve oversight. At the same time, they caution: “We question the necessity to conduct contact tracing using cell phones and other private data without users’ consent.”
In several countries around the globe, a range of innovative uses of personal data from outside the health sector has been undertaken to meet the challenges posed by COVID-19.
The authors propose that digital epidemiology can involve both personal and social harms, but so can failing to harness the enormous power of data to arrest epidemics. This debate, say the authors, raises important questions for governments, such as what the exit strategy is for any approach that is applied.
When it comes to respecting autonomy – asking people for permission before accessing their personal information – an important question is whether infringing that autonomy is likely to be effective, as is how to ensure that those involved in the epidemiologic analysis of novel data sources are accountable for what they do.
The authors said: “Ordinary presumptions about what kind of data uses are ethically acceptable for governments and companies to pursue may need to flex in these times, but the key principles guiding decision making remain the same.”
The authors believe some of the novel epidemiological approaches described for COVID-19 are justified if implemented with the right parameters: “When the epidemic abates, it will be important to reconsider these approaches in light of what has been learned about their benefits and the public’s attitudes toward them, so that we are prepared to deploy cutting-edge methods responsibly during the next epidemic.”
The policy recommendation put forward in the paper emphasises that digital surveillance may have particular value for vulnerable groups such as the elderly and persons with chronic illness who may otherwise remain confined after others are released.
The authors state: ‘For disease outbreaks, what constitutes the least restrictive alternative depends on the available public health resources, evidence concerning what behaviours people will engage in without coercive public health orders, features of the pathogen’s transmissibility, and the stage of the epidemic. Even if such a weighing points toward the imposition of digital surveillance, the least restrictive alternative principle can help minimise privacy intrusions — for example, through data minimisation. We consider applications of these two principles to particular technologies with value in combating the novel coronavirus and similar pathogens.’
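To make the idea of data minimisation concrete, the sketch below shows one way a contact-tracing dataset might be reduced before analysis: direct identifiers are replaced with salted one-way hashes, exact GPS fixes are coarsened, and fields the analysis does not need are dropped. The field names and the sample record are invented for illustration; this is a sketch of the general principle, not a description of any system discussed in the paper.

```python
import hashlib
import secrets

SALT = secrets.token_hex(16)  # kept by the data custodian, never shared

def pseudonymise(user_id: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((SALT + user_id).encode()).hexdigest()

def coarsen(lat: float, lon: float, places: int = 2) -> tuple:
    """Round coordinates to roughly 1 km precision instead of exact GPS fixes."""
    return round(lat, places), round(lon, places)

def minimise(record: dict) -> dict:
    """Keep only the fields the analysis needs, in reduced-precision form."""
    return {
        "id": pseudonymise(record["user_id"]),
        "location": coarsen(record["lat"], record["lon"]),
        "day": record["timestamp"][:10],  # date only, not time of day
    }

# Hypothetical raw record; name, exact position, and device details never leave.
raw = {"user_id": "alice@example.com", "lat": 51.50072, "lon": -0.12462,
       "timestamp": "2020-04-01T14:23:05", "device_model": "XYZ-9"}
print(minimise(raw))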
Machine learning and AI
The paper highlights the growing potential of machine learning and AI to forecast the spread of disease, which the authors say does not raise the same ethical questions as digital surveillance so long as it does not use individually identified data. However, algorithms that use personally identifiable information ‘must be considered more seriously because of the social consequences attached to these determinations.’
The authors say: ‘Here, the main concerns are the usual ones about algorithmic bias and error, and solutions offered for such problems in other contexts are applicable. These include making code and datasets publicly accessible and subject to peer review and continuing to refine the model as additional data become available. Additional safeguards should include creating mechanisms for individuals to challenge algorithmic classifications, making classifications time limited, and using classifications to support recommendations rather than legal restrictions on individuals’ movement.’
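To illustrate the distinction the authors draw, the minimal sketch below fits a simple log-linear growth model to aggregate daily case totals and projects a week ahead; no individually identified data enters the pipeline. The case counts are invented, and the model stands in for the far more sophisticated machine learning forecasts the paper has in mind, which raise privacy concerns only when personal data is involved.

```python
import numpy as np

# Invented aggregate daily case counts for a 14-day window.
days = np.arange(14)
cases = np.array([12, 15, 21, 26, 35, 44, 58, 71, 93, 118, 150, 192, 240, 305])

# Fit log-linear growth: log(cases) ~ a*t + b, i.e. cases ~ exp(b) * exp(a*t).
a, b = np.polyfit(days, np.log(cases), 1)
print(f"estimated daily growth rate: {np.exp(a) - 1:.1%}")

# Project one week ahead from the fitted curve.
future = np.arange(14, 21)
forecast = np.exp(b + a * future)
for t, f in zip(future, forecast):
    print(f"day {t}: ~{f:.0f} expected cases")
```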
They continue: ‘Sturdy oversight structures are not easy to stand up in the middle of an emergency. Work will be needed after the COVID-19 threat fades to ensure that we are better prepared next time.
‘There has been much talk of harnessing the power and ingenuity of the tech sector to fight disease outbreaks, but ‘harnessing’ implies carefully placed constraints and firm direction by a driver. We have yet to craft that yoke.’