“COVID-19 Exposes the Shallowness of Our Privacy Theories”

I’ve long admired Prof. Bambauer’s work, and when I saw this forwarded to a lawprof discussion list I’m on, I asked her for permission to repost it:

The importance of testing and contact tracing to slow the spread of the novel coronavirus is now pretty well understood. The difference between the communities that do it and the ones that don’t is disturbingly grim (see, e.g., South Korea versus Italy). In a large population like the U.S., contact tracing and alerts will have to be done in an automated way with the help of mobile service providers’ geolocation data. The intensive use of location data in South Korea has led many commenters to claim that the strategy that’s been so effective there cannot be replicated in western countries with strong privacy laws.

Descriptively, it’s probably true that privacy law and instincts in the US and EU will hinder virus surveillance. The European Commission’s recent guidance on GDPR’s application to the COVID-19 crisis states that EU countries would have to introduce new legislation in order to use telecommunications data for contact tracing, and that the legislation would be reviewable by the European Court of Human Rights. No member state has done this. Even Germany, which has announced the rollout of a cellphone tracking and alert app, has decided to make use of the app voluntary. The system will only be effective if enough people opt into it. (One study suggests the minimum participation rate would have to be “near universal,” so this does not bode well.)
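
The arithmetic behind that pessimism is simple: a voluntary app only registers a contact when both people in an encounter have installed it, so if adoption is roughly independent across individuals, an adoption rate of p covers only about p² of contact events. A minimal back-of-the-envelope sketch (the numbers are illustrative and are not drawn from the study cited above):

```python
# Fraction of contact events a voluntary tracing app can "see," assuming each
# person installs the app independently with probability `adoption` (illustrative).
for adoption in (0.2, 0.4, 0.6, 0.8, 0.95):
    covered = adoption ** 2  # both parties to the contact must have the app
    print(f"adoption {adoption:.0%} -> roughly {covered:.0%} of contacts traceable")
```

Even 60% adoption leaves roughly two-thirds of contact events invisible to the system, which is why the estimated threshold is so demanding.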

And in the U.S., privacy advocacy groups like EPIC are already gearing up to challenge the collection of cellphone data by federal and state governments based on recent Fourth Amendment precedent finding that individuals have a reasonable expectation of privacy in cell phone location data. And nearly every opinion piece I read from public health experts promoting contact tracing ends with some obligatory handwringing about the privacy and ethical implications. Research universities and units of government that are comfortable advocating for draconian measures of social distancing and isolation find it necessary to stall and consult their IRBs and privacy officers before pursuing options that involve data surveillance.

While ethicists and privacy scholars certainly have something to teach regulators during a pandemic, the coronavirus has something to teach us in return. It has thrown harsh light on the drawbacks and absurdities of rigid individual control over personal data.

Objections to surveillance lose their moral and logical bearings when the alternatives are out-of-control disease or mass lockdowns. Compared to those, mass surveillance is the most liberty-preserving option. Thus, instead of reflexively trotting out privacy and ethics arguments, we should take the opportunity to examine some of the assumptions that are baked into our privacy laws now that they are being tested.

At the highest level of abstraction, the pandemic should remind us that privacy is, ultimately, an instrumental right. It is meant to achieve certain social goals in fairness, safety, and autonomy. It is not an end in itself. When privacy is cloaked in the language of fundamental human rights, its instrumental function is lost.

As with other liberties in movement and commerce, conceiving of privacy as something under each individual’s control is a useful rule of thumb when it doesn’t conflict too much with other people’s interests. But the COVID-19 crisis shows that there are circumstances under which privacy as an individual right frustrates the very values in fairness, autonomy, and physical security that it is supposed to support.

I have argued in the past that privacy should be understood as a collective interest in risk management, like negligence law, rather than a property-style right. Even if that idea is unpalatable in normal times, I would hope lawmakers can see the need to take decisive action in support of data-sharing during crises like this one. At a minimum, epidemiologists and cellphone service providers should be able to rely on implied consent to data-sharing, just as the tort system allows doctors to presume consent for emergency surgery when a patient’s wishes cannot be ascertained in time.

In fact, we should go further than this. There is a moral imperative to ignore even an express lack of consent when withholding important information puts others in danger. Just as many states affirmatively require doctors, therapists, teachers, and other fiduciaries to report certain risks even at the expense of their clients’ and wards’ privacy (e.g., New York’s requirement that doctors notify a patient’s partners about a positive HIV test if the patient fails to do so), so the same logic applies at scale to the collection and analysis of data during a pandemic.

Another reason consent is inappropriate is that it mars quantitative studies with selection bias. Medical reporting on the transmission and mortality of COVID-19 has had to rely much too heavily on data coming out of the Diamond Princess cruise ship because for a long time it was the only random sample—the only time that everybody was screened.

The United States has done a particularly poor job tracking the spread of the virus because, faced with a shortage of tests, the CDC compounded our problems by denying those tests to anybody who didn’t meet specific criteria (a set of symptoms and either recent travel or known exposure to a confirmed case). These criteria all but guaranteed that our data would suggest coughs and fevers are necessary conditions for coronavirus, and they delayed our recognition of community spread. If we are able to do antibody testing in the near future to understand who has had the virus in the past, that data will be most useful across a broad swath of people who have not self-selected into a testing facility.
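
A toy simulation makes the selection-bias point concrete. When testing is gated on symptoms, every confirmed case is symptomatic by construction, and the positivity rate among the tested says little about prevalence in the population; a random sample avoids both distortions. All of the numbers below are invented for illustration and are not estimates of the real disease:

```python
import random

random.seed(0)

# Toy population; the infection and symptom rates are invented for illustration only.
N = 100_000
true_prevalence = 0.02           # assumed fraction currently infected
p_symptoms_if_infected = 0.6     # assumed; many infections are mild or asymptomatic
p_symptoms_if_healthy = 0.05     # background coughs and fevers from other causes

population = []
for _ in range(N):
    infected = random.random() < true_prevalence
    p_sym = p_symptoms_if_infected if infected else p_symptoms_if_healthy
    population.append((infected, random.random() < p_sym))

# Policy 1: test only symptomatic people (roughly the early symptom-gated criteria).
tested = [p for p in population if p[1]]
positives = [p for p in tested if p[0]]
print("symptom-gated testing:")
print("  share of confirmed cases with symptoms:",
      sum(1 for p in positives if p[1]) / len(positives))    # 100% by construction
print("  positivity rate among tested:", len(positives) / len(tested))

# Policy 2: test a uniform random sample (the Diamond Princess-style benchmark).
sample = random.sample(population, 5_000)
positives = [p for p in sample if p[0]]
print("random-sample testing:")
print("  share of confirmed cases with symptoms:",
      sum(1 for p in positives if p[1]) / len(positives))    # ~60%, the assumed rate
print("  estimated prevalence:", len(positives) / len(sample))
```

The symptom-gated sample reports that every case has symptoms and inflates the positivity rate, while the random sample recovers both the assumed symptom rate and the assumed prevalence.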

If consent is not an appropriate concept for privacy during a crisis, that suggests there is a defect in its theory even outside of crisis time. We can improve on the theoretical underpinnings of privacy law by embracing the fact that privacy is an instrumental concept. If we are trying to achieve certain goals through its use—goals in equity, fairness, and autonomy—we should increase our effort to understand what types of uses of data implicate those outcomes, and how they can be improved through moral and legal obligations.

Fortunately, that work is already advancing at a fast clip in debates about socially responsible AI. If our policies can ensure that machine learning applications are sufficiently “fair,” and if we can agree on what fairness entails, lawmakers can begin the fruitful and necessary work of shifting privacy law away from prohibitions on data collection and sharing and toward limits on its use.
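
To give a sense of what a use-based rule might look like in practice, one widely discussed check is demographic parity: the rate of favorable decisions should not differ too much across groups. A minimal sketch (the metric choice, decisions, and group labels are mine, purely for illustration):

```python
# Demographic parity gap: difference in favorable-decision rates between two groups.
# The decisions and group labels below are invented for illustration.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]    # 1 = favorable outcome from some model
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def favorable_rate(group: str) -> float:
    picks = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picks) / len(picks)

gap = abs(favorable_rate("a") - favorable_rate("b"))
print(f"demographic parity gap: {gap:.2f}")    # a use-based rule might cap this gap
```

A rule framed this way constrains what is done with the data rather than whether the data may be collected at all.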

Health care privacy isn’t my field, so I can’t speak independently about this, but if you can point to interesting articles on the other side, please pass them along.

