During a virtual roundtable hosted by the Stigler Center at the University of Chicago’s Booth School of Business, participants highlighted the dangers of embracing new surveillance technology too quickly during the COVID-19 pandemic, warning that federal and corporate overreach could end up infringing on individual privacy.
The discussion, part of the Stigler Center’s series on the “Political Economy of COVID-19,” featured Lior Strahilevitz, a professor at the University of Chicago Law School, and Julia Angwin, editor-in-chief of The Markup, a news site that investigates the tech industry. (Angwin also graduated from the U. of C.) The conversation was moderated by Betsy Reed, editor-in-chief of The Intercept.
Angwin began the discussion by pointing to possible similarities between the development of public health technology to help combat the pandemic and the federal government’s creation of surveillance programs after the September 11 terrorist attacks.
“After 9/11, when this nation faced a crisis...we learned that the U.S. secretly enacted a whole bunch of mass surveillance programs to protect themselves against future terrorist events,” she said. “People are turning again to tech and surveillance as a solution — it’s very tempting to think we could technology our way through this.”
For example, contact tracing — identifying and tracking people who have been diagnosed with COVID-19 in order to determine possible chains of transmission for the disease — can often require widespread data collection in order to be implemented on a large scale. “How can we track this disease without actually tracking and building a database of every single person’s actual movements?” said Angwin. “We’ve never built a complete database of everyone’s movements — the ramifications of that are almost boggling to the mind.”
As Strahilevitz pointed out, laws in the United States don’t necessarily prevent large-scale data collection for public health uses. That’s also the case in South Korea, where a 2015 amendment to the country’s “stringent data privacy law” ultimately allowed the government to collect and share individual data with government agencies and the public.
“If [data] is used for law-enforcement purposes, then the judiciary looks at these issues with a lot of skepticism, whereas when it’s used solely for public health purposes then the judiciary tends to be much more permissive with regard to what the government wants to do,” said Strahilevitz. “We have to be thinking about how to give people more awareness of what’s happening and more effective autonomy when we’re in non-emergency times. Government regulators have largely not been up to the task of protecting people.”
Both Angwin and Strahilevitz argued that the problem with allowing for the creation of databases like this is the possibility of “mission creep”: the gradual expansion of a program beyond its initial scope. “The Patriot Act turned into, ‘Here are all these investigative tools the Justice Department has wanted for a long, long time.’ There was an opportunity amidst very substantial public anxiety to give the DOJ the wish list and have it fly through Congress,” said Strahilevitz.
He pointed to the Australian government’s new contact tracing app as a positive example of how to limit the extent to which data is collected from people. The app holds onto data for only 21 days, and there is no centralized database with everyone’s information — instead, each person’s data is only stored on their own phone.
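The design Strahilevitz describes, with contact records kept only on each person’s own phone and discarded after 21 days, can be sketched in a few lines. This is a hypothetical illustration of that retention policy, not the Australian app’s actual code; the class and method names (`LocalContactLog`, `purge_expired`) are assumptions made for the example.

```python
from datetime import datetime, timedelta

# Retention window described for the Australian app: 21 days.
RETENTION = timedelta(days=21)

class LocalContactLog:
    """Contact records held only on the user's own device.

    Nothing here is ever uploaded to a central database; each
    phone keeps, and eventually discards, its own records.
    """

    def __init__(self):
        self._records = []  # list of (timestamp, anonymous_contact_id)

    def record_contact(self, contact_id, now=None):
        """Store an anonymous identifier observed from a nearby phone."""
        now = now or datetime.utcnow()
        self._records.append((now, contact_id))

    def purge_expired(self, now=None):
        """Drop any record older than the retention window."""
        now = now or datetime.utcnow()
        cutoff = now - RETENTION
        self._records = [(t, c) for t, c in self._records if t >= cutoff]

    def contacts(self):
        """Return the identifiers still within the retention window."""
        return [c for _, c in self._records]
```

The point of the sketch is that deletion is automatic and local: once the window passes, no copy of the record exists anywhere.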
But in the United States, Angwin said, it is private tech companies rather than the federal government that have advocated most strongly for privacy protection: Apple and Google are developing the framework for a new contact-tracing application.
“The Apple and Google proposal is a very thoughtful one. It’s mostly decentralized, and they’ve tried hard not to build a central database for users,” she said. “But it’s not actually what governments want. Apple and Google built the backend but said that the apps themselves should be built by public health authorities — but those authorities want to do the tracking themselves and build a database.” Angwin added that, even a decade ago, it would have been unlikely for tech companies to argue in favor of these privacy measures, and that public scrutiny seems to have helped change their behavior for the better.
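The decentralized approach Angwin describes can be sketched roughly as follows: phones broadcast rotating random identifiers, diagnosed users publish theirs, and each phone checks for matches locally, so no central server learns who was near whom. The function names here are hypothetical, and the sketch greatly simplifies the actual Apple/Google scheme, which derives identifiers from daily keys rather than publishing them directly.

```python
import secrets

def new_rolling_id():
    """Hypothetical stand-in for the rotating random identifiers
    broadcast over Bluetooth in a decentralized design. A fresh
    random value carries no stable link to the phone's owner."""
    return secrets.token_hex(16)

def check_exposure(observed_ids, published_diagnosis_ids):
    """Runs entirely on the user's own phone: compare the identifiers
    this device observed nearby against the public list published by
    users who reported a diagnosis. Only the overlap is revealed, and
    only to the phone's owner."""
    return sorted(set(observed_ids) & set(published_diagnosis_ids))
```

The design choice worth noticing is where the comparison happens: because matching is local, the published list reveals nothing about anyone who has not opted in by reporting a diagnosis.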
Strahilevitz also noted that there are sizable contingents of employees within companies like Apple and Google who are privacy advocates. “Google is a they, not an it,” he said. “They’re worried about what their own employees are thinking. They care a lot about retaining talent.”
Reed ended the discussion by asking Angwin and Strahilevitz what other kinds of technological developments they were worried about. Angwin pointed to predictive algorithms, which make forecasts about people’s behaviors and preferences. They’re often used in advertising or entertainment, but also show up frequently in police work — the Chicago Police Department uses predictive algorithms to determine who is at risk of experiencing gun violence.
But Angwin worries about what will happen once these algorithms become even more widespread than they already are. “We’re going down the road of thinking we can predict human behavior by collecting a few signals from people’s phones,” she said. “I think it’s really scary to think about being judged on your future behavior. You sentence a lot of people almost at birth.”
Strahilevitz, for his part, alluded to recent news reports that the facial recognition company Clearview AI has offered to help federal and state governments with contact tracing during the pandemic.
“When I hear about potential collaborations between the government and Clearview AI to use facial recognition I shudder,” he said. “Those kinds of tools are gonna so alarm the public. I think those are the kinds of tools where the benefits of using them are not zero, but the harms are really substantial — I don’t think the government should be employing those kinds of tools.”