Several months ago, saying that the “cure” facial recognition offers is worse than the ills it claims to solve would have seemed hyperbolic. But now the metaphor has become all too literal, and the medicine it promises isn’t quite so easy to reject when sickness is sweeping the globe.

Even as it depresses economies across the world, the coronavirus pandemic has sparked a new period of growth and development for facial recognition technology. Creators pitch their tools as a way to identify sick individuals without the risks of close-contact screening.

In China, the biometrics company Telpo has launched non-contact body temperature measurement terminals that, the company claims, can identify users even when they are wearing face masks. Telpo is near-evangelical about how useful its technology could be during the coronavirus crisis, writing that “this technology can not only reduce the risk of cross infection but also improve traffic efficiency by more than 10 times […] It is suitable for government, customs, airports, railway stations, enterprises, schools, communities, and other crowded public places.”

COVID-19: A Push Towards Dystopia?

At first glance, Telpo’s offerings seem…good. Of course we want to limit the spread of infection across public spaces; of course we want to protect our health workers by using contactless diagnostic tools. Wouldn’t we be remiss if we didn’t at least consider the opportunity?

And this is the heart of the problem. The marketing pitch is tempting in these anxious, fearful times. But in practice, using facial recognition to track the coronavirus can be downright terrifying. Take Russia as an example: according to BBC reports, city officials in Moscow have begun leveraging the city’s massive network of cameras to keep track of residents during the pandemic lockdown.

In desperate times like these, the knee-jerk suspicion that we typically hold towards invasive technology wavers. We think that maybe, just this once, it might be okay to accept facial recognition surveillance — provided, of course, that we can slam the door on it when the world returns to normal. But can we? Once we open Pandora’s box, can we force it shut again?

In March, the New York Times reported that the White House had opened talks with major tech companies, including Facebook and Google, to assess whether using aggregated location data sourced from our mobile phones would facilitate better containment of the virus. Several lawmakers immediately pushed back on the idea; however, the discussion does force us to wonder: would we turn to more desperate measures, like facial surveillance? How much privacy would we sacrifice in exchange for the perception of greater control over the pandemic?

Understanding the Risk of America’s Surveillance Culture

I’ve been thinking about this idea ever since January, when an exposé published by the New York Times revealed that a startup called Clearview AI had quietly developed a facial recognition app capable of matching unknown subjects to their online images and profiles, and had promptly peddled it to over 600 law enforcement agencies without any public scrutiny or oversight. Clearview stands as a precursor: a budding example of what surveillance culture in America could look like if left unregulated. One quote in particular sticks in my head.

“I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy,” David Scalzo, the founder of a private equity firm that invests in Clearview, told the Times. “Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”

Scalzo’s offhand, almost dismissive tone strikes an odd, chilling contrast to the gravity of his statement. If facial recognition technology will lead to a surveillance-state dystopia, shouldn’t we at least try to slow its forward momentum? Shouldn’t we at least consider the dangers that a dystopia might pose, especially during times like these, when privacy-eroding technology feels like a viable weapon against a global pandemic?

I’m not the only one to ask these questions. Since January’s exposé, Clearview AI has faced no fewer than four lawsuits. The first castigated the company’s app as an “insidious encroachment” on civil liberties; the second took aim both at Clearview’s tool and at the IT products provider CDW for licensing the app for law enforcement use, alleging that “The [Chicago Police Department] […] gave approximately 30 [Crime Prevention and Information Center] officials full access to Clearview’s technology, effectively unleashing this vast, Orwellian surveillance tool on the citizens of Illinois.” The company was also recently sued in Virginia and Vermont.

All that said, it is worth noting that dozens of police departments across the country already use products with facial recognition capabilities. One report on the United States’ facial recognition market found that the industry is expected to grow from $3.2 billion in 2019 to $7.0 billion by 2024. The Washington Post further reports that the FBI alone has conducted over 390,000 facial-recognition searches across federal and local databases since 2011.

Unlike DNA evidence, facial recognition technology is usually cheap and quick to run, lending itself easily to everyday use. It stands to reason that as better technology becomes available, usage by public agencies will grow even more commonplace. We need to keep this slippery slope in mind. During a pandemic, we might welcome tools that allow us to track and slow the spread of disease and overlook the dangerous precedent they set in the long term.

Given all of this, it seems that we should, at the very least, avoid panic-prompted decisions to allow facial recognition and instead consider how to steer clear of the slippery slope the technology poses.

Are Bans Protection? Considering San Francisco

In the spring of 2019, San Francisco passed legislation that outright forbade government agencies from using tools capable of facial surveillance, although the ordinance was later amended to allow equipped devices when no viable alternative exists. The lawmakers behind the new ordinance stated their reasoning clearly, writing that “the propensity for facial recognition technology to endanger civil rights and civil liberties substantially outweighs its purported benefits.”

They have a point. Facial recognition software is notorious for its inaccuracy. One new federal study also found that people of color, women, older subjects, and children faced higher misidentification rates than white men.

“One false match can lead to missed flights, lengthy interrogations, tense police encounters, false arrests, or worse,” Jay Stanley, a senior policy analyst at the American Civil Liberties Union (ACLU), told the Washington Post. “But the technology’s flaws are only one concern. Face recognition technology — accurate or not — can enable undetectable, persistent, and suspicionless surveillance on an unprecedented scale.”
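Stanley’s point about scale is easy to underestimate, so here is a minimal back-of-the-envelope sketch in Python. The false-match rate below is a purely hypothetical figure chosen for illustration (real error rates vary widely by algorithm, image quality, and demographic group, as the federal study found); the search volume is the FBI figure reported by the Washington Post.

```python
# Back-of-the-envelope estimate of false matches at scale.
# ASSUMPTION: the 0.1% false-match rate is hypothetical, chosen only
# for illustration; real rates vary by algorithm, image quality,
# and demographic group.

fbi_searches = 390_000    # searches since 2011, per the Washington Post
false_match_rate = 0.001  # hypothetical 0.1% false-match rate

expected_false_matches = fbi_searches * false_match_rate
print(f"Expected false matches: {expected_false_matches:,.0f}")
# -> Expected false matches: 390
```

Even under that charitable assumption, hundreds of people could face the consequences Stanley describes; at the higher misidentification rates documented for women and people of color, the number only climbs.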

While it’s still too early to have a clear gauge on the ban’s efficacy, it is worth noting that the new legislation sparked a few significant and immediate changes to the city’s police department. In December, Wired reported that “When the surveillance law and facial recognition ban were proposed in late January, San Francisco police officials told Ars Technica that the department stopped testing facial recognition in 2017. The department didn’t publicly mention that it had contracted with DataWorks that same year to maintain a mug shot database and facial recognition software as well as a facial recognition server through summer 2020.”

The department scrambled to dismantle the software after the ban, but its secretive approach remains problematic; the very fact that the San Francisco police department was able to acquire and deploy facial recognition technology without public oversight is troubling. Still, the city’s current restrictions act as a stumbling block, limiting the acceptance of surveillance culture as a normal part of everyday life and preventing us from automatically reaching for it as a solution in times of panic.

A stumbling block, however, is not an outright barricade. Currently, San Francisco is under a shelter-in-place mandate; as of April 6, the city had reported 583 confirmed cases and nine deaths. If the situation worsens, could officials suggest that the city make an exception and use facial recognition tracking to flatten the curve, just this once? It’s a long-shot hypothetical, but it does make us wonder what could happen if we let circumstances talk us into surveillance culture, one small step at a time.

Bans can only do so much. While the San Francisco ordinance proves that Scalzo’s claim, “Laws have to determine what’s legal, but you can’t ban technology,” isn’t strictly correct, the sentiment behind it remains. Circumstances can compel us to consider privacy-eroding tech even when those explorations lead us down a path to dystopia.

So, in a way, Scalzo is right: the proliferation of facial recognition technology is inevitable. But that doesn’t mean we should give up on bans and protective measures. Instead, we should pursue them further and slow the momentum as much as we can, if only to give ourselves time to establish regulations, rules, and protections. We can’t give in to short-term thinking; we can’t start down the slippery slope towards surveillance culture without considering the potential consequences. Otherwise, we may well find that the “cure” facial recognition promises does far more harm, in the long run, than the short-term panic that drove us to accept it.

Originally published on Hackernoon.com