The Problem of Facial (Mis)Recognition

In the past five years, facial recognition systems have frequently misrecognised the faces of minority groups. For instance, in an ACLU experiment, Amazon's Rekognition API falsely matched 28 members of the US Congress with criminal mugshots, and 11 of those misidentified were people of colour[1]. It does not stop there: multiple instances have been reported around the world of facial recognition systems failing even to recognize women[2]. This trend of facial misrecognition, albeit socially alarming, is technically unsurprising, since most facial recognition algorithms are trained on datasets composed predominantly of images of white men[3]. This problem lies at the heart of campaigns around the world calling for bans on the use of facial recognition technology, with notable successes in Boston[4] and San Francisco[5]. However noble the cause may be, it remains crucial to understand the full impact of such a call.
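To make the stakes concrete: the ACLU ran its test at Rekognition's default 80% match threshold, a setting Amazon itself has said is too low for law enforcement use. Below is a minimal sketch of the kind of call involved, using AWS's boto3 SDK; the collection name and image file are placeholders, not the ACLU's actual setup.

```python
# A hedged sketch of a Rekognition face search via boto3. The collection
# and probe image are placeholders; only the API calls themselves are real.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("probe_photo.jpg", "rb") as f:
    response = rekognition.search_faces_by_image(
        CollectionId="mugshot-collection",  # hypothetical mugshot index
        Image={"Bytes": f.read()},
        FaceMatchThreshold=80,  # the default; AWS recommends 99 for policing
        MaxFaces=5,
    )

# Every candidate above the threshold is reported as a "match", which is
# how a low threshold can surface false matches like the ACLU's 28.
for match in response["FaceMatches"]:
    print(match["Face"]["FaceId"], match["Similarity"])
```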

Proponents of the technology, like Benji Hutchinson, vice president of federal operations for NEC Corporation of America, believe there is no need for a complete ban on the technology, nor for over-legislation[6]. Hutchinson contends that the technology has been wildly successful in stopping terror attacks and taking criminals off the street. The rationale is that the character of security is changing[7]: crimes happen faster, and there is seldom any build-up to the criminal act[8]. Law enforcement agencies must now respond to these acts much faster, and facial recognition helps them identify their targets within seconds. This compression of time is accompanied by stealthier attackers; numerous reports attest that it is still difficult to trace a criminal's whereabouts immediately before and after a crime. Most notable is the case of Abdelhamid Abaaoud – the alleged mastermind of the 2015 Paris attacks – who hid near the Bataclan for an hour prior to the attack but was never reported to the police[9]. Only after investigations began did an eyewitness come forward, claiming to have seen Abaaoud near the Bataclan three times before and after the attack.

Dealing with such situations demands a constant cycle of vigilance and fast response without succumbing to information overload. In a world without facial recognition, however, we ask law enforcement agencies to rely on manual methods of incident response, which are constrained by time and by criminals' stealth. A successful argument for outlawing facial recognition systems ought to demonstrate both the existence of an irreversible harm without remedy and the availability of viable alternatives. If we assume that governments act as if they are utilitarian, persuading them to outlaw facial recognition systems seems unlikely, since these systems provide government agencies with a fast and reliable means of assessing crimes. If, however, governments are not completely utilitarian, the absence of a viable alternative may lead them to shut out any criticism of their use of facial recognition systems, even calls for regulation. Osoba and Yeung, in a commentary for The Hill, contend that public outcry to ban facial recognition technology crowds out chances to negotiate safer policy outcomes[10] and steers public discourse away from how to achieve them.

Current conversations about facial recognition systems place an unfair burden on every new development to mitigate all past instances of misrecognition and injustice. The merit of an updated algorithm ought to be judged by how it contributes to the overall function of the technology; instead, it is currently judged solely by how it contributes to fixing facial misrecognition. This means a company can deploy new algorithms with some of the best face-spoof-detection accuracy in the field and still receive little appreciation[11]. Even though face-spoof detection is just as important to many deep-learning teams worldwide, their work appears to be valued less. Fixing misrecognition is a gradual process. Many companies, Gfycat among them, are already building more representative datasets while simultaneously researching other, equally serious security issues such as defending against adversarial attacks and face spoofing. The exclusive focus on correcting facial misrecognition has unfortunately relegated such research to the status of extraneous development.

Moreover, such narratives, albeit noble in cause, give market leaders like IBM a disincentive to further develop not only the technology but also the ethics that should govern its use. In the absence of viable alternatives, and with research leaders like IBM exiting the field[12], the question lingers: do we begrudgingly accept facial misrecognition as a necessary evil? We are already aware of facial misrecognition's pervasive nature. Yet to channel that awareness methodically into policies for a safer technology, we need universal standards of regulation that prioritize transparency and efficiency for both state and private entities.

The current state of data privacy regulation worldwide is murky. The EU treats privacy as a basic right of every EU citizen, protecting citizens' sensitive data and empowering them to control it. The General Data Protection Regulation (GDPR), which took effect in 2018, seeks privacy by default and by design, imposing stricter controls over cross-border data transfers and granting EU citizens the right to be forgotten. US data regulation, by contrast, is sector-specific: individual sectors draft their own privacy rules in accordance with state-level legislation[13]. For instance, HIPAA (the Health Insurance Portability and Accountability Act) regulates health data and the GLBA (Gramm-Leach-Bliley Act) regulates financial data, with each regulatory body protecting different aspects of personal data. This means that some information protected by the GDPR may not be protected under US law. For example, if I were to sit in Turin, Italy and shop for denims from Jordache, my personal information and buying-pattern data would be protected under the GDPR; if I purchased the same item in Texas, I would risk having that data sold to the highest bidder.

A global standard would solve this by outlining what kinds of facial recognition data private agencies may access and state law enforcement agencies may request recognition scans against. This proposition imagines a world where states cooperate and discuss how, when, and where facial recognition scans are run and data is collected. Such discourse could evaluate anything from the modes of encryption and decryption of each scan to broader questions of how cyber laws can be strengthened to safeguard such data. It becomes fundamentally important to understand which data are of national importance and which are of universal importance.
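As one concrete illustration of the encryption question, here is a minimal sketch of encrypting a facial scan record at rest, using the Python cryptography package's Fernet construction. The record contents and key handling are placeholders; deciding what must be encrypted, and who may hold the keys, is precisely what a global standard would have to specify.

```python
# Illustrative encryption-at-rest for a facial scan record. Fernet is
# real (authenticated symmetric encryption); the record is a placeholder.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, kept in a managed key store
fernet = Fernet(key)

scan_record = b'{"face_id": "placeholder", "embedding": "..."}'
token = fernet.encrypt(scan_record)   # ciphertext safe to store or transfer
restored = fernet.decrypt(token)      # readable only with the key
assert restored == scan_record
```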

An extension of enhanced transparency is addressing the differing accuracies of recognition scans. Dr. Learned-Miller of the University of Massachusetts has suggested that every organization working with face recognition – from Facebook to Timesense to the FBI – should disclose statistics on the accuracy of its systems to the public[14]. Beyond declaring the general accuracy of their algorithms, enhanced transparency policies should mandate that all organizations disclose their accuracy for each demographic group. This extension has the capacity to create a culture of accountability within both private facial recognition companies and state agencies.
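What would such a per-demographic disclosure look like in practice? A minimal sketch follows, assuming a hypothetical evaluation log with one row per verification attempt; the column names and numbers are invented for illustration.

```python
# Per-demographic accuracy report over a hypothetical evaluation log.
import pandas as pd

# Each row: the subject's demographic group, whether the compared pair
# truly matched, and whether the system declared a match.
log = pd.DataFrame({
    "group":     ["white", "white", "asian", "asian", "black", "black"],
    "actual":    [True, False, True, False, True, False],
    "predicted": [True, False, True, True, False, False],
})

report = log.groupby("group").apply(lambda g: pd.Series({
    "accuracy": (g["actual"] == g["predicted"]).mean(),
    # Non-matches the system wrongly accepted (false matches).
    "false_match_rate": g.loc[~g["actual"], "predicted"].mean(),
    # True matches the system wrongly rejected (false non-matches).
    "false_non_match_rate": (~g.loc[g["actual"], "predicted"]).mean(),
}))
print(report)  # the table an organization could be required to publish
```

Splitting error rates this way matters because a single headline accuracy figure can hide a false match rate that is many times higher for one group than for another.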

This is nothing new: the electronics and hardware industries have long mandated that components like capacitors ship with datasheets describing their limitations and operating parameters, to ensure accountability and proper use. For facial recognition systems, we can take inspiration from Gfycat and Modiface. When researchers at Gfycat realized their facial recognition often failed to recognize Asian faces, the company retrained its algorithms around an 'Asian detector': when the system judges a face likely to be Asian, it switches to a more sensitive mode, applying a stricter threshold before declaring a match. The company publicly claims that its system is 98% accurate at identifying white people and 93% accurate with Asian faces[15]. Such public declarations, according to CEO Richard Rabbat, help zero in on the problem with the aim of substantially reducing such biases.
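In code, group-conditional thresholding of this kind can be sketched in a few lines. The threshold values and names below are invented for illustration; Gfycat has not published its implementation.

```python
# Hypothetical group-conditional matching: apply a stricter similarity
# bar for groups the model is known to confuse more often.
DEFAULT_THRESHOLD = 0.80  # placeholder similarity bar for most faces
STRICT_THRESHOLD = 0.90   # placeholder stricter bar for sensitive mode

def is_match(similarity: float, use_strict_mode: bool) -> bool:
    """Declare a match only if similarity clears the applicable bar."""
    bar = STRICT_THRESHOLD if use_strict_mode else DEFAULT_THRESHOLD
    return similarity >= bar

# A 0.85 similarity passes the default bar but not the strict one:
print(is_match(0.85, use_strict_mode=False))  # True
print(is_match(0.85, use_strict_mode=True))   # False
```

The trade-off is explicit: a stricter bar lowers false matches for the affected group at the cost of more false non-matches, which is exactly why the accompanying accuracy disclosures matter.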

Since the available image datasets consist predominantly of white males, Dr. Learned-Miller suggests that it is imperative to guide these systems by giving them an awareness of the patterns of physical variation among humans, such as skin tone and facial structure[16]. Facial recognition algorithms are all bounded by parameters set by humans. If we help a system learn these differences through more training on larger, representative datasets, the technology will improve. Modiface, for instance, enlisted employees, friends, and family to create a unique dataset of 5,000 images for each ethnic group – Middle Eastern, Hispanic, Asian, and others – to train its algorithms to be more representative and sharper at catching differences between ethnicities, such as the boundaries of the eyes[17].
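Short of collecting new images, a skewed dataset can also be rebalanced at training time. Here is a hedged sketch using PyTorch's WeightedRandomSampler; the group labels and their proportions are invented for illustration.

```python
# Rebalancing a skewed face dataset so under-represented groups are
# sampled as often as over-represented ones during training.
from collections import Counter

import torch
from torch.utils.data import WeightedRandomSampler

# One demographic label per training image (toy, heavily skewed split).
group_labels = ["white"] * 800 + ["asian"] * 120 + ["black"] * 80

# Weight each image inversely to its group's frequency.
counts = Counter(group_labels)
weights = torch.tensor([1.0 / counts[g] for g in group_labels])

# Draw one epoch's worth of samples; each group is now equally likely.
sampler = WeightedRandomSampler(weights, num_samples=len(group_labels),
                                replacement=True)
# The sampler would then drive a DataLoader, e.g.:
# loader = DataLoader(face_dataset, batch_size=32, sampler=sampler)
```

Resampling is a complement to, not a substitute for, representative data collection: it reweights the images you have but cannot add variation you never captured.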

In the current age, safeguarding both the public and the technology is an enormous yet crucial responsibility. In a world where market leaders like IBM grow fearful of continuing their research, some entity must still provide the means of facial recognition to governments, lest authorities resort to secretive, unaccountable, and more oppressive means of intelligence. A universal standard of transparency would ensure that representative algorithms become the bedrock distinguishing good AI from bad AI, and that makes all the difference.


[1] "Amazon's Face Recognition Falsely Matched 28 Members of Congress With Mugshots." American Civil Liberties Union. June 10, 2020. Accessed October 11, 2020. https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28.

[2] Ibid.

[3] Lohr, Steve. “Facial Recognition Is Accurate, If You’re a White Guy.” The New York Times. February 09, 2018. Accessed October 17, 2020. https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html.

[4] Jarmanning, Ally. "Boston Bans Use Of Facial Recognition Technology. It's The 2nd-Largest City To Do So." WBUR News. June 24, 2020. Accessed October 17, 2020. https://www.wbur.org/news/2020/06/23/boston-facial-recognition-ban.

[5] Lee, Dave. “San Francisco Is First US City to Ban Facial Recognition.” BBC News. May 14, 2019. Accessed October 17, 2020. https://www.bbc.com/news/technology-48276660.

[6] Brandom, Russell. “How Should We Regulate Facial Recognition?” The Verge. August 29, 2018. Accessed October 17, 2020. https://www.theverge.com/2018/8/29/17792976/facial-recognition-regulation-rules.

[7] Kaldor, Mary. New and Old Wars. Cambridge: Polity Press, 2012.

[8] Brown, Zachery Tyson. “Unmasking War’s Changing Character.” Modern War Institute. March 12, 2019. Accessed October 17, 2020. https://mwi.usma.edu/unmasking-wars-changing-character/.

[9] Brisard, Jean-Charles. “The Paris Attacks and the Evolving Islamic State Threat to France.” Combating Terrorism Center at West Point. November 16, 2017. Accessed October 17, 2020. https://ctc.usma.edu/the-paris-attacks-and-the-evolving-islamic-state-threat-to-france/.

[10] Osoba, Osonde A., and Douglas C. Yeung. "Bans on Facial Recognition Are Naïve – Hold Law Enforcement Accountable for Its Abuse." The Hill. June 17, 2020. Accessed October 11, 2020. https://thehill.com/opinion/technology/503070-bans-on-facial-recognition-are-naive-hold-law-enforcement-accountable-for.

[11] Simonite, Tom. “The Best Algorithms Still Struggle to Recognize Black Faces.” Wired. Accessed October 17, 2020. https://www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/.

[12] Peters, Jay. “IBM Will No Longer Offer, Develop, or Research Facial Recognition Technology.” The Verge. June 09, 2020. Accessed October 17, 2020. https://www.theverge.com/2020/6/8/21284683/ibm-no-longer-general-purpose-facial-recognition-analysis-software.

[13] GDPR News. "Differences between European Privacy Laws and American Privacy Laws." Compliance Junction. April 05, 2018. Accessed October 11, 2020. https://www.compliancejunction.com/differences-european-privacy-laws-american-privacy-laws/.

[14] Simonite, Tom. “How Coders Are Fighting Bias in Facial Recognition Software.” Wired. Accessed October 11, 2020. https://www.wired.com/story/how-coders-are-fighting-bias-in-facial-recognition-software/.

[15] Ibid.

[16] Ibid.

[17] Ibid.