Face Recognition AI: Friend or Foe?

As we spiral deeper into the digital age, the likes of virtual reality, cryptocurrency (Bitcoin), and machine learning are rapidly becoming our reality. No longer just a dream of the distant future, artificial intelligence is very much present in the here and now. Face Recognition Technology (FRT) has become the frontrunner of this digital race, using machine learning to identify human faces from a digital image or video. Technological advances, in conjunction with extensive facial datasets sourced from social media, have enabled FRT to improve significantly over the last few years; newly developed software such as Google’s FaceNet has even surpassed human performance, achieving a record accuracy of 99.63%. The potential uses for this technology are seemingly limitless, with security and law enforcement already implementing it, and the marketing industry not far behind.
However, as we journey into this new and uncharted territory, the cautionary tales depicted in the dystopian films we watched as children are starting to look less like the workings of an overactive imagination and more like the foretellings of a very possible future. The question of privacy has been brought to the table many times since FRT entered the scene, with debates over its ethics leading to widespread condemnation. While face recognition has come a long way since its inception, it is not yet infallible; like its human creators, it still leaves room for error in its algorithms, most notably racial bias. FRT has already begun to creep into our everyday lives, but with competing companies rushing to get their defective software onto this new market, the potential for this dream of the future to turn into a nightmare is very real.

Security

There’s no denying the benefits of FRT, especially its ability to revolutionise the security sector. Unlike traditional forms of identification, such as ID cards or passports, FRT not only confirms that valid identification is being used, but also verifies that the individual presenting it is authorised to be there. It is much harder to forge a face than a piece of paper, meaning FRT could potentially eradicate forged documents completely. Additionally, because FRT is non-invasive, i.e. it does not require an individual’s consent or even co-operation to identify them, it not only increases the efficiency of security checks but adds a level of discretion in identifying unauthorised and potentially dangerous individuals that was previously impossible. This makes the technology particularly beneficial in high-risk areas; it is already undergoing trials in Sydney’s airports and is set to feature at the 2020 Tokyo Olympics.

Privacy

As FRT gains popularity within mainstream media, new apps utilising the technology are constantly emerging. However, concerns over whether it is a breach of privacy for third-party companies to scrape personal data from social media platforms, without the subjects’ consent and for a profit, have led to public condemnation. Clearview AI, an app used by American police forces, is currently under fire for its use of scraped public data, even in cases as sympathetic as identifying child victims, and has received cease-and-desist letters from both Facebook and Twitter. It is obvious from this case that people hold their privacy in high regard and that, whatever the reasons may be, the invasiveness of FRT is a hard pill for many to swallow.
A further concern regarding FRT is the potential abuse of stored data after its initial use. FaceApp, a popular FRT mobile app that had the whole world, including your favourite rapper and Instagram influencer, posting selfies of what they would look like in 50 years’ time, claims to delete most of its uploaded photos within 48 hours. However, it recently made headlines after customers became aware that the app’s terms of use gave its Russian parent company a large amount of control over the photos uploaded. Though anything sinister is unlikely to come of this stored data, the idea of your face sitting on some Russian IT guy’s hard drive can make anyone feel uncomfortable. The company’s lack of transparency raises further questions about how ethical these FRT apps are in their rush to enter this new market.

Law Enforcement

Parallel to its impact on security, FRT has become a decisive tool in aiding law enforcement, succeeding where other forms of biometric identification have previously failed. Utilising mugshots, CCTV footage, and bodycams, police can identify and capture suspects through third-party algorithms. One such case was the capture of the Belgian terrorist dubbed the ‘man in the hat’, in which police resorted to FRT software after encountering difficulties analysing fingerprints and DNA traces. FRT has increased both the efficiency and reach of criminal investigations, with Interpol reporting more than 650 criminals, fugitives, persons of interest, and missing persons identified since the launch of its facial recognition system.

Racial Bias

As law enforcement becomes increasingly dependent on FRT, any problems with the technology could lead to dire consequences. One such problem is the evidence suggesting that many of the FRT algorithms employed by law enforcement are racially biased, with black people, and black women especially, disproportionately likely to be incorrectly identified. The ACLU’s testing of Amazon’s Rekognition FRT, which had previously been marketed to law enforcement agencies, found that people of colour were disproportionately represented among those falsely identified by the software. Similarly, the National Institute of Standards and Technology (NIST) has found on multiple occasions that FRT is most accurate at identifying white men and least accurate with black women.
Face recognition is not racially biased by nature; the technology is only as adept as its creator. In layman’s terms, for the algorithm behind FRT to successfully recognise and differentiate faces, it must first learn what a face is and how faces differ, which it does by practising on a provided set of data (facial images). Consequently, if the dataset used to teach the algorithm is racially and/or gender biased, e.g. if most of the images are of white men, then the algorithm, and ultimately the technology, will be biased in its accuracy. Nor is this solely a Western defect: similar cases of racial bias have been found in Asian countries, with white people identified less accurately than their Asian counterparts.
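To make that mechanism concrete, here is a minimal, hypothetical sketch in plain Python (a toy simulation, not any real FRT system) of how a skewed training set alone can produce skewed accuracy. It simulates face “embeddings” for two demographic groups, enrols group A with twenty photos per person but group B with only two, and then checks how often a simple nearest-match lookup identifies each group correctly; every name and number in it is an illustrative assumption.

```python
import random

random.seed(0)

DIM = 64               # size of the toy "face embedding"
NOISE = 1.5            # photo-to-photo variation around a person's true face
PEOPLE_PER_GROUP = 50  # identities simulated per demographic group

def photo(identity):
    """Simulate one photo: the person's true embedding plus random noise."""
    return [x + random.gauss(0, NOISE) for x in identity]

def centroid(photos):
    """Average a person's training photos into one enrolled template."""
    return [sum(col) / len(col) for col in zip(*photos)]

def nearest(query, gallery):
    """Return the enrolled name whose template is closest to the query."""
    def dist(name):
        return sum((q - t) ** 2 for q, t in zip(query, gallery[name]))
    return min(gallery, key=dist)

# True embeddings for each person; names like "A17" or "B3".
truth = {f"{g}{i}": [random.gauss(0, 1) for _ in range(DIM)]
         for g in "AB" for i in range(PEOPLE_PER_GROUP)}

# Biased enrolment: 20 training photos per person in group A, only 2 in group B.
gallery = {name: centroid([photo(emb)
                           for _ in range(20 if name.startswith("A") else 2)])
           for name, emb in truth.items()}

# Test with one fresh photo per person and count correct matches per group.
hits = {"A": 0, "B": 0}
for name, emb in truth.items():
    if nearest(photo(emb), gallery) == name:
        hits[name[0]] += 1

for g in "AB":
    print(f"group {g} accuracy: {hits[g] / PEOPLE_PER_GROUP:.0%}")
```

The matcher itself treats both groups identically; the only difference is the amount of training data behind each enrolled face, yet the under-represented group comes out with the lower accuracy. That is, in miniature, the dynamic the NIST findings describe at scale.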
However, unlike in ethnically homogeneous countries such as Japan and Korea, the diversity found in the West means that the effects of this racial bias are far more immediate and apparent. If the issue is not fixed before face recognition becomes a staple of law enforcement, we are likely to see a higher risk of false arrests, and even imprisonment, of innocent black civilians. Further ramifications could follow if FRT is admitted as evidence in a court of law while still displaying such inaccuracy and racial bias. In a country like America, where systemic racism is already a source of conflict, any new technology that perpetuates this racial divide could easily be interpreted as racial targeting, and would only lead to further anger and disharmony, with possibly deadly consequences.
While FRT may prove beneficial in aiding public safety, it is still far from foolproof, and it has not yet won the trust of the general public. It is already banned in San Francisco, and other cities such as Cambridge and Oakland are set to follow suit. For Western countries such as America, where issues of racism are often at the forefront of the media, continued racial bias within FRT would only further sour public opinion of the technology. Until FRT solves these problems and wins the public’s trust, it will continue to encounter heavy roadblocks on its journey into the future.