"Facial recognition risks being weaponized by law enforcement against marginalised communities around the world. From New Delhi to New York, this invasive technology turns our identities against us and undermines human rights."
– Matt Mahmoudi, AI and Human Rights Researcher at Amnesty International
Facial surveillance systems identify and track individuals based on their facial characteristics, using cameras and software to capture and analyse images of people's faces for purposes ranging from security and law enforcement to marketing. Facial recognition is an AI-driven technology that analyses biometric data to perform identification, verification, authentication, or categorisation tasks, relying on the computerised assessment of digital images of individuals' faces by sophisticated algorithms and software programs.
The technology behind facial surveillance systems uses algorithms that are trained to recognise certain facial features, such as the distance between the eyes, the shape of the nose and mouth, and the contours of the face. These algorithms can then be used to compare images of faces captured by cameras to a database of known faces to identify individuals. For example, the Metropolitan Police in London, UK, has deployed facial surveillance technology to scan crowds during large public events such as protests and football matches.
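The matching step described above can be sketched in a few lines of code. The sketch below is purely illustrative: it assumes a face has already been reduced to a short numeric feature vector, whereas real systems derive much longer embeddings with a neural network, and the names, vectors, and 0.95 threshold are all invented for the example.

```python
import math

# Hypothetical "feature vectors" standing in for measurements such as
# the distance between the eyes or the contours of the face.
known_faces = {
    "person_a": [0.12, 0.85, 0.33, 0.47],
    "person_b": [0.91, 0.10, 0.64, 0.22],
}

def cosine_similarity(a, b):
    """Similarity of two feature vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def identify(probe, database, threshold=0.95):
    """Return the best-matching identity, or None if nothing clears the
    threshold. A threshold set too low yields false positives, the kind
    of misidentification discussed later in this article."""
    best_name, best_score = None, -1.0
    for name, template in database.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None
```

A captured face whose vector lies close to a stored template is reported as a match; everything else is rejected. This is also where demographic error rates enter: if the underlying model represents some groups of faces less accurately, their similarity scores are less reliable.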
Facial surveillance systems have become increasingly common in recent years, with many governments and private companies using them for various purposes. Some of the applications of facial surveillance systems include monitoring public spaces for criminal activity, verifying identities at border crossings and airports, and tracking employee attendance in the workplace. The United States Customs and Border Protection agency, for instance, uses facial recognition technology to screen passengers arriving in the country.
In addition to security and law enforcement, facial surveillance systems are also being used in marketing and advertising. Retailers use these systems to track customer behaviour and personalise marketing efforts. For example, a retail store can use facial recognition technology to identify a customer who walks into the store and retrieve their purchase history to make personalised product recommendations.
Despite the potential benefits of facial surveillance systems, they have been met with intense criticism over privacy concerns and the potential for abuse. One of the most significant concerns is that facial surveillance systems can be used to monitor innocent individuals, violating their privacy and civil liberties.
Moreover, the technology is prone to errors and bias, particularly against certain demographic groups, such as people of colour and women. The legality and ethical implications of AI facial recognition technology are often unclear, especially because a large portion of the images used to train it are collected without individuals' permission. For instance, in 2016, the University of Washington in Seattle gathered 3.3 million photos of faces from Flickr without obtaining consent and uploaded them to a database. There are currently no clear legal safeguards on the gathering of facial recognition training data, although Facebook recently paid a $650 million settlement for harvesting facial data (AI Facial Recognition Technology Overview, 2021).
What are the biggest privacy concerns stemming from facial recognition technology?
a. Improper data storage
Facial recognition technology relies on vast amounts of personal data, including facial images and biometric data. This data needs to be stored securely to prevent unauthorized access, misuse, or theft. However, there have been several instances where facial recognition databases have been hacked or leaked, leading to the exposure of sensitive personal data.
Improper data storage can also lead to other privacy risks. For example, facial recognition databases may include information about individuals who have not given their consent for their images to be used, or whose images have been obtained without their knowledge. In some cases, facial recognition technology has been used to identify individuals participating in lawful protests or other activities, raising concerns about freedom of expression and the right to privacy. Elaborating on this point in the context of Clearview AI, a facial recognition business that sells access to its database to, among other institutions, US law enforcement agencies, US Senator Edward Markey stated:
“If your password gets breached, you can change your password. If your credit card number gets breached, you can cancel your card. But you can’t change biometric information like your facial characteristics.”
b. Misuse of data
Facial recognition technology has been associated with several negative consequences, including false arrests, discrimination, and targeted persecution of minority groups. False positives, where an individual is wrongly identified, can result in false arrests and wrongful convictions, as happened in the case of Nijeer Parks who was misidentified and wrongfully arrested for shoplifting in the USA. Such incidents can have serious consequences for individuals, including time spent in jail and expenses incurred for legal defence. Furthermore, the use of biometric information for classification purposes can lead to profiling and discrimination, particularly if there are conscious or unconscious biases within the justice system.
Additionally, authoritarian regimes have used facial recognition technology to target specific minority groups. In China, for example, the government used Huawei’s facial recognition software to identify Uighurs, a Muslim minority group. The system allegedly could identify each person’s age, sex, and ethnicity and trigger an alarm if a member of the Uighur community was detected, leading to their persecution. Huawei reportedly provided the necessary equipment for the system, including servers and cameras. These examples illustrate the potential negative impacts of facial recognition technology and underscore the need for regulation and safeguards to ensure that its use is transparent and ethical.
c. Infringement on freedom of speech and association
The potential use of facial recognition technology as a biometric mass surveillance tool poses a significant threat to freedom of speech and association. In particular, the use of such technology for surveillance purposes can have a chilling effect on activities such as political activism, as individuals may feel inhibited or afraid to express their opinions in public. This is especially true in countries where criticism of the government is not tolerated, and the technology may be used to identify and arrest those who oppose the regime.
Additionally, facial recognition technology erodes the right to anonymity that individuals expect even in public spaces. When people's faces are linked to facts, actions, or data about them available online, it becomes easier to track their associations and activities, which can be used to limit their freedom of expression and association.
As Joshua Franco, Deputy Director of Amnesty Tech at Amnesty International, pointed out in a Vice article, the fear and uncertainty generated by surveillance can inhibit activity more than any action taken by law enforcement agencies. If people feel they are being watched, they may self-censor or avoid public spaces altogether, chilling freedom of expression and association. It is therefore essential that facial recognition technology is deployed only with appropriate safeguards to protect individual rights and freedoms.
Case Study: Palestine and The Red Wolf
Hebron, a city in the West Bank, was divided into two zones – H1 and H2 – under a 1997 agreement between Israeli authorities and the Palestine Liberation Organization. H1, which constitutes 80% of the city, is governed by the Palestinian authorities, while H2, which includes the Old City, is under full Israeli control. H2 is home to around 33,000 Palestinians, as well as approximately 800 Israeli settlers who reside illegally in at least seven settlement enclaves.
Palestinian residents of H2 are subjected to strict movement restrictions, with certain roads being exclusively reserved for Israeli settlers, and an intricate network of military checkpoints and other obstacles severely impeding their daily lives. On the other hand, Israeli settlers in Hebron enjoy unrestricted movement, as they are not required to use checkpoints and travel on different roads than Palestinians. These measures have a significant impact on the lives and livelihoods of Palestinians in H2, creating a deeply unequal and unjust environment.
Amnesty International has published a report titled "Automated Apartheid," in which it discloses the existence of a previously unreported Israeli military facial recognition system called Red Wolf, deployed at checkpoints in Hebron. Red Wolf forms part of the Israeli military's surveillance apparatus in the West Bank, scanning the faces of Palestinians passing through checkpoints in the city. It uses this data to determine whether an individual may pass a checkpoint, and automatically biometrically enrols any new face it scans. If no entry exists for an individual, they are denied passage. Red Wolf can also deny entry based on other information stored in Palestinian profiles, for example, if an individual is wanted for questioning or arrest (Amnesty International, 2023). Furthermore, Red Wolf appears to be linked to two other Israeli army surveillance systems, Wolf Pack and Blue Wolf.
Wolf Pack is a comprehensive database that contains a wealth of information on the Palestinian population residing in the occupied West Bank, such as their place of residence, familial connections, and whether they are wanted for questioning by Israeli authorities. Whenever Red Wolf scans a new face, the system automatically adds it to the Wolf Pack database.
Blue Wolf, on the other hand, is a mobile application designed for Israeli military personnel that enables them to access and retrieve information stored in the Wolf Pack database in real-time, using their smartphones or tablets. This system raises serious concerns about the privacy and human rights of Palestinians living under Israeli occupation and underscores the need for greater scrutiny of facial recognition technologies in conflict zones.
The Israeli authorities’ implementation of surveillance and facial recognition technologies is not limited to checkpoints. In East Jerusalem’s Old City, a network of thousands of CCTV cameras has been deployed to monitor and survey residents. Since 2017, the Israeli government has been upgrading this system to improve its facial recognition capabilities and expand its surveillance powers.
This growing network of CCTV cameras is linked to a comprehensive facial recognition system known as Mabat 2000, which enables Israeli authorities to identify protesters and monitor the daily activities of Palestinian residents. The reach of this surveillance system extends even to the interior of Palestinian homes, as cameras are sometimes pointed towards their windows. The proliferation of these technologies raises serious concerns about privacy and human rights violations, particularly for Palestinians living under Israeli occupation.
The emergence of AI facial recognition technology has had a significant impact on society, with both beneficial and detrimental effects on human rights. Responsibility for its ethical implementation falls on governments, private and public entities, educational institutions, and the international community, and its use must be guided by transparency, accountability, and the enhancement of human life.

The deployment of advanced technologies like facial recognition in conflict zones, particularly in Palestine, demands greater scrutiny and accountability from the international community. The use of Israeli military systems such as Red Wolf and surveillance technologies like Mabat 2000 at checkpoints in Hebron and East Jerusalem raises serious concerns about privacy and human rights violations.

As the world becomes more interconnected, it is imperative to address the ethical implications of facial recognition technology. It is a shared responsibility to ensure that it is used with transparency, accountability, and respect for human rights. Only by implementing measures to mitigate its risks can we minimise the potential for misuse and work towards a just, safe, and equitable future in which AI-powered technologies serve to enhance and protect human life rather than undermine it.
About the author …
Junaid Suhais is a diligent academic writer whose research delves into the intersection of Artificial Intelligence and International Relations, specifically in diplomacy, foreign policy, and cyber security. His aim is to generate insightful and thought-provoking contributions that amplify our comprehension of these crucial areas.
Aside from his academic endeavours, Junaid Suhais is an adept photographer who captures the world's magnificence and intricacy in a unique and innovative manner. He approaches his writing and photography with modesty and reverence, acknowledging that there is always room for growth and discovery.
Furthermore, Junaid Suhais is an independent journalist who covers diverse topics related to international affairs. He is currently pursuing his academic interests at the prestigious MMAJ Academy of International Studies, Jamia Millia Islamia.