Posted on: World Economic Forum
In April 2021, the European Commission (EC) released its much-awaited Artificial Intelligence Act, a comprehensive regulatory proposal that classifies AI applications into distinct risk categories. Among the identified high-risk applications, remote biometric systems, which include facial recognition technology (FRT), were singled out as particularly concerning. Their deployment, specifically in the field of law enforcement, may lead to human rights abuses in the absence of robust governance mechanisms.
Law enforcement and facial recognition technology
Across jurisdictions, policymakers are increasingly aware of both the opportunities and risks associated with law enforcement’s use of FRT. Here, facial recognition refers to the (possible) identification of a person by comparing a probe image (a photo or video still of a suspect or person of interest) against facial images of criminals and missing persons stored in one or more reference databases, in order to advance a police investigation.
On one hand, FRT has the potential to help resolve, stop and prevent crimes and bring offenders to justice. More specifically, it could support various types of investigations, such as identifying an ATM fraud suspect, searching for a terrorist in public spaces, fighting child abuse or finding missing persons. On the other hand, early experience shows that without proper oversight, FRT can result in human rights abuses and harm to citizens.
In this context, striking the right balance appears difficult. Policymakers may explore various options, ranging from an outright ban to the introduction of additional accountability mechanisms to limit the risk of wrongful arrests. In the US, cities such as San Francisco, Oakland and Boston have banned the use of FRT by public agencies, while the states of Washington, Virginia and Massachusetts have introduced legislation to regulate its use. In other regions, court decisions play an important role in shaping the policy agenda. The UK Court of Appeal, for instance, ruled that the South Wales Police’s deployment of FRT to identify wanted persons at certain events and public locations where crime was considered likely to occur was unlawful.
At a more global level, the United Nations Office of the High Commissioner for Human Rights’ (OHCHR) recent report on the right to privacy in the digital age recommends that governments halt the use of real-time remote biometric recognition in public spaces until they can show there are no significant issues with accuracy or discriminatory effects. It also states that these AI systems must comply with robust privacy and data protection standards.
Facial recognition technology requires a robust governing structure
Despite these important developments, most governments around the world recognize the potential of facial recognition systems for national safety and security but are still grappling with the challenges of regulating FRT, in part because crucial considerations have been largely overlooked. If we were to authorize the proportional use of FRT for legitimate policing aims, what oversight body should be in charge of assessing the compliance of law enforcement activities with human rights standards and handling potential complaints from citizens? How might we maintain a high level of performance in the FRT solutions deployed? What procurement processes should be in place for law enforcement agencies?
To address these challenges, the World Economic Forum – in partnership with the International Criminal Police Organization (INTERPOL), the Centre for Artificial Intelligence and Robotics of the United Nations Interregional Crime and Justice Research Institute (UNICRI) and the Netherlands police – has released a white paper that introduces a governance framework structured around two critical components:
- A set of principles for action that defines what constitutes responsible use of facial recognition for law enforcement investigations by covering all relevant policy considerations;
- A self-assessment questionnaire that details the requirements that law enforcement agencies must respect to ensure compliance with the principles for action.
As such, this initiative, led by a global and multistakeholder community, represents the most comprehensive policy response to date to the risks associated with the use of FRT in law enforcement investigations.
This project is now entering the pilot phase. During this period, we will test the governance framework to ensure its achievability, relevance, usability and completeness. We will update it based on the observed results.
The Netherlands police force is the first law enforcement agency to agree to participate in the testing process. Given the sensitivity of this use case, however, we strongly encourage other law enforcement agencies to join us and contribute to this global effort. We also invite policymakers, industry players, civil society representatives and academics engaged in the global policy debate about the governance of facial recognition technology to join our initiative.
Once this pilot phase is completed, we will update the principles and the self-assessment questionnaire, and the final version will be published.