Live Facial Recognition Treats Everyone as a Potential Suspect, Undermining Privacy and Eroding Presumed Innocence

Madeleine Stone
  • by CIVICUS
  • Wednesday, June 18, 2025
  • Inter Press Service

Jun 18 (IPS) -
 
CIVICUS discusses the dangers of live facial recognition technology with Madeleine Stone, Senior Advocacy Officer at Big Brother Watch, a civil society organisation that campaigns against mass surveillance and for digital rights in the UK.

The rapid expansion of live facial recognition technology across the UK raises urgent questions about civil liberties and democratic freedoms. The Metropolitan Police have begun permanently installing live facial recognition cameras in South London, while the government has launched a £20 million (approx. US$27 million) tender to expand its deployment nationwide. Civil society warns that this technology presents serious risks, including privacy infringements, misidentification and function creep. As authorities increasingly use these systems at public gatherings and demonstrations, concerns grow about their potential to restrict civic freedoms.

How does facial recognition technology work?

Facial recognition technology analyses an image of a person’s face to create a biometric map by measuring distances between facial features, creating a unique pattern as distinctive as a fingerprint. This biometric data is converted into code for matching against other facial images.

It has two main applications. One-to-one matching compares someone’s face to a single image – like an ID photo – to confirm identity. More concerning is one-to-many matching, where facial data is scanned against larger databases. This form is commonly used by law enforcement, intelligence agencies and private companies for surveillance.
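In software terms, the two modes differ only in how many stored templates a probe face is compared against. The minimal sketch below (in Python; the 128-dimensional embeddings, the cosine-similarity measure, the 0.6 threshold and the random stand-in vectors are illustrative assumptions, standing in for the output of a real face-encoding model) shows one-to-one verification and one-to-many identification as the same comparison applied to a single template or to a whole watchlist.

```python
import numpy as np

EMBEDDING_DIM = 128    # size of the biometric vector; illustrative
MATCH_THRESHOLD = 0.6  # similarity cut-off; real systems tune this per algorithm

def cosine_similarity(a, b):
    """Similarity between two biometric vectors (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, reference):
    """One-to-one matching: does the probe face match a single ID photo?"""
    return cosine_similarity(probe, reference) >= MATCH_THRESHOLD

def identify(probe, watchlist):
    """One-to-many matching: scan the probe against a whole database and
    return the best-scoring identity above the threshold, if any."""
    best_name, best_score = None, MATCH_THRESHOLD
    for name, template in watchlist.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Stand-in embeddings; a real system would derive these from face images.
rng = np.random.default_rng(0)
alice = rng.normal(size=EMBEDDING_DIM)
watchlist = {"alice": alice, "bob": rng.normal(size=EMBEDDING_DIM)}

probe = alice + rng.normal(scale=0.05, size=EMBEDDING_DIM)  # noisy camera capture
print(verify(probe, alice))        # True: identity confirmed against one photo
print(identify(probe, watchlist))  # "alice": best match found in the database
```

Live facial recognition is essentially the one-to-many mode run continuously on a video feed, which is why its reach is so much greater than a single identity check.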

How is it used in the UK?

The technology operates in three distinct ways in the UK. Eight police forces in England and Wales currently deploy it, with many others considering adoption, and retailers use it to scan customers against internal watchlists.

The most controversial is live facial recognition – mass surveillance in real time. Police use CCTV cameras with facial recognition software to scan everyone passing by, mapping faces and instantly comparing them to watchlists of wanted people for immediate interception.

Retrospective facial recognition works differently, taking still images from crime scenes or social media and running them against existing police databases. This happens behind closed doors as part of broader investigations.

And there’s a third type: operator-initiated recognition, where officers use a phone app to take a photo of someone they are speaking to on the street, which is checked against a police database of custody images in real time. While it doesn’t involve continuous surveillance like live facial recognition, it still happens in the moment and raises significant concerns about the police’s power to perform biometric identity checks at will.

What makes live facial recognition particularly dangerous?

It fundamentally violates democratic principles, because it conducts mass identity checks on everyone in real time, regardless of suspicion. It is the equivalent of police stopping every passerby to check their DNA or fingerprints. It gives police extraordinary power to identify and track people without their knowledge or consent.

The principle at the heart of any free society is that suspicion should come before surveillance, but this technology completely reverses this logic. Instead of investigating after reasonable cause, it treats everyone as a potential suspect, undermining privacy and eroding presumed innocence.

The threat to civic freedoms is severe. Anonymity in crowds is central to protest, because it makes you part of a collective rather than an isolated dissenter. Live facial recognition destroys this anonymity and creates a chilling effect: people become less likely to protest knowing they’ll be biometrically identified and tracked.

Despite the United Nations warning against using biometric surveillance at protests, UK police have deployed it at demonstrations against arms fairs, environmental protests at Formula One events and during King Charles’s coronation. Similar tactics are being introduced at Pride events in Hungary and were used to track people attending opposition leader Alexei Navalny’s funeral in Russia. That these authoritarian methods now appear in the UK, supposedly a rights-respecting democracy, is deeply concerning.

What about accuracy and bias?

The technology is fundamentally discriminatory. While algorithm details remain commercially confidential, independent studies show significantly lower accuracy for women and people of colour, as algorithms have largely been trained on white male faces. Despite improvements in recent years, the performance of facial recognition algorithms remains worse for women of colour.

This bias compounds existing police discrimination. Independent reports have found that UK policing already exhibits systemic racist, misogynistic and homophobic biases. Black communities face disproportionate criminalisation, and biased technology deepens these inequalities. Live facial recognition can lead to discriminatory outcomes even if the algorithm were perfectly accurate: if police watchlists disproportionately feature people of colour, the system will repeatedly flag them, reinforcing over-policing patterns. This feedback loop validates bias through the constant surveillance of the same communities.
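A toy calculation with invented numbers makes this point concrete: give two equally sized groups the same footfall past the cameras, assume a matcher with zero errors, and make 80 per cent of watchlist entries come from one group. The flag rate per group then mirrors the watchlist composition, not any underlying offending rate. The sketch below is purely hypothetical; all figures are made up for illustration.

```python
# Toy model of a perfectly accurate matcher fed a skewed watchlist.
# All figures are invented for illustration.
population = {"group_a": 500_000, "group_b": 500_000}  # equal footfall past cameras
watchlist  = {"group_a": 200,     "group_b": 800}      # 80% of entries from group_b

for group, passers_by in population.items():
    # With no false positives, the people flagged are exactly the
    # watchlisted members of each group who walk past the camera.
    flag_rate = watchlist[group] / passers_by
    print(f"{group}: flagged at {flag_rate:.4%} of passers-by")

# group_b is flagged four times as often as group_a,
# purely because of who was put on the watchlist.
```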

Deployment locations reveal targeting patterns. London police use mobile units in poorer areas with higher populations of people of colour. One of the earliest deployments was during Notting Hill Carnival, London’s biggest celebration of Afro-Caribbean culture – a decision that raised serious targeting concerns.

Police claims of improving reliability ignore this systemic context. Without confronting discrimination in policing, facial recognition reinforces the injustices it claims to address.

What legal oversight exists?

None. Without a written constitution, UK policing powers have evolved through common law. Police therefore argue that vague common law powers to prevent crime cover their use of facial recognition, falsely claiming it enhances public safety.

Parliamentary committees have expressed serious concerns about this legal vacuum. Currently, each police force creates its own rules, deciding deployment locations, watchlist criteria and safeguards. They even use different algorithms with varying accuracy and bias levels. For such intrusive technology, this patchwork approach is unacceptable.

A decade after police trials began in 2015, successive governments have failed to introduce regulation. The new Labour government is considering regulations, but we don’t know whether this means comprehensive legislation or mere codes of practice.

Our position is clear: this technology shouldn’t be used at all. However, if a government believes there is a case for the use of this technology in policing, there must be primary legislation in place that specifies usage parameters, safeguards and accountability mechanisms.

The contrast with Europe is stark. While imperfect, the European Union’s (EU) AI Act introduces strong safeguards on facial recognition and remote biometric identification. The EU is miles ahead of the UK. If the UK is going to legislate, it should take inspiration from the EU’s AI Act and ensure that prior judicial authorisation is required for the use of this technology, that only people suspected of serious crimes are placed on watchlists and that it is never used as evidence in court.

How are you responding?

Our strategy combines parliamentary engagement, public advocacy and legal action.

Politically, we work across party lines. In 2023, we coordinated a cross-party statement signed by 65 members of parliament (MPs) and backed by dozens of human rights groups, calling for a halt due to racial bias, legal gaps and privacy threats.

On the ground, we attend deployments in Cardiff and London to observe usage and offer legal support to people who are wrongly stopped. Reality differs sharply from police claims: over half of those stopped aren’t wanted for arrest. We’ve documented shocking cases, including a pregnant woman pushed against a shopfront and arrested for allegedly missing probation, and a schoolboy misidentified by the system. The most disturbing cases involve young Black people, demonstrating embedded racial bias and the dangers of trusting flawed technology.

We’re also supporting a legal challenge submitted by Shaun Thompson, a volunteer youth worker wrongly flagged by this technology. Police officers surrounded him and, although he explained the mistake, held him for 30 minutes and attempted to take his fingerprints when he couldn’t produce ID. Our director filmed the incident and is a co-claimant in a case against the Metropolitan Police, arguing that live facial recognition violates human rights law.

Public support is crucial. You can follow us online, join our supporters’ scheme or donate monthly. UK residents should write to MPs and the Policing Minister. Politicians need to hear all of our voices, not just those of police forces advocating for more surveillance powers.




© Inter Press Service (2025) — All Rights Reserved. Original source: Inter Press Service
