ID Tech Log


A Lead, Not an ID: The One Rule That Would Prevent Every Facial Recognition Wrongful Arrest

Facial recognition technology is one of the most powerful investigative tools in modern law enforcement. It has helped solve murders, identify trafficking victims and bring predators to justice, and crack cold cases that sat dormant for decades. In Detroit, it identified a masked gunman who killed three people at a house party—a case the police chief said would have gone unsolved without it—and the shooter is now serving three life sentences. In Maryland, it helped identify the 2018 Capital Gazette mass shooter. The LAPD used it in nearly 30,000 investigations over a decade, including gang cases where witnesses were too afraid to come forward. This technology does extraordinary good—when it is used correctly.

But in the span of a few months, two incidents on opposite sides of the Atlantic have reminded us what happens when officers skip the process and treat an algorithmic match as a positive identification instead of what it actually is: an investigative lead requiring follow-up.

These cases should concern every law enforcement professional in the country. Not because the technology failed. But because the humans using it did.

163 Days in Jail for a Crime 1,200 Miles Away

Last week, the story of Angela Lipps made national headlines. In 2025, someone used a fake military ID to steal tens of thousands of dollars from banks in Fargo, North Dakota. Fargo PD detectives asked the West Fargo PD to share surveillance footage that had been run through Clearview AI, which flagged Lipps as a potential suspect with similar facial features. Lipps is a 50-year-old Tennessee grandmother who had never set foot in North Dakota, but based on that "match" and a detective's comparison of facial features, body type, and hairstyle, officers concluded she was the perpetrator and obtained a warrant.

Nobody from the department ever contacted Lipps before her arrest to verify her whereabouts. In July 2025, U.S. Marshals arrested her at gunpoint in front of four children she was babysitting. She spent 163 days in jail—first in Tennessee, then extradited to North Dakota—before her public defender presented bank statements proving she was making transactions in Tennessee at the exact times the frauds occurred. Charges were dismissed on Christmas Eve. By then, she had lost her home, her car, her dog, and her health insurance.

Worse yet, Fargo's police chief later acknowledged that detectives "assumed wrongly that [West Fargo PD] had also sent in the surveillance photos with that photo ID," meaning the case may have been built on an AI match to the suspect's fake photo ID rather than independent verification against crime scene footage. Organizational breakdown aside, the department has since banned its detectives from using Clearview AI and implemented new reporting requirements.

Arrested 100 Miles Away in the UK

In January 2026, British police arrested Alvi Choudhury, a 26-year-old software engineer working from his parents' home in Southampton, for a burglary that occurred 100 miles away. The match came from Cognitec, a German facial recognition vendor integrated into the UK's national police infrastructure. The suspect in the CCTV footage was visibly younger (estimated around 18) with lighter skin, a bigger nose, and no facial hair. When Choudhury asked officers at the station whether the footage looked anything like him, they reportedly laughed. Yet he was held for nearly 10 hours before being released. Making matters worse, Choudhury's mugshot was only in the database because of a prior wrongful arrest in 2021, when he was actually the victim of an assault.

My Prediction: It's Going to Get Worse Before It Gets Better

Here's the part that most coverage gets wrong. The technology itself is not stagnating—it's getting remarkably accurate. NIST's Face Recognition Technology Evaluation program has tested over 200 algorithms, and the best algorithms became 20 times more accurate between 2014 and 2018 alone: failure rates dropped from 4% to 0.2% in that four-year window. Today's top algorithms achieve error rates below 0.1% on databases of millions of faces. That's an extraordinary engineering achievement.

And yet, despite those accuracy improvements, I predict the absolute number of wrongful arrests tied to facial recognition will increase in the coming years, even as the relative rate of errors continues to decrease. A 99.9% accurate system still returns roughly one false positive for every 999 true positives, and the more searches agencies run, the more of those false positives pile up. Not because the technology is getting worse. But because adoption is scaling faster than training.
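To make that scaling argument concrete, here is a back-of-the-envelope sketch. The search volumes and error rates below are illustrative assumptions, not figures from any agency or vendor:

```python
# Illustrative only: shows how absolute errors can rise while the error rate falls.

def expected_false_positives(searches: int, error_rate: float) -> float:
    """Expected number of false matches for a given search volume and error rate."""
    return searches * error_rate

# Assumed year-one figures: modest deployment, older algorithm (0.2% error rate).
early = expected_false_positives(searches=100_000, error_rate=0.002)

# Assumed year-five figures: tenfold adoption, a fourfold-better algorithm.
later = expected_false_positives(searches=1_000_000, error_rate=0.0005)

print(f"early deployment: {early:.0f} expected false matches")  # 200
print(f"wider adoption:   {later:.0f} expected false matches")  # 500
```

In this sketch the error rate falls fourfold, yet the expected number of false matches more than doubles, because search volume grew faster than accuracy improved.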

The global facial recognition market is valued at roughly $9 billion today and is projected to exceed $15 billion by 2030, growing at approximately 15% annually. Government and law enforcement account for nearly half of all spending, so a large share of that growth will come from policing, and more deployments mean more searches and more arrests. Adoption is accelerating at every level: federal, state, and local. More agencies will deploy facial recognition. More officers will use it without adequate education on its limitations. And human error, the constant in most technical failures, will do what it always does.
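Those market figures can be sanity-checked with simple compound growth. The base value and growth rate below are the rounded public estimates cited above, treated here as assumptions:

```python
# Compound-growth check on the market projection (all inputs are rough estimates).
base_usd_billions = 9.0    # assumed current market size
annual_growth = 0.15       # assumed ~15% compound annual growth rate
years = 5                  # roughly today through 2030

projected = base_usd_billions * (1 + annual_growth) ** years
print(f"implied 2030 market: ${projected:.1f}B")  # ~$18.1B, comfortably above $15B
```

At 15% compound annual growth the market would roughly double in five years, so the "exceed $15 billion" projection is, if anything, conservative.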

This is not a new dynamic. We've seen this pattern with every powerful technology that scales faster than the protocols governing its use. Nuclear energy. Automobiles. AI broadly. It's not the tool. It's how people use or misuse it.

The Case for a National Standard, Not Bans

Every major federal authority already agrees on the principle. The FBI's FACE Services Unit states explicitly that its results do not constitute positive identification—only an investigative lead. The DOJ's interim policy says facial recognition results alone cannot serve as sole proof of identity. The IACP's guiding principles say the same.

And yet there is no binding federal requirement that agencies at the state and local level follow these guidelines. NIST sets the standard for evaluating the technology's accuracy. We need an equivalent national framework for how the technology is used in the field. Mandatory training. Certification. And the bright-line rule, enforced without exception: a facial recognition match is a lead, period.

The answer is not to permanently ban the technology or defund the police. Banning deprives investigators of a tool that solves homicides, finds missing children, and identifies suspects in cases that would otherwise go cold. The answer is to build reliable information-sharing processes between agencies and to educate rigorously, systematically, at every level.

A facial recognition match is not a positive ID, and it can never be, because faces are not unique identifiers. Identical twins share the same face. Aging, lighting, angles, and image quality all introduce variability. Positive identification requires independent corroboration: fingerprints or DNA, which are inherently unique, or an alibi documented and checked. No two fingerprints are alike, and a print comparison must still be verified by a trained human examiner. These are the things that hold up in court and, more importantly, keep innocent people from seeing the inside of a cell. A facial recognition hit tells you where to look. Fingerprints tell you whom you found.

Without that framework, we'll keep reading these headlines. And every headline costs the victim months of their life, costs the agency hundreds of thousands in settlement dollars, and costs this profession credibility it cannot afford to lose.

It's worth pausing here to note that the same Clearview AI technology that was misused to put Lipps in jail for 163 days has also kept innocent people out of prison. In Florida, a man named Andrew Conlyn faced 15 years for vehicular homicide after a fatal car crash. His defense team spent years trying to locate a Good Samaritan who had pulled him from the passenger seat and could prove he wasn't the driver. In 2022, they ran the witness's image through Clearview AI's database and found a match within seconds. The witness was located, deposed, and charges were dropped within hours.

The Bottom Line

We don't hear about the murders solved, the trafficking victims found, the cold cases cracked. We hear about the one wrongful arrest that becomes a national headline. That's the nature of the media, and it's not going to change.

An ounce of prevention is worth a pound of cleanup after a wrongful arrest. Educate your officers. Enforce the lead-only standard through administrative and technical controls. Don't let these headlines happen in your city.

Facial recognition technology is not the enemy. It's complacency.


The views expressed here are the author's own and do not represent the official position of any agency.