AI Invades Law Enforcement

AI may soon barge into the courtroom. It certainly is sitting in the entry portals. As artificial intelligence uses DNA-driven face prediction tools to corral suspects and construct lineups, law enforcement is enticed, but individual rights may be sacrificed. Vigilance is key.
Image: ACSH

Not to be outdone, AI products marketed to lawyers are on the rise. For complex review of massive data dumps, this might be a blessing -- or a curse. Most rules of the Canons of Ethics disallow billing for AI time, so we might expect profits to drop. But if more work can be taken on, that revenue might be offset.

AI products are also marketed to help law students with research (although students are repeatedly warned that AI hallucinates, meaning it makes up cases or citations in its child-like desire to please). New technologies, called retrieval-augmented generation (RAG), are being marketed to reduce hallucinations, and leading legal research services have released AI-powered legal research products that they claim "avoid" hallucinations and guarantee "hallucination-free" legal citation. But limited proof exists, and lawyers use them at their peril.
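For readers curious what retrieval-augmented generation actually does, here is a minimal sketch. The idea is that before the model answers, relevant passages are retrieved from a trusted corpus and supplied as grounding, so the system cites real sources rather than inventing them. The corpus, the keyword-overlap scoring, and the stubbed generator below are hypothetical stand-ins for illustration, not any vendor's actual product.

```python
# Minimal RAG sketch: retrieve trusted passages, then ground the answer
# in them. The corpus and the generator stub are hypothetical examples.

CORPUS = {
    "daubert": "Daubert v. Merrell Dow (1993) requires testing, peer "
               "review, known error rates, and general acceptance.",
    "frye":    "Frye v. United States (1923) asks whether a technique "
               "is generally accepted in its field.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank passages by naive keyword overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(
        CORPUS.values(),
        key=lambda p: len(q & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(query: str) -> str:
    """Ground the (stubbed) generator in retrieved text only."""
    context = " ".join(retrieve(query))
    # A real system would call an LLM here; this stub simply echoes the
    # retrieved context, so fabricated citations are impossible by design.
    return f"Based on the sources: {context}"

print(answer("What does Daubert require for admissibility?"))
```

A production system would use embedding-based retrieval and a real language model, but the structure, retrieve first, then generate from the retrieved text, is the same.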

AI-driven research (if carefully checked) might be of use for recurring legal inquiries, but it is of limited use in cases of first impression, meaning cases raising novel legal issues where there is little precedent and few law review commentaries available to mine for the LLM's training.

Let’s take a typical toxic tort case as an example (although there is no evidence that the technology was in fact used in any way). As a mechanism to sift through massive volumes of corporate documents in search of the “smoking gun,” the technology is likely to prove helpful – although there is always a risk that it will miss something and deliver an incomplete search. As a means of assessing epidemiological studies, reliance on the tool would be problematic. While the search request could be tailored to ask a series of “cross-examination” questions, unless the attorney is intimately familiar with the science (and that means all the sciences involved, including epidemiology and biostatistics), reliance on “Chatty” will not give the attorney the requisite knowledge to ask follow-up questions when the expert witness goes rogue, evades the question, or simply prevaricates. (The same problem occurs when lawyers unfamiliar with the field rely too heavily on their own experts for cross-examination without first mastering the subject.)


Perhaps most critically, one’s adversary can do the same research and thus be forewarned of your approach or arguments. Nothing substitutes for creatively and originally out-thinking your rival. And AI can’t do that for you.

The New Frontier 

Not wanting to be excluded from the AI big gulp, the technology’s purveyors have set their latest sights on law enforcement, just a short step from making an entry into the courtroom. New technologies are marketed to enable or assist suspect roundup and identification, a boon to law enforcement. The technologies are based on “forensic DNA phenotyping” (FDP), a technique that uses DNA to predict facial phenotypes by investigating connections between DNA markers and the appearance of various facial features. The magical mimicry of FDP, using DNA to conjure the physical likeness of a human, is tantalizing to law enforcement for identifying possible perps based on their DNA. A DNA sample can be linked to specific traits, such as hair or eye color, which can bump suspects with those traits to the top of the list (assuming they don’t dye their hair or wear colored contact lenses).

As for collecting potential culprits for investigation, the facial prediction technique may also be a boon, although caveats must be considered, namely the specter of racial profiling that its use may foster. But some AI technologists are blasting warnings about the product for technological reasons. One is Susan Walsh, who runs a lab that utilizes the technique. Walsh’s methodology comes with a caveat: she believes it is impossible to predict a person’s face from DNA, at least for now, and claims her competitors can’t do it either.

Reading between the lines suggests that Walsh’s concerns may be a business contest between two proprietors hyping a novel technology. Distinguishing between valid claims and warnings of limitations is problematic.

The Use of Face Recognition Technology

The National Academy of Sciences (NAS) routinely holds conferences and publishes monographs evaluating the scientific soundness of various techniques used in criminal investigation, opining on the scientific and evidentiary rigor of techniques, such as fingerprints, footprints, and the like. According to their March 2024 review: 

“Face recognition has a large number of applications, including security, person verification, Internet communication, and computer entertainment. … Systems have been developed for face detection and tracking, but reliable face recognition still offers a great challenge to computer vision and pattern recognition researchers.” – Handbook of Face Recognition

The recent NAS conference reached a tentative conclusion, noting that we can expect more laws to govern the technology in the future.

“Facial recognition technology is increasingly used for identity verification and identification, from aiding law enforcement investigations to identifying potential security threats at large venues. However, advances in this technology have outpaced laws and regulations, raising significant concerns related to equity, privacy, and civil liberties.” 

The Naysayers

Susan Walsh runs a lab at the Indiana University School of Science and is building an open-source tool for everyone, including law enforcement. She has criticized her main competitor, Parabon NanoLabs, a private company located in Reston, Virginia. Founded in 2008, the company creates innovative DNA-related products; it initially focused on cancer therapies but has since become “a prominent purveyor of forensic products, including DNA phenotyping, to police agencies.”

The company claims its proprietary “Snapshot FDP System” can “accurately” predict eye, hair, and skin color, as well as face shape, after working with hundreds of police agencies over the past nine years. What distinguishes Parabon’s model is its ability to integrate the various measures, eye and hair color, and facial configuration, into a composite model. To do so, it uses principal component analysis, a statistical method that distills noisy, high-dimensional data down to the dominant patterns that fuel its predictions.
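To see what principal component analysis does in general terms, consider the sketch below. PCA rotates noisy, correlated measurements (here, two simulated facial measurements) onto new axes ordered by how much variation each explains, so a composite model can keep the dominant pattern and discard the noise. This is textbook PCA on made-up data, not Parabon’s proprietary pipeline.

```python
import numpy as np

# Generic PCA sketch on simulated data -- illustrative only, not
# Parabon's actual method or data.

rng = np.random.default_rng(0)

# Simulate 200 samples of two strongly correlated measurements.
base = rng.normal(size=200)
data = np.column_stack([base + 0.1 * rng.normal(size=200),
                        base + 0.1 * rng.normal(size=200)])

centered = data - data.mean(axis=0)       # PCA operates on centered data
cov = np.cov(centered, rowvar=False)      # 2x2 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order

order = np.argsort(eigvals)[::-1]         # rank components by variance
explained = eigvals[order] / eigvals.sum()

# With highly correlated inputs, one component captures nearly all
# the variance; the rest is treated as noise and can be discarded.
print(f"variance explained by first component: {explained[0]:.2%}")
```

The point of the exercise: when measurements move together, a handful of components summarize them, which is why the technique appeals to anyone building a composite model from many correlated traits.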

There are issues. 

The specific population employed to generate the baseline comparisons has not been revealed. Hence, the legal reliability and scientific generalizability of the method to untested groups cannot be evaluated. The database comes from 1,000 mostly young adult volunteers, 37% of whom self-described as white; the actual breakdown of the volunteers’ ethnic backgrounds, however, has not been disclosed. One geneticist and professor of anthropology claims that Parabon’s approach is fundamentally flawed because a larger sample population is necessary. Presumably, that is an easy objection to overcome.


Parabon’s proprietary methodology has not been peer-reviewed or subjected to independent scientific verification or validation. We have not been given error rates or methods, making it Daubert-ineligible in a court of law. Nevertheless, law enforcement is known to use techniques that haven’t been scientifically vetted (sometimes on the sly, since this information is not necessarily disclosable to the suspect’s attorneys).

Success stories are ample, but disasters also abound. One police department asked the company to produce a composite of a nondescript Black man with few other definitive clues. The police posted the generic image online, generating a considerable backlash, as the image did nothing more than “implicate nearly every Black man in Edmonton,” encouraging racial profiling and over-surveillance of minority and other marginalized communities. This approach can subject members of marginalized communities to an unconstitutional increase in the risk of arrest.

Do Impressive Bedfellows Convey Validity?

The company’s history is engaging and imparts some validity. First hired in 2009 by the Pentagon’s Defense Threat Reduction Agency, Parabon saw its Snapshot technology used to identify individuals in combat zones involved in building explosive devices. Reportedly, the company’s CEO likened the technology to a blueprint for building a human. The technology was first marketed to police departments in 2015. After the software produces a phenotypic prediction, a forensic artist (no different from those used in conventional law enforcement work) steps in to shade the composite. An accompanying app can translate the images into 3D versions. The tool is limited: it can only render what a young adult of normal body weight might look like. But that has been enough for several suspects to be apprehended. In various cases, once the suspects were apprehended, they confessed.

For now, the technology is not intended to be used as evidence in criminal proceedings. Rather, it is marketed as a “behind-the-scenes” aid to police searches, a use that subjects it to the criticism that it fosters racial stereotypes.

Because the technology is marketed not as a tool for positive identification but as a generator of investigative leads, which are not considered evidence in criminal cases, the state is not required to share information about these tools with defense lawyers. This has constitutional ramifications, as defendants are entitled to information regarding how the police made their identification. Police use of a random DNA profile to create a dragnet, rather than proceeding on reasonable and individualized suspicion, will surely generate legal issues.

Future Trends: Evidentiary Problems Still Remain

We can expect the technology to improve. But telling people a lineup was derived from DNA technology may carry disproportionate weight, as people place more reliance on science- and mathematics-driven vehicles for perpetrator identification than on eyewitness testimony, which carries its own infirmities.

At this point, the investigation and finger-pointing are a tempest in a teapot. We can expect Parabon’s technology to improve and eventually undergo the scientifically mandated tests that Daubert requires before it can be used in court, or even be defended in a proper forensic context. But because the method can become another instance of racial profiling, now enhanced by the magic of DNA, further societal furor may be in the offing. 
