When You Can Read a Wrongdoer’s Mind

In criminal law, intent must generally be proven before guilt can be established. Even civil actions, like intentional torts, pivot on intent. Until now, intent has been determined from circumstantial evidence, with some jury guesswork. But what if it were possible to demonstrate someone’s state of mind objectively? What if there were no hiding behind a baby face or an angry denial?

Determining intent affects not only guilt or innocence but also the length of incarceration in criminal cases and the millions of punitive-damage dollars in civil ones. So critical is proving intent that the law provides a safe haven for the mentally compromised and the very young, in whom intent cannot exist. Jurisprudence holds that the insane and the very young are incapable of mens rea, the guilty state of mind necessary to sustain a finding of intent.

In civil cases, the definition of intent is broader than whether someone “meant” or “intended” to cause harm. Knowledge, to a substantial certainty, that harm will follow from a deliberate action is sufficient to incur liability. But proving whether someone “surely knew” or “deliberately intended” to harm isn’t a slam dunk for even the most adept trial lawyer.

Reading Your Mind

Now, researchers at the University of Texas at Austin may be the prosecutor’s best friend. Their invention is a “semantic brain decoder” that ostensibly determines someone’s thoughts from brain activity, using fMRI data and chatbot-style AI technology.

“Given novel brain recordings, this decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech and even silent videos…”

When we hear or utter phrases, our brain activity shows patterns, called cortical semantic representations, that this new technology can identify non-invasively. The researchers developed a “decoder” that reconstructs language from these representations, recorded with functional magnetic resonance imaging (fMRI). From those semantic representations, they could determine the “gist” of what a participant was thinking.
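One way to picture how such a decoder operates: an encoding model predicts the brain response a candidate word sequence should evoke, a language model proposes candidates, and a search keeps whichever sequences best explain the recorded activity. Below is a minimal toy sketch of that search in Python; the embedding, the linear “encoding model,” and the vocabulary are invented stand-ins, not the researchers’ actual components.

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM, N_VOXELS = 16, 32

# Toy "encoding model": a fixed linear map from a sequence embedding to
# predicted voxel responses. (A real model is fit to hours of training scans.)
W = rng.normal(size=(N_VOXELS, EMBED_DIM))

def embed(words):
    """Toy deterministic embedding: hash each word into a vector, then average."""
    vecs = [[(hash(w + str(i)) % 1000) / 500.0 - 1.0 for i in range(EMBED_DIM)]
            for w in words]
    return np.mean(np.asarray(vecs), axis=0)

def predicted_response(words):
    return W @ embed(words)

def score(candidate, recorded):
    # Higher is better: negative squared error between prediction and the scan.
    return -float(np.sum((predicted_response(candidate) - recorded) ** 2))

def decode(recorded, vocab, length=4, beam=5):
    """Beam search for the word sequence whose predicted brain response
    best matches the recorded activity."""
    beams = [[]]
    for _ in range(length):
        candidates = [b + [w] for b in beams for w in vocab]
        candidates.sort(key=lambda c: score(c, recorded), reverse=True)
        beams = candidates[:beam]
    return beams[0]

vocab = ["i", "saw", "the", "dog", "ran", "home", "quickly"]
truth = ["the", "dog", "ran", "home"]
# Simulate a (slightly noisy) scan of someone hearing the true sentence.
recorded = predicted_response(truth) + rng.normal(scale=0.01, size=N_VOXELS)
print("decoded gist:", " ".join(decode(recorded, vocab)))
```

In the actual research, candidates are proposed by a language model rather than enumerated from a tiny vocabulary, which helps explain why the output recovers the “gist” of a thought rather than a word-for-word transcript.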

Because “semantic representations are shared between language and a range of perceptual and conceptual processes,” subjective indications of intent might well be stored as verbal representations and objectively transcribable as variations in brain activity.

The scientists tested the device by asking participants to listen to podcasts or imagine themselves telling a story. They then measured the participants’ brain activity and noted a distinct and reliable connection between that activity and what each participant was thinking.

“Qualitative analysis shows that the decoder can recover the meaning of imagined stimuli.” 

Brain-computer interfaces such as the one described by the researchers may be able to identify “covert speech,” imagined thoughts in the absence of external stimuli. This opens the door to another form of covert speech: intent. While the researchers caution that their work is far from reading our minds, the technology bodes ominously for courtroom use. I don’t envision the technology being usable as a lie detector, so it sidesteps many legal obstacles.

The participant must be willing and cooperative for the device to work; failing to cooperate produces the brain-wave equivalent of gibberish. For me, gibberish amounts to an admission of unwillingness, and an indication of guilt. I would think a jury would come to the same conclusion.

The Admissibility of Decoder Evidence in Court

The history of the admissibility of novel scientific evidence pivots on the first proto-lie detector (based on a measure of systolic blood pressure), developed by one colorful character named William Marston [1]. The 1923 case of Frye v. United States (still good law in seven jurisdictions) involved the retracted murder confession of a Black man, previously involved in other crimes, accused of killing a prominent and wealthy Black Washington, DC physician. The retraction was supported by readings from this novel lie detector, and the admissibility of the opinion of the expert who performed the test was the issue before the court.

The court ruled the expert’s testimony (and the readings of the proto-lie detector) inadmissible, a ruling upheld on appeal. Whether the ruling was predicated on the novelty of the device, the lack of adherence to the scientific method (there were no controls), its lack of reliability, the expert’s lack of credibility, or the messy racial aspects of the case in which the court was likely reluctant to embroil itself is hard to tell. Nevertheless, in its holding, still cited today, the court rejected the expert testimony (and the readings of his device) for the technology’s lack of general acceptance in the scientific community.

Seventy years later, the case was superseded in most jurisdictions by the Daubert test, which is more lenient than Frye on introducing novel science but more stringent on the requisite proof of a new technique’s reliability – or reproducibility.

Conventional lie-detector readings generally remain inadmissible because their reliability is inherently questionable, a consequence of the device’s failings. Additionally, it is not difficult to learn techniques to consciously control the physiologic measures that influence polygraph readings (heart rate, breathing pattern, blood pressure, sweating). But the latest computerized polygraph readings are said to be quite reliable, with companies advertising 98% accuracy – at least if the examiners are well trained. The American Polygraph Association (APA) concurs, and roughly half the states allow their use under certain circumstances in civil court.

Regardless, under the 1988 federal polygraph law, the Employee Polygraph Protection Act (EPPA), no employee can be forced to take a polygraph test against their will. Nevertheless, once a person agrees to submit, in some states the results are admissible. As with the Fifth Amendment, however, refusal to take a lie-detector test cannot be used against a suspect.

Voice-stress analysis, another novel concoction, is now relegated to the realm of the pseudo-scientific, although it has been admitted into evidence on rare occasions and presented at serious forensic conferences. Recently, its manufacturers settled a case implicating those who used it, so we aren’t likely to see much future use. As for truth serum, the Supreme Court ruled in 1963, in Townsend v. Sain, that confessions produced by its ingestion were “unconstitutionally coerced” and therefore inadmissible.

The Daubert case should ease the admissibility of new technologies with strong records of reliability. Thus, as technology becomes more advanced, legal admissibility is gaining acceptance. In 2018, a New Mexico court allowed the admission of a novel lie detector based on eye movements and changes in pupil size, called EyeDetect. No human examiner who might be biased is involved, and the eye changes are involuntary, hence cannot be controlled. But the determination of truth or falsity rests on the company’s proprietary algorithms [2], depriving the defendant of the right to cross-examine the witness – in this case, EyeDetect. That makes it likely that vigorous appeals will preempt its use on a large-scale basis. The 88-90% reliability promised by the test is a little less than most jurisdictions require, although presumably achieving the requisite 95% level of accuracy is just a matter of time.

“Oh, well we get to be more or less experts ourselves, and so do the jury, upon the question of whether anybody is telling the truth or not. That is what the jury is for.”

– The Court in Frye

The real issue here is that a matter supposedly vested exclusively in the jury, the credibility of a witness, is being decided by entities outside the courtroom.

But the new brain-computer technologies might be used to assess intent, not the credibility of the alleged culprit. Proof of intent is often extrapolated from outside factors and societal standards, or inferred from cultural mores. Here, there appears to be less reason to bar this technology from the courtroom, save the inability to cross-examine its designer (in essence, a black box). And that problem will increasingly affect all new technology.

However, the designers of these new technologies seem more interested in addressing ethical objections based on privacy than in these legal obstacles.

“We’re very, very far away from just very quickly being able to mind-read anyone without their knowledge.”

– Edmund Lalor, associate professor of neuroscience, University of Rochester

Alas, the non-cooperation feature is precisely what is legally objectionable. No cautionary instruction will avail a defendant who produces “gibberish.” While a jury might be cautioned against drawing conclusions from this result – it’s the same old story: a bell once rung can’t be unrung.

All the jury needs to hear is that gibberish is the product of non-cooperation.

[1] Marston was also the creator of Wonder Woman.

[2] “EyeDetect uses a statistical method called a logistic regression equation to analyze eye behaviors and reading data. … EyeDetect uses a statistical formula and gives a range of test scores.”
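Footnote [2]’s description amounts to garden-variety logistic regression. Here is a hypothetical sketch of what such scoring could look like; every feature name, weight, and number below is invented for illustration and has no relation to the company’s proprietary model.

```python
import math

# Invented eye-behavior features for one reading session, roughly z-scored.
features = {"pupil_dilation": 1.2, "fixation_count": -0.4,
            "reread_rate": 0.9, "blink_rate": 0.1}

# Invented coefficients; a real model would be fit to labeled exam data.
weights = {"pupil_dilation": 0.8, "fixation_count": -0.3,
           "reread_rate": 0.6, "blink_rate": 0.2}
bias = -0.5

# Logistic regression: a weighted sum of features squashed to a probability.
z = bias + sum(weights[k] * v for k, v in features.items())
p_deception = 1.0 / (1.0 + math.exp(-z))
print(f"estimated probability of deception: {p_deception:.2f}")  # ~0.76
```

The legal point stands regardless of the math: without the fitted coefficients and the training data behind them, no defendant can meaningfully cross-examine the score.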

Sources: Minds, Models, MRIs, and Meaning