My Subconscious Made Me Do It: Legal Issues of Brain-to-Computer Interface

What if you could knock someone off merely by thinking about it – with no way to trace those thoughts to the crime? More horrendous still: what if you could harm someone because of some subconscious desire, one that you weren’t even aware of? This isn’t the stuff of science fiction. It might even be possible now.

As I’ve written recently, a spate of devices enabling people with quadriplegia to move their extremities has been approved by the FDA.

These devices use electrodes placed on the scalp, on the cortical surface (electrocorticography), or inserted within the brain (intracortical recording) to access and retrieve brain signals. The technology detects brain signals and pathways involved in the intention to move, translating them into commands that bypass neuromuscular pathways to activate increasingly complex control of external devices. The technology is called Brain-to-Computer Interface, or BCI.

“The resulting signal features are then passed to the feature translation algorithm, which converts the features into the appropriate commands for the output device (i.e., commands that accomplish the user's intent).”  [1]

The main goal of BCI is to replace or restore useful function to people disabled by neuromuscular disorders such as amyotrophic lateral sclerosis, cerebral palsy, stroke, or spinal cord injury. The external devices might be exoskeletons or cursors, robotic arms or prostheses, even wheelchairs or drones. In principle, any type of brain signal could empower a BCI; the ones currently explored are those signaling intentions.  
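To make the pipeline in the Mayo Clinic description [1] concrete, here is a minimal, purely hypothetical sketch in Python of the two stages it names – feature extraction and feature translation. Every function name, threshold, and command label below is an illustrative assumption, not any actual device’s implementation.

```python
# Hypothetical sketch of the pipeline described above: extract features from
# recorded brain signals, then translate them into a device command.
# All names, thresholds, and the command map are illustrative assumptions.
import numpy as np

def extract_features(raw_signal: np.ndarray) -> np.ndarray:
    """Reduce a window of raw electrode samples (channels x samples)
    to a small feature vector: here, mean signal power per channel."""
    return np.mean(raw_signal ** 2, axis=1)

def translate_features(features: np.ndarray) -> str:
    """Map the feature vector to a device command. A real system would use
    a trained decoder; this toy rule just picks the strongest channel."""
    if features.max() < 1.0:           # weak activity: do nothing
        return "IDLE"
    channel = int(features.argmax())   # strongest channel selects the command
    return {0: "FORWARD", 1: "REVERSE", 2: "STOP"}.get(channel, "IDLE")

# Example with 3 channels x 250 simulated samples
window = np.random.randn(3, 250)
print(translate_features(extract_features(window)))
```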

FDA Regulation

One recently approved device enables stroke patients to control a robotic assist for hand, wrist, and arm movement. The FDA streamlined approval of the device under the Breakthrough Devices Program, which speeds up “development, assessment, and review while preserving the statutory standards for premarket approval, 510(k) clearance, and De Novo marketing authorization.”

The De Novo regulatory pathway is a premarket review for low- to moderate-risk devices of a new type. As yet, there are no regulations attached to it. However, the FDA promises that special controls will be established for devices of this type to provide reasonable assurance of their safety and effectiveness.

“If a medical device maker followed FDA requirements, it generally can’t be sued for something that would conflict with those requirements.” 

The type of FDA regulation is critical to assure safety, reduce product failure, and minimize the likelihood of harm. The tort system often acts as a fail-safe for failing or defective products under product liability or general negligence doctrines. But when a drug or device fails, a direct suit against the manufacturer is frequently unavailable under the pre-emption doctrine – which protects manufacturers who comply with FDA regulations.

The pre-emption doctrine elevates the importance of the extent of FDA review in assuring reasonable product safety. A detailed risk-benefit schema determines the level of assessment. Because BCI devices are regarded as presenting low to moderate risks, the looser Breakthrough and De Novo pathways were used in their clearance.

Having practiced product liability law for many years, I find that predicting what can go wrong, and how, comes naturally. For example, it is not too far-fetched to imagine someone moving a robotic arm or finger unintentionally, or directing a wheelchair backward when it was meant to go forward.

How much harm could those misfires cause – weighed against the considerable benefits to the disabled person? Could we all agree that the likelihood of harm, at least for now, is relatively low and the gravity of harm moderate? In this instance, the benefits would be so high as to outweigh the dangers.

Creepy Charly and Auntie Maim

But consider: what if a toddler (we’ll call him Creepy Charly) is crawling behind the wheelchair of his disabled aunt (we’ll call her Maim), and instead of going forward as Auntie Maim wishes, the wheelchair reverses due to some product malfunction and horribly injures the child? (We’ve seen similar situations in automobile brake-failure cases.)

This could occur for a variety of reasons: 

  • Hardware failure – i.e., the connection to the wheelchair isn’t configured correctly or becomes loose or damaged 
  • Electrode misplacement – i.e., surgical error in implanting the electrode (in which case the surgeon might be liable) 
  • A defective electrode
  • Perhaps more difficult to assess but quite worrisome: a software error in the algorithm that mistranslates the received brain signal into an inappropriate command (see the sketch after this list)
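To illustrate that last failure mode, here is a deliberately simplified, hypothetical sketch of how a single mis-mapped entry in a decoder’s command table could drive the chair backward when the decoded intent was forward. The table and function names are assumptions for illustration only.

```python
# Hypothetical illustration of the last bullet: one mis-mapped entry in the
# decoder's command table is enough to drive the chair backward when the
# decoded intent was "forward". Names and values are illustrative only.

CORRECT_TABLE = {"forward": +1, "reverse": -1, "stop": 0}
BUGGY_TABLE   = {"forward": -1, "reverse": +1, "stop": 0}  # signs swapped in a bad build

def motor_direction(decoded_intent: str, table: dict) -> int:
    """Return the motor direction: +1 forward, -1 reverse, 0 stop."""
    return table[decoded_intent]

assert motor_direction("forward", CORRECT_TABLE) == +1
print(motor_direction("forward", BUGGY_TABLE))  # -1: the chair reverses
```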

Usually the manufacturer would be shielded from tort liability for an FDA-approved medical device, although the surgeon might well be on the hook.

What about intention?

The FDA’s risk assessment, however, does not seem to consider how the intent to act and decision-making actually occur.

Let’s assume Auntie Maim really dislikes her pesky nephew and, for a second, harbors some malevolent thought about running him over. Of course, she would never voluntarily intend to do this, and her “free will” surely will override any wicked thoughts about harming her nephew. Instead, she would likely reorient her thoughts and decide to move her chair forward – which then becomes her intent – rather than in reverse.

One problem. 

The brain’s wheels start turning before the person even consciously intends to do something. “Suddenly, people’s choices—even a basic finger tap—appeared to be determined by something outside of their own perceived volition.”

According to Nobel Laureate and physicist Roger Penrose’s description [2] of research by Libet and Kornhuber, there is clear evidence of electrical activity in the brain a full second before the subject is conscious of having decided to act – what might be called a subconscious effort.

This early electrical signal, the “readiness potential” or Bereitschaftspotential, suggests the brain prepares for movement even before we become aware of it. The time between the electrical activity and our awareness is long, about a second; perhaps not so coincidentally, it’s roughly the same as a driver’s reaction time for braking.

Likely, many of us have imagined movements we would like to perform – but which our better nature rejects and our conscious brain overrules. But would an electrode tapping into such thoughts be clever enough to distinguish those subconscious or imagined movements – ones we would consciously reject after internal deliberation – from a genuine decision to act on them?

“Simply because the Bereitschaftspotential can be measured before the conscious decision to move doesn’t mean this process is responsible for that movement…. decisions are not made when a Bereitschaftspotential starts, but rather when it crosses a threshold which triggers movement.” What we don’t know is whether our better natures can override this preparatory signal – should its design be untoward.

What if the algorithm amplifies the signal and activates the movement before it crosses any natural threshold of decision-making awareness? 
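As a purely illustrative toy model of that worry – not a claim about any real device – the sketch below shows how applying a gain to a readiness-potential-like signal could push it over a decision threshold it would never have crossed on its own. The waveform, gain values, and threshold are all assumptions.

```python
# Toy model of the amplification concern: a readiness-potential-like ramp that
# never reaches the "decision" threshold on its own can still trigger movement
# once the decoder applies a gain. Waveform, gain, and threshold are assumptions.
import numpy as np

DECISION_THRESHOLD = 1.0                     # assumed trigger level

t = np.linspace(-1.0, 0.0, 200)              # the second before conscious awareness
readiness = 0.8 * np.exp(3 * t)              # slow ramp peaking just below threshold

def first_trigger_time(signal: np.ndarray, gain: float):
    """Return the time (s) when the amplified signal first crosses the
    threshold, or None if it never does."""
    crossed = np.nonzero(gain * signal >= DECISION_THRESHOLD)[0]
    return float(t[crossed[0]]) if crossed.size else None

print(first_trigger_time(readiness, gain=1.0))  # None: no movement triggered
print(first_trigger_time(readiness, gain=2.0))  # about -0.16 s: triggers early
```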

And that brings us back to Auntie Maim, sitting in her wheelchair, which has just gone into reverse, “running over” nephew Creepy Charly. Auntie Maim could very well swear she didn’t “intend” to hurt her nephew – indeed, she didn’t even realize she was about to do it. Or maybe she did intend to do it – subconsciously, or maybe even consciously. There would be no way to prove it and establish that the fault lay with her. The manufacturer, meanwhile, having secured FDA approval, could be protected from product liability claims – even if the pre-emption doctrine didn’t apply to it.

The damage from a wheelchair run amok might be limited, but other devices utilizing BCI could be far more lethal [3] – say, a drone, or the trigger activating an incendiary device.

To be fair, one 2012 study says that Libet’s study merely measured artifactual noise. A more recent study argues that the readiness potential is an artifact of breathing. This is not settled science, and the debate continues. But caution in assessing these devices would certainly be prudent. And it wouldn’t be a bad idea to affix a warning, providing cautionary information about coupling intentionality only with “positive thinking.”

Was the FDA aware of the Bereitschaftspotential effect when it cleared the device, and did it therefore classify the device more stringently? That is unknown. Should it have been aware? That, dear reader, is the subject of another essay.

 

[1] “Brain-Computer Interfaces in Medicine,” Mayo Clinic Proceedings, DOI: 10.1016/j.mayocp.2011.12.008

[2] Roger Penrose, The Large, the Small and the Human Mind, ISBN-10: 0521785723

[3] Tali Sharot, The Influential Mind, ISBN-10: 1627792651

Slight edits have been made by the author since publication to improve readability.