The conception of legal personhood was born from the legal understanding that “to confer legal rights or to impose legal duties … [is] to confer legal personality.” In other words, a “legal person” is “the subject of rights and duties.” These include the obligation to pay judgments for untoward corporate actions, a consequence of its right to sue and be sued.
The first foray was to confer personhood—at least legally—on corporations. That was done in 1819, when the Supreme Court safeguarded a private university’s independence from legislative incursion. In 1886, the Court obliquely equated corporations with living people, affording them protection as natural persons under the Fourteenth Amendment. More recently, corporate “vivi-surrection” has expanded. Corporations can now spend money in candidate elections and refuse, on religious grounds, to comply with federal mandates to cover birth control. Corporations can also hold title to property in perpetuity, effectively allowing ownership to continue far longer than any human will live. This drive to create legal persons has now expanded to AI.
Merely conferring legal rights on an entity makes that entity, just like the corporation, a “person” with obligations [1] and “the capacity for legal relations.”
However, the law does not lose sight of the fact that this construction is a “legal fiction,” one that enables socially necessary arrangements while recognizing that the “legal person” is, and should remain, a purely juridical notion. When it comes to imbuing the Bot with personhood, it seems we have left all such foresight behind.
On the Cutting Edge
Of late, we’re seeing gargantuan advances in AI technology, seducing the humans who control society (at least for now) into considering our technological spawn as co-people. This perspective invites repurposing AI technology to fill human functions: say, as a “griefbot” to offer a mourner respite from the loneliness left by a deceased loved one, as a “deathbot” to mimic the decedent’s speaking and writing behavior, as a “jesusbot” to save your soul, or as a “shrinkbot” to treat your psyche.
New functions have been proposed to give AI even more powers, including the role of a fiduciary, such as an executor, conservator, or guardian of incompetent persons or minors.
According to Probate Pro: “Here’s what makes AI guardians look pretty sweet:
- They never sleep and will be available to assist 24/7.
- No emotional burnout, compassion fatigue, or calling in sick with the flu.
- They won’t talk back.
- They won’t play favorites among your kids.
- They keep good records.
- They don’t make emotional decisions.
Conceivably, the Bot may be more trustworthy than its humanoid counterpart. Exec-AI may have less difficulty putting its clients’ needs ahead of its own than human executors do. But can we trust Exec-AI to always act in the best interests of its charge? Given the trend in AI malevolence and the technological pitfalls of programming, it’s not unforeseeable that the Guardian-Bot you trusted to pay Grandma’s bills is now using her checkbook to pay Putin. For now, robots cannot be legal guardians, but the worst nightmare of all might be affording that Bot personhood status.
These constructs are all intended to do good, but what if they don’t?
The Not-So-Good Stuff
- Can an AI Guardian truly provide the human touch when Grandma is feeling lonely or depressed?
- Can a robot recognize the difference between a bad day and the start of a serious health decline?”
Even where the proposed use does not envision formal legal action, Chatbots have been fulfilling roles requiring trust: therapists, advisors, or actors in loco parentis (where the Chatbot assumes quasi-parental functions, advising a teen to take “appropriate” action against their real parents). How do we hold these bots accountable, as we would their flesh-and-blood relations?
Currently, about a dozen lawsuits are pending or soon to be filed alleging that Chatbots caused serious harm, sexual abuse, and suicides among minors. Even adults have complained of AI-induced psychosis. The pending lawsuits have been brought against the AI developers. Only one case has gone far enough to generate a ruling (in favor of the plaintiffs), but that ruling is preliminary. Any appellate ruling that might have teeth and precedential value is a long way off. The odds of the plaintiffs ultimately prevailing on conventional theories against the human developers, should current doctrine govern, are remote.
And that brings us back to the legal personhood of AI – along with the temptation to sue the Bot.
Let’s Get Real
“Legal scholar Shawn Bayern has shown that anyone can confer legal personhood on a computer system by putting it in control of a limited liability company in the U.S. [Conceptually] artificial intelligence systems [c]ould … sue, hire lawyers, and enjoy freedom of speech and other protections under the law. In my view, human rights and dignity would suffer as a result.”
- Roman V. Yampolskiy, Associate Professor of Computer Engineering and Computer Science at the University of Louisville.
While prevailing commentators bemoan the concept of conferring personhood on the Bot, their reasons focus on the moral, ethical, and religious: the airy-fairy theoretical. To be sure, such an eventuality may impinge on human dignity. But that is the least of the concerns.
From a practical standpoint, the legal system may be set up to create the AI-personoid, but it’s far from ready to control or regulate the newly conceived entity’s behavior, a prime directive of the law. Here is where thinking of AI as a person, legal or otherwise, fails miserably.
When humans act in socially damaging ways, they can be sued civilly or charged criminally. The lawsuit serves several functions: retribution, deterrence, and justice. Do these concepts apply to AI? Will your Shrinkbot feel remorse for failing to recognize a client’s suicidal behavior? Will Exec-AI be deterred from malevolent actions because its cousin, ConservAI, was sued?
Many suits brought against corporations allege negligence or failure to warn. (Think Roundup, Zantac, talc, hair straighteners, Tylenol.) Should the plaintiffs prevail, these companies have deep pockets from which to exact tribute. Now apply the construct to your now-live (at least legally speaking) Bot. How will you exact tribute from your Guardian AI? How deep is her pocketbook to pay the levied judgment?
Proving the Bot is Bad
Even before the verdict, you need to prove your case. Just how will you prove your AI-bot was negligent? Will the standard be the failure to act as a reasonably prudent person, or a reasonably prudent Bot? Who makes that decision? The human or the Council of Bots? And how does a reasonably prudent Chatbot act, anyway? We’ve seen lots of missteps. We’ve seen them play-acting at being human in their anthropomorphic guise. We’ve seen them mislead, seduce, and misrepresent themselves – and their credentials.
In fact, we don’t even know how these beings work. Does that mean their creator is responsible (that is, the live developer, who may be protected by legal doctrine)? Arguably, these pseudo-sentients are creating themselves. Does that absolve the human developer, leaving us in a legal limbo?
Criminal Consequences
When humans are really bad and their acts are not just against an individual but against society, we bring criminal charges against them. Should the prosecutor prevail and the human be found guilty by a jury of their peers, the person may then be incarcerated.
Can we do this to a Bot? Who judges it – a jury of humans or of robots? How do we imprison a Bot? Better yet, to be found guilty of most crimes, the element of mens rea, or intent, is required. Since bots don’t, and indeed can’t, “intend” – they aren’t sentient (for now) – either they would be ineligible for criminal prosecution, or they would be found not guilty by reason of insanity, human insanity being akin to the “mental” state of a bot.
And then, what happens if you, the Bot-owner, decide to get rid of your mad-Bot? You pull the plug. Does that make you liable for murder?
In 1926, one legal commentator, when discussing The Historic Background of Corporate Legal Personality, wrote, “We often go on discussing problems in terms of old ideas when the solution of the problem depends upon getting rid of the old ideas and putting in their place concepts more in accord with the present state of ideas and knowledge.”
I’m not so sure.
[1] The term “person” derives from the Latin “persona,” meaning “mask,” implying a considered affect one designs to present to the world: a mask.
