AI in healthcare

As developers and health systems embrace artificial intelligence-powered software, a pressing question emerges: Who bears the burden when these innovations inadvertently harm patients, especially when legal precedent offers only faint guidance? Let's take a look.
Everyone is using, embedding – or about to use and embed – artificial intelligence in their work. I am not so concerned about the imminent arrival of Skynet; those bits and pieces are already in place. What concerns me more is that AI, already a misnomer, will increasingly become real stupidity and hurt patients along the way.
Artificial intelligence, or AI, including large language models such as ChatGPT, is gaining much traction. When "taught to the test," one system passed the U.S. Medical Licensing Examination – the three-step exam required for licensure to practice medicine. Will doctors be among the first white-collar (white-coat?) workers to be replaced by automation?