Welcome to the Age of Chatbots (aka, The Information Hellscape)

Chatbots – trainable software applications capable of conducting intelligent, informed conversations with users – have tremendous potential for societal benefit but also for mischief. We are at the earliest stage of the learning curve.

Artificial Intelligence (AI) is increasingly becoming the focus of societal debate and discussion following the emergence last November of ChatGPT (Chat Generative Pre-trained Transformer) – a “chatbot,” an intelligent, trainable software application capable of conducting an online conversation without a human on the other end. It lets you pose questions or comments in ordinary language and replies conversationally, drawing on the vast amounts of information it has absorbed from the internet.
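
For readers curious about the mechanics, here is a minimal sketch of such an exchange. It assumes the OpenAI Python library as it existed in early 2023 (v0.27) and a valid API key; the model name and prompt are illustrative assumptions, not details from the account above:

```python
# A minimal sketch of a chatbot exchange, assuming the OpenAI
# Python library (v0.27, early 2023) and an API key in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes a valid key

# An ordinary-language question, sent as a conversational message.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind ChatGPT at launch
    messages=[
        {"role": "user",
         "content": "Explain what a chatbot is in two sentences."},
    ],
)

# The model replies conversationally, drawing on its training data.
print(response["choices"][0]["message"]["content"])
```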

ChatGPT, whose training data extends only into 2021, has been expanded and improved with GPT-4, which, according to OpenAI, the company that created it, is “more creative and collaborative than ever before. It can generate, edit, and iterate with users on creative and technical writing tasks, such as composing songs, writing screenplays, or learning a user’s writing style.”

GPT-4 also has great potential for medical diagnosis, treatment, consultations, and education. When provided with questions containing information about a patient's initial presentation or a summary of laboratory test results, it can provide useful responses that help the health professional address the problem at hand.
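
A hedged sketch of what such a query might look like in practice, again assuming the OpenAI Python library (v0.27) as the interface; the patient vignette and laboratory values below are invented for illustration, and any output would of course need a clinician’s verification:

```python
# A sketch of the kind of clinical prompt described above, assuming
# the OpenAI Python library (v0.27); the vignette and lab values
# are invented for illustration only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

vignette = (
    "Patient: 58-year-old male, 2 days of chest tightness on exertion. "
    "Labs: troponin I 0.02 ng/mL, LDL 165 mg/dL, HbA1c 7.9%. "
    "What diagnoses should be considered, and what tests come next?"
)

response = openai.ChatCompletion.create(
    model="gpt-4",  # model name as described in the article
    messages=[
        {"role": "system",
         "content": "You assist a licensed clinician; suggest, do not prescribe."},
        {"role": "user", "content": vignette},
    ],
)

# The output is a starting point for the health professional, not a verdict.
print(response["choices"][0]["message"]["content"])
```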

If it occurs to you that this is eerily reminiscent of “HAL 9000,” the fictional computer in the 1968 film “2001: A Space Odyssey” that was capable of speech, speech recognition, facial recognition, lip-reading, interpreting and expressing emotions, and playing chess, we’re thinking the same thing. And then there is the speculation about chatbots producing school essays and op-ed articles, taking humans out of the equation.

ChatGPT and GPT-4 follow closely on revelations about AI technology that can create “deep fakes” – highly realistic simulated photographs and videos that are difficult to distinguish from genuine images of the people they portray and can literally put words in their mouths.

Consider an example from March that was spurred by the possibility that a New York City grand jury would indict Donald Trump and that he would be arrested. Here is a partial account of the story, as reported in the Washington Post on March 22nd:

Eliot Higgins, the founder of the open-source investigative outlet Bellingcat, was reading this week about the expected indictment of Donald Trump when he decided he wanted to visualize it.

He turned to an AI art generator, giving the technology simple prompts, such as, “Donald Trump falling down while being arrested.” He shared the results — images of the former president surrounded by officers, their badges blurry and indistinct — on Twitter. “Making pictures of Trump getting arrested while waiting for Trump’s arrest,” he wrote.

“I was just mucking about,” Higgins said in an interview. “I thought maybe five people would retweet it.”

Two days later, his posts depicting an event that never happened have been viewed nearly 5 million times, creating a case study in the increasing sophistication of AI-generated images, the ease with which they can be deployed, and their potential to create confusion in volatile news environments.

The realism and visual impact of the images are quite extraordinary.
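
To make concrete “the ease with which they can be deployed,” here is a sketch of generating an image from a one-line prompt. The account above does not name the generator Higgins used; OpenAI’s image endpoint (again via the v0.27 Python library) stands in purely as an assumed example, with a genericized prompt:

```python
# A sketch of how little code a one-line image prompt requires,
# using OpenAI's image endpoint (openai v0.27) as a stand-in example.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Image.create(
    prompt="a politician falling down while being arrested",  # a simple prompt
    n=1,
    size="1024x1024",
)

# One line of prompt text yields a shareable image URL.
print(response["data"][0]["url"])
```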

We are also subjected regularly, and often unwittingly, to the influence of internet “bots,” software algorithms that pose as humans on social media or when accessing websites in order to achieve some ideological, political, commercial, or criminal purpose.

As scientifically astonishing as these technologies may be, they have a very dark side, which will become increasingly clear as they merge and interact. They will enable the creation of what amounts to digital synthetic people, controlled – at least initially – by the developers who created the algorithms; incidentally, those personae may turn out to be somewhat unpredictable as the algorithms “learn” from new data fed to them. Related to that unpredictability is the possibility that chatbots will be able to independently select their sources of information from the internet. Will they choose to obtain medical and scientific information from the Mayo Clinic and NIH, or from the Church of Scientology?

Imagine ingesting thousands of hours of Rachel Maddow or Sean Hannity and using them to create replicas of those broadcasters espousing views diametrically opposed to what the actual people believe. Ms. Maddow could be shown supporting a ban on abortions and the reelection of Donald Trump, while Mr. Hannity could be seen advocating open borders and praising Elizabeth Warren. These are fanciful examples because of the likelihood of lawsuits, but faked political candidates could have the desired impact long before legal action could be effective. Thus, the public would become increasingly dependent upon “honest” media distribution channels (including online platforms) that verify at least the authenticity, if not the accuracy, of content.

It is not difficult to conceive of a coming Age of Media Deception, in which media outlets’ willingness to suppress unwelcome information and to highlight favored stories, according to their biases, crosses the line into the manipulation of reality. The notion of objective journalism, if not already dead, is certainly in intensive care, as practitioners see deadlines shrink from days to minutes or seconds and become increasingly beholden to click counts and numbers of eyeballs.

These powerful applications can be benign – as when “60 Minutes” showed fabricated posthumous “interviews” with Holocaust survivors – but that will certainly not always be the case. There is also little to stop deep fakes built on GPT-generated content from popping up from obscure or even anonymous sources and going viral, just as false rumors do now.

As genuinely trustworthy information sources become rarer, people will lose the ability to discern who or what is real. We will be constantly propagandized and misled by those in control, or by those who can exert influence over them – as the “Twitter Files” showed, with the CDC and FBI “strongly advising” the social media company. Another source is the trolls and bots of the United States’ foreign adversaries, who constantly attempt to sow confusion and discontent here.

This is the dystopian underbelly of AI – George Orwell’s “1984,” arriving several decades late. Welcome to the coming media hellscape.

Andrew I. Fillat spent his career in technology venture capital and information technology companies. Henry I. Miller, a physician and molecular biologist, is the Glenn Swogger Distinguished Fellow at the American Council on Science and Health. They were undergraduates together at MIT.