
Illustration: Daniel Zender

The Strange New World of Chatting with the Dead

“Deadbots” promise to comfort those in mourning by using AI to replicate loved ones who’ve passed away. But what do they tell us about the nature of life, death, and the digital in-between? 

Can humankind survive immortality? 

That’s the grave and paradoxical question I asked myself last fall. I was walking out of a screening of the Oscar-nominated film Frankenstein, the latest of many adaptations of Mary Shelley’s 1818 novel, which is often credited as the first literary work of science fiction. Shelley, who wrote her novel while grieving the deaths of her infant child and half sister, authored a cautionary tale about using techno-wizardry to bring the dead back to life. Her protagonist, the hubristic scientist Victor Frankenstein, intends to perform a miracle and instead creates a monster.  

While watching the movie, I found myself thinking of several articles I’d recently read in publications such as The New York Times, Nature, and the Financial Times about a very real new phenomenon that involves resurrecting the dead in digital form. There are, I’d learned, a growing number of online services that allow people to create an artificial intelligence–generated avatar of a late loved one that they can continue to talk to and interact with. For instance, the mobile app 2wai bills itself as “the world’s first social app for AI avatars of real humans.” The company recently released a commercial in which a pregnant woman shows off her baby bump to an AI version of her dead mother. “Put your hand on your tummy and hum to him,” the computerized clone gently advises the daughter in the ad. “You used to love that.” Later, after the daughter has given birth, she cries a single tear while her “mother,” appearing on her smartphone, tells the infant a bedtime story. 

These computerized incarnations are variously referred to in the press as “deadbots,” “griefbots,” and “digital immortals,” and they’re a big part of what’s known as the digital legacy industry. According to some market research, that industry is expected to quadruple to $80 billion over the next ten years. Already, though, any person at home can conjure a deadbot version of a loved one. To create the facsimiles, users simply provide the AI service with artifacts of a deceased person—such as voice recordings or videos—in order to generate a digital twin that acts and sounds like them. These deadbots are then available to provide communication and comfort to the bereaved on demand. (It’s also possible to commission your own deadbot before you die.)  

So what does the growing popularity of deadbots reveal about the nature of grief and what it means to be human? To find out, I decided to talk to some experts from Boston College who could shed some light on the subject. 

First stop: the Stokes Hall office of Dr. Andrea Vicini, SJ, chair of the BC Theology Department, which recently announced a new interdisciplinary minor—Theology, Science, and Technology—to help students think critically about the virtues, moral limitations, and humanistic dimensions of different scientific and technological advances. Vicini, the Michael P. Walsh Professor of Bioethics and Professor Ordinarius, told me he worried that deadbots generally debase what it means to be human. “What does it say about the way we think about ourselves?” he asked. “Are we simply a series of ones and zeroes? Are we really of such little value?” Most people are creating deadbots of loved ones to try to ease the grief of losing them. But Vicini said that grief, while certainly very painful, is also beautiful and important—something to endure, not avoid. He said that grief reminds people of the great depth of love of which humans are capable, and is critical to cultivating empathy with others. “When we grieve we are vulnerable, and in that vulnerability we find moments of connection with one another that otherwise don’t happen often,” Vicini said. “That makes life rich. But this artificial intelligence proposes that we avoid that and replace it with something less authentic.”

Using AI in this way could actually prolong grief, according to BC Psychology Professor Elizabeth Kensinger, an expert on the neuroscience of memory. Talking to the deadbot version of a loved one might provide someone in mourning with “momentary relief,” Kensinger said, but it could ultimately impede their ability to adapt to reality. “Grief is your brain getting a prediction error: this person is supposed to be there, but they’re not,” she said. “It’s a learning process. With a deadbot, the brain is no longer getting that.” 

Kensinger said that interactions with deadbots might also have the effect of warping the very memories of a loved one that the user wants to cherish. As Kensinger explained it, our brains are constantly updating memories with new information, and most of the time that’s useful. It’s how we’re able to recognize an old friend with a new hairstyle, for instance. In the case of chatting with a deadbot, though, it’s a digital mimic of a loved one, and not the actual person, that will provide the brain with new information to layer on to old memories. The brain, Kensinger said, “is not going to be able to help itself” from conflating the two sources. “It’s similar enough that there will be a bleeding effect,” she said. “You’re kind of overwriting your actual memories of the person.”

What’s more, these new, deadbot-generated “memories” of a loved one may not actually be all that representative of the person. Why not? According to Associate Professor of Computer Science Sergio Alvarez, these kinds of AI bots fall into a category referred to as “generative pre-trained models.” In order to engage in human-like conversation, Alvarez explained, these models are typically pre-trained on “a mass of generic information about people, and about the world.” That gives the bot some background on how to converse like an average person might. When an individual generates a deadbot, they upload materials from the deceased in order to provide the AI model with details that are unique to that person. But to hold open-ended conversations about anything and everything, the deadbot will most likely also draw on that first pool of generic information to generate its responses.

In other words, everything a deadbot says is likely stitched together from scraps of material about the individual as well as about the broader population. Alvarez likened the deadbot’s output to a sophisticated roll of the dice. “In the context of grief, I worry about the intrinsic randomness of this process,” said Alvarez, who is overseeing BC’s newly announced doctoral program in computer science, in which scholars will grapple with the technical and ethical challenges posed by AI. “You run the risk of the deadbot generating something the actual person may never have said, or something that sounds generic—neutral-sounding, like a person on the news. If that was my loved one, I would find that a little bit disturbing.” 

Under the right circumstances, however, deadbots can be an effective instrument for managing grief, said Kelly Kane, a visiting lecturer in the psychology department. “I’m in favor of the concept, as long as there’s transparency and everybody understands what’s happening: that this is a computer program making educated guesses,” Kane said. She suggested that deadbots could be a tool for getting through grief if used judiciously and with support from a professional therapist or a grief group. Those outside human perspectives could be essential to preventing someone in mourning from developing an unhealthy relationship with a deadbot, such as becoming too attached or taking its bad advice. “We know that people are having almost intimate relationships with bots,” Kane said. “What happens when the bot is mom telling you what to do?”  

That might be a scarier thought than anything in Frankenstein. “I wonder about people’s ability today to understand the boundaries between realities,” said William Griffith, associate professor of the practice in the computer science department. Consider, he said, recent headlines about young people ending their lives after bots either encouraged them to do so or failed to intervene. Griffith, a clinical psychologist who holds a doctorate in philosophy, sees digital immortality as just another way that human beings try to quell their anxieties about the reality of physical death. At the end of the day, though, “death is something we can’t control,” he said. “Grief makes you more conscious of the beauty of the moment. I’m reminded of a poet who said that the secret to life is to love what is mortal with all your heart—and when it’s time to let it go, let it go.” 

I suspect that Mary Shelley would have shared that sentiment. After all, she didn’t choose to write Frankenstein as a hopeful tale about living forever with the help of science. In trying to cheat death, her monster’s creator makes something even more dreadful: a soulless imitation of life. I wonder, who wants their legacy to be remembered like that? ◽
