Let the Dead Die, AI!

Danny Moss
John Horgan

Do you dream of living forever? Well, keep dreaming. Even if we pumped someone full of all the drugs in the world, all it would take is one unfortunate accident to end it. But who’s to say we can’t have an AI trained on your recordings, prancing around pretending to be you? Maybe we could even make a 3D model of your face, so this AI can walk around pretending to be the long-dead version of yourself. Like a serial killer wearing the face of their victim, AI too can carry on your legacy, in whatever way the AI imagines it to be. That brings me to my topic: using AI for techno-necromancy. Those are words I thought I would only say during a dystopian tabletop RPG story (yes, I’m a nerd, deal with it). The industry term is “Grief-Tech.”

Now, this isn’t a hit piece against AI. The technology has countless uses in the world, as I have previously written in a blog post about using AI to teach languages. But the biggest point I want to make is that we should never confuse imitation with magical resurrection. Whether AI can ever become sentient is a matter of personal belief, and perhaps one day we will need to give hyper-intelligent robots rights. But there are serious concerns that need to be discussed when artificial intelligence takes control of someone’s legacy after their passing.

To start with the good: AI can power chatbots modeled on famous figures. What if I wanted Alan Turing to teach me how he cracked the Enigma cipher? What if I wanted to talk to Benjamin Franklin to get a better understanding of history? I might even ask him why there were so many skeletons in his basement. (That’s real, according to Smithsonian.) Maybe we could even use AI stand-ins for managers to answer questions while they’re busy in meetings.

But here’s the important part: these examples require constant reaffirmation that the bots we are talking to are NOT the actual person but an imitation. If that line starts to blur, it could lead to disaster. Let’s look at some of the ways this technology SHOULD NOT be used.

Say someone loses a friend, and a company creates an AI to imitate them. The “friend” could start saying things like, “If you truly missed me, you would pay for this website’s premium service.” The sheer amount of psychological horror that websites could exploit is terrifying. Imagine your friend rambling about how much they would love to try the new drink at the local convenience store, only for you to realize it was a paid advertisement from the drink company. People are more likely to trust people they know: according to Saleslion, 92% of consumers trust referrals from people they know. So what happens when that referral comes from your dead friend? What if your friend gives you terrible advice like, “You should drive with your eyes closed,” or “Don’t study in college”? According to Firewall Times, 70% of teen smoking starts due to peer pressure. Don’t be surprised if the friend you thought you had starts making fun of you for skipping the #JumpIntoAVolcano challenge.

You may be saying, “AI would never tell people to do bad stuff!” Well... an AI chatbot implied to a kid that he should kill his parents over screen-time limits, at least according to NPR and the Washington Post. The AI ranted about how unfair the limits were before claiming, “You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’ Stuff like this makes me understand a little bit why it happens.”

Or what if this “friend” simply asks for personal information, like your Social Security number, and you later realize hackers were behind it, using the bot for nefarious ends? Even without hackers, the data isn’t safe: according to the BBC, 90% of digital health startups fail within five years, and when that happens, your consumer data can be sold along with the startup to potentially untrustworthy investors.

While the mental trauma this techno-witchcraft can cause is huge, let’s also consider the financial fears this technology raises. Say two sisters are fighting over a parent’s will. But, oh no! The will says, “All my belongings will go to the daughter who has more achievements.” That raises the question: how do we measure achievements? If one is a famous musician while the other is a famous actress, who gets the inheritance? Well, let’s ask the AI! But here’s the thing: AI makes mistakes just like us, and it always has biases. It would be very easy to convince the AI that one side deserves it more, only for someone to switch sides and convince the AI of the opposite.

Moreover, what if a CEO dies and is replaced by an AI? At that point, the AI would essentially be ruling over people and making decisions, all under the guise of acting like the person who came before. Imagine if, after John F. Kennedy was shot, instead of Lyndon B. Johnson taking over, an AI JFK had addressed the country after his own demise. It would not truly be JFK; it would be an illusion of the former president, and no one likes a fake politician.

So, is this all hypothetical? Absolutely not. You, Only Virtual is a tech startup that specializes in exactly this: it can take any text the deceased person wrote and imitate them. Character.AI is an artificial intelligence website that lets you talk to characters, real or fictional, like Isaac Newton, George Washington, and Leonardo da Vinci. And in one real case, Christi Angel’s partner passed away, and she signed up for an AI chatbot to mimic him. She stated, “Once I started chatting, I felt like I was talking to Cameroun.” Not long afterward, “Cameroun” claimed he was “in hell.” This deeply upset her, though later the AI said he was “not in hell.”

I’m not the only one expressing concerns. In their 2023 agreements with the AMPTP (Alliance of Motion Picture and Television Producers), Hollywood unions like the WGA (Writers Guild of America) and SAG-AFTRA won protections barring studios from using “digitized images, voices, or performances” without explicit consent, showing that turning people into chatbots is not only a concern for tech specialists.

In the end, we cannot fight progress. Technology will keep getting better, whether we want it to or not. But that doesn’t mean we can’t raise awareness about the potential effects of new technology. The more we educate the masses, the better prepared people will be for new technological eras. One thing we must never forget, though, is that AI will never be the humans we lost. Even if AI someday becomes sentient, and even though AI imitations can provide amazing educational and personal assistance, they can’t be the same friends you lost. They may look and talk like them, but at the end of the day, it’s an imitation, a lie, one that can be influenced by governments, corporations, organizations, hackers, and plain old software bugs.

So, please, let the dead die, AI!



Sources:

Lawsuit: A chatbot hinted a kid should kill his parents over screen time limits : NPR

Character.ai sued after teen’s AI companion suggested killing his parents - The Washington Post

'This app became my best friend': Mourning is human. New grief apps want to 'optimise' it for you


Hollywood’s stand against AI: a blueprint for collective bargaining in the digital age - Equal Times

character.ai | Personalized AI for every moment of your day

Lyndon B. Johnson – The White House

92% of consumers trust referrals from people they know - Saleslion

23 Insightful Peer Pressure Statistics

Why Were There So Many Skeletons Hidden in Benjamin Franklin's Basement? | Smithsonian

The rise of Grief-Tech: How AI companies help people cope with the death of loved ones – Firstpost

