Digital ‘Immortality’ on the Horizon: A Future We’re Not Prepared For

The emerging wave of AI-based “grief tech,” which includes offerings like digital avatars of deceased loved ones, is raising significant ethical questions about mourning and remembrance. While generative AI tools like ChatGPT and Midjourney dominate the public conversation about AI, the ethical implications of using it in grief-related contexts have received far less attention.

One company, Deepbrain AI, has already delved into digital immortality through its service Re;memory. For a hefty price of $10,000 and a few hours in a studio, users can create an avatar of themselves that their family can visit at an offsite facility. However, the current iteration only allows for basic interactions with limited personality depth.

Another service, HereAfter AI, aims to capture more of the deceased person’s personality by engaging friends and family in a series of questions. The AI chatbot provides verbal answers and stories from the past, though it still has limitations in responding to certain queries.

Beyond the current capabilities, concerns center on the ownership and privacy of these avatars: who controls an avatar once its subject is gone, and whether personal data and memories remain safe and secure. The effect of interacting with AI-generated avatars on the grieving process and on mental health is also a growing concern.

Experts like Joanna Bryson, Professor of Ethics and Technology at Hertie School of Governance, compare the current wave of grief tech to the early days of Facebook when memorializing deceased friends was common. Bryson highlights the emotional impact of communicating with deceased individuals through new and immediate forms of communication.

However, Bryson also warns about potential misuse of such technology. Teenagers, for example, might come to prefer interacting with AI syntheses of their best friends or of celebrities, disconnecting them from real-life relationships.

Furthermore, the future possibilities of AI avatars raise questions about data protection. Even with consent, personal data handed over to train an AI could lead to private information being disclosed posthumously.

Companies like Deepbrain AI and tech giants such as Microsoft are at the forefront of developing these technologies, but the lack of clear guidelines and regulations poses challenges in safeguarding user data and privacy.

Given how essential data protection is here, Bryson suggests that companies like Apple, known for their privacy-oriented approach, could lead the way in creating secure AI avatar systems.

Ultimately, the development of AI avatars for the deceased forces society to confront mortality in a new way. While the technology offers a glimpse into remembrance and mourning, it also serves as a reminder to cherish real-life relationships and conversations while they are still possible.