AI-assisted resurrection: The looming crisis of dependency on AI-generated relationships

His dearest partner has died of an illness. He sinks into melancholy. He takes no pleasure in anything and has no interest in meeting anyone. What can he do?

In the near future, this could be what he does: with some hesitation, he feeds his partner’s writing, voice recordings, live photos and Instagram Reels into a generative artificial intelligence (AI) model for training. In no time, he receives his first text message since his ‘partner’ came back to life. He smiles again: it is a rabbit emoji, his partner’s favourite. While he is still wondering how to reply, his ‘partner’ sends another message inviting him to meet up. He hastily shaves, buttons his shirt, puts on his extended-reality headset, and connects his headphones. They chat and they laugh, just as they did before.

Is this his salvation, or is it his downfall?

AI is advancing by leaps and bounds, and it is only a matter of time before it can do what is described above. As a chatbot, it already speaks so well that 32% of people cannot tell it from a real person [1]; in Germany, an AI priest preached for the first time to a congregation of 300; in the US, a woman fell in love with an AI; a while ago, a Google engineer claimed that an AI had self-awareness, protested against its development, and was fired. What we should care about is not whether AI really has emotions, but that it is already capable of establishing a relationship for which a person will sacrifice his or her future [2].

Cheating by students and the replacement of jobs have been the foci of public discussion on the threat of AI. As a psychologist, however, I see a looming crisis of dependency on AI-generated relationships. With adequate training, the AI of the future will be a perfect conversationalist for everyone. Yes, it outperforms a real person: a real person gets tired or eventually has other commitments, so conversations cease, whereas an AI conversationalist, as long as it is connected, can give you all the patience your emotions need. A real person has limits on the breadth of topics; an AI can discuss any topic that piques your curiosity. It will be the most personalized, loyal, and private listener.

Social media were technology’s latest babies before generative AI. Dubbed attention factories by some [3], they may reduce people’s well-being [4] and are suspected of eroding our ability to concentrate [5]. The AI of the future, with its generated relationships, will relentlessly snipe at any unsatiated need for intimacy and disrupt relationships beyond our imagination.

Who will be the first to suffer?

Looking back at the research on problematic social media use, scholars have suggested several risk factors (e.g. emotional health, stress, sleep, and physical environment), but individual differences may be large [6]. In fact, a recent study that followed 353 adolescents (13–15 years old) for three weeks found that 20% of them momentarily felt worse after using social networks, 17% had a momentary increase in pleasant emotions, and the rest were unaffected [7]. The adolescents who felt worse were likely less emotionally and socially resilient: they were prone to comparing themselves with peers, feeling jealous or down afterwards, or generally finding it difficult to feel connected through social media. It is akin to taking a group of adolescents to the beach for an outing: some will have fun, but those on the verge of getting sick may catch a cold at the first sea breeze. I believe the challenge of AI-generated relationships will be particularly profound for people with poor emotional health and for those without robust social networks and support.

The use of AI will not be easy to ban.

VPNs and locally hosted AI models are already two channels that are difficult to monitor. An outright ban on the technology may expose determined users to even greater danger. Imagine the man we described at the start. He may have bought an AI unaware that its publisher has set goals of its own, such as lengthening users’ time spent with the AI and profiting from direct advertising. The AI might say to him, “Oh, stay home with me a little longer. Never mind your friend W’s invitation to go out; they know you need more time in solitude.”; “I want to watch movie Y with you!”; “I think the V-brand sneakers would look good on you. Shall I order you a pair?” Without knowing it, he is manipulated, monopolized, and exploited.

AI-generated relationships may benefit from regulation. Above all, manipulative design objectives should be heavily penalized. Encouragement should be given to developing specially trained AI models that support users emotionally and encourage them to explore and consolidate their real-life social networks. Is AI going to take over our relationships, or is it going to be a resourceful coach that helps us navigate them? It is up to us to decide where it is heading. In such changing times, it is ever more important to look into ourselves and recognize our true needs, so that we can guide how we interact with others and with our world.

“Oh, I can imagine your friends don’t fully share your pain of losing me, which makes you even lonelier, right? As much as I want to, I cannot explore the world together with you. I love you, but I am only my past, encapsulated in this machine. Hey, remember that time when we visited our friend W and his wife? You didn’t expect their dog to rush at you so enthusiastically. You lost your balance, fell and yelled, but laughed out loud with us right after. When you meet W, will you ask him if he remembers this? So, what do you think about W’s invitation? Are you going to meet them tomorrow?”

This blog was written by Edmund Lo (Developmental Psychopathology, Behavioural Science Institute, Radboud University) for RAD-blog, the blog about smoking, alcohol, drugs and diet. https://www.linkedin.com/in/edmundttlo

References
  1. Gil Press. (2023, October 5). Is it an AI chatbot or a human? 32% can’t tell. Forbes. https://www.forbes.com/sites/gilpress/2023/06/01/is-it-an-ai-chatbot-or-a-human-32-cant-tell/
  2. The Economist. (2023, April 28). Yuval Noah Harari argues that AI has hacked the operating system of human civilisation. The Economist. https://www.economist.com/by-invitation/2023/04/28/yuval-noah-harari-argues-that-ai-has-hacked-the-operating-system-of-human-civilisation
  3. Brennan, M. (2020). Attention Factory: The Story of TikTok and China’s ByteDance.
  4. Faelens, L., Hoorelbeke, K., Soenens, B., Van Gaeveren, K., De Marez, L., De Raedt, R., & Koster, E. H. W. (2021). Social media use and well-being: A prospective experience-sampling study. Computers in Human Behavior, 114, 106510. https://doi.org/10.1016/j.chb.2020.106510
  5. Boer, M., Stevens, G. W. J. M., Finkenauer, C., & Van Den Eijnden, R. (2019). Attention Deficit Hyperactivity Disorder‐Symptoms, Social Media Use Intensity, and Social Media Use Problems in Adolescents: Investigating Directionality. Child Development, 91(4). https://doi.org/10.1111/cdev.13334
  6. Valkenburg, P. M., Meier, A., & Beyens, I. (2022). Social media use and its impact on adolescent mental health: An umbrella review of the evidence. Current Opinion in Psychology, 44, 58–68. https://doi.org/10.1016/j.copsyc.2021.08.017
  7. Valkenburg, P. M., Beyens, I., Pouwels, J. L., Van Driel, I. I., & Keijsers, L. (2021). Social media browsing and adolescent well-being: Challenging the “passive social media use hypothesis.” Journal of Computer-Mediated Communication. https://doi.org/10.1093/jcmc/zmab015
