Many young people are growing up in an age where vulnerability is increasingly visible, but closeness seems to be in short supply. Conversations about feelings no longer take place at the kitchen table, but in comment sections. Anyone who feels lonely, insecure or exhausted doesn’t have to look far: TikTok and AI are there for you. If not with relatable stories, then with diagnoses or words of comfort. It seems easier to be understood online than in real life, and that’s what makes digital self-help so appealing.
Research shows that social media can help young people better understand their emotions and feel less alone (1). Sharing personal experiences can lead to social support, a sense of community and improved well-being. Other research suggests that AI chatbots can offer an accessible form of support to people who find it difficult to share their feelings with a counsellor (5). Because these systems are always available and respond in a non-judgmental, empathetic way, they can reduce feelings of stress and improve psychological well-being.
However, there is a risk inherent in this digital intimacy. Algorithms amplify emotionally recognisable content, which means that young people mainly receive confirmation of their emotions (3). Anyone who recognises themselves in one video will quickly be shown dozens of similar ones. This can reinforce existing beliefs and intensify negative feelings. Chatbots appear empathetic, but they lack context and clinical insight, which can turn recognition into false assumptions or delay professional help (2). Social media can even create a “diagnostic loop” in which young people recognise themselves in content that further confirms their self-image.
The power and danger of this trend lie in the same need: connection and control. Social media and AI give young people a sense of being heard, but often without the depth and reciprocity of real contact (3). Digital self-help itself is not the problem; how we deal with it is. Young people seek support online because it is anonymous and accessible, but they do not always recognise what is reliable (4). Instead of condemning digital help, we need to teach young people to use it critically and consciously, with education that helps them distinguish between recognition and help, between empathy and expertise.
Maybe it is not a bad thing that help starts behind a screen, as long as it doesn’t end there. Talk about it, seek real contact and use online support as a stepping stone, not a replacement.
This blog was written by Britt Evers for the course Recent Developments in Risk Behaviour, Master PWO, 2025.
References
- Blair, J., & Abdullah, S. (2018). Supporting Constructive Mental Health Discourse in Social Media. In PervasiveHealth ’18: Proceedings of the 12th EAI International Conference on Pervasive Computing Technologies for Healthcare, 299–303. https://doi.org/10.1145/3240925.3240930
- Corzine, A., & Roy, A. (2024). Inside the black mirror: Current perspectives on the role of social media in mental illness self-diagnosis. Discover Psychology, 4(1). https://doi.org/10.1007/s44202-024-00152-3
- Naslund, J. A., Bondre, A., Torous, J., & Aschbrenner, K. A. (2020). Social media and mental health: Benefits, risks, and opportunities for research and practice. Journal of Technology in Behavioral Science, 5(3), 245–257. https://doi.org/10.1007/s41347-020-00134-x
- Pretorius, C., Chambers, D., & Coyle, D. (2019). Young people’s online help-seeking and mental health difficulties: Systematic narrative review. Journal of Medical Internet Research, 21(11), e13873. https://doi.org/10.2196/13873
- Vaidyam, A. N., Wisniewski, H., Halamka, J. D., Kashavan, M. S., & Torous, J. B. (2019). Chatbots and conversational agents in mental health: A review of the psychiatric landscape. The Canadian Journal of Psychiatry, 64(7), 456–464. https://doi.org/10.1177/0706743719828977