This theoretical reflection paper explores critical ethical challenges in youths’ use of conversational artificial intelligence (CAI), highlighting both promises and pitfalls. Central to the discussion is the challenge of developing ethical AI systems that make morally sound decisions, minimizing harm and maximizing beneficence. To address ethical concerns and safeguard youth-AI interactions, two innovative solutions are highlighted: developing computational ethics paradigms that ensure transparency and accountability in AI algorithms, and promoting communities of AI use. The paper concludes by underscoring the ongoing challenge of imbuing AI with ethical reasoning capacities and the critical need for interdisciplinary approaches to ensure responsible AI development and use by younger and older humans alike.