Adolescents' Acceptance of Artificial Intelligence in Moral Dilemmas
DOI: 10.23977/appep.2025.060306
Author(s)
Zhang Yiling 1
Affiliation(s)
1 Department of Psychology, College of Education, Yanshan Campus, Guangxi Normal University, Guilin, Guangxi, China
Corresponding Author
Zhang Yiling
ABSTRACT
As AI increasingly participates in social decision-making, adolescents' acceptance of AI decisions in moral dilemmas has become central to understanding human–machine ethics. Yet few studies have explored how adolescents respond to utilitarian versus deontological decisions made by different agents. This study used a between-subjects design with 180 middle school students, employing adapted versions of the trolley and footbridge dilemmas. Participants were randomly assigned to either an "AI decision" or "self-decision" condition, and acceptance was rated on a 6-point Likert scale. Results indicated a general adolescent preference for utilitarian outcomes. Furthermore, adolescents with more favorable attitudes toward AI showed greater acceptance of both types of AI decisions. By integrating utilitarian and deontological perspectives, this study offers insights into adolescent moral cognition in the age of intelligent technologies and underscores the need for ethically aligned AI system design.
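The abstract does not report the statistical analysis, but the design maps onto a standard 2 (agent: AI vs. self) x 2 (decision: utilitarian vs. deontological) between-subjects comparison of Likert ratings. The Python sketch below simulates such data purely for illustration; the cell means, standard deviation, and even split of the 180 participants are assumptions, not results from the paper.

```python
# Illustrative sketch only: simulates acceptance ratings for a
# 2 (agent: AI vs. self) x 2 (decision: utilitarian vs. deontological)
# between-subjects design like the one described in the abstract.
# All numbers (cell means, SD, n per cell) are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N_PER_CELL = 45  # 180 participants / 4 cells (even split assumed)

def simulate_cell(mean, sd=1.0, n=N_PER_CELL):
    """Draw ratings and clip them to the 6-point Likert scale (1-6)."""
    return np.clip(np.round(rng.normal(mean, sd, n)), 1, 6)

# Hypothetical cell means chosen to mimic the reported utilitarian preference.
ai_util, ai_deon = simulate_cell(4.4), simulate_cell(3.6)
self_util, self_deon = simulate_cell(4.6), simulate_cell(3.8)

# Collapse across agents to test the utilitarian vs. deontological contrast.
util = np.concatenate([ai_util, self_util])
deon = np.concatenate([ai_deon, self_deon])
t, p = stats.ttest_ind(util, deon)
print(f"utilitarian M = {util.mean():.2f}, deontological M = {deon.mean():.2f}")
print(f"t({util.size + deon.size - 2}) = {t:.2f}, p = {p:.4f}")
```

With real ratings, the same contrast would more typically be tested with a factorial ANOVA, with attitude toward AI entered as a covariate or moderator.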
KEYWORDS
Artificial Intelligence, Moral Dilemma, Utilitarianism, Deontology
CITE THIS PAPER
Zhang Yiling, Adolescents' Acceptance of Artificial Intelligence in Moral Dilemmas. Applied & Educational Psychology (2025) Vol. 6: 41-45. DOI: http://dx.doi.org/10.23977/appep.2025.060306.
REFERENCES
[1] Gudmunsen, Z. (2024). The moral decision machine: A challenge for artificial moral agency based on moral deference. AI and Ethics, 5(2), 1-13.
[2] Gordon, E. C., Cheung, K., et al. (2024). Moral enhancement and cheapened achievement: Psychedelics, virtual reality and AI. Bioethics, 39(3), 276-287.
[3] Jiang, Y., Zhang, W., Wan, Y., Gummerum, M., & Zhu, L. (2024). Adolescents are more utilitarian than adults in group moral decision-making. PsyCh Journal, 14(2), 179-190. https://doi.org/10.1002/pchj.821
[4] Caprara, G. V., Alessandri, G., & Bornstein, M. H. (2021). Individual and environmental correlates of adolescents' moral decision making in moral dilemmas. Frontiers in Psychology, 12, Article 8651977. https://doi.org/10.3389/fpsyg.2021.8651977
[5] Bigman, Y. E., Gray, K., & Malle, B. F. (2022). Artificial intelligence and moral dilemmas: Perception of ethical decision making in AI. Journal of Experimental Social Psychology, 99, 104270. https://doi.org/10.1016/j.jesp.2022.104270
[6] Kim, J., Giroux, M., & Lee, J. C. (2021). When do you trust AI? The effect of number presentation detail on consumer trust and acceptance of AI recommendations. Psychology & Marketing, 38(7).
[7] Luna Tacci, A., Manzi, F., Di Dio, C., Marchetti, A., Riva, G., & Massaro, D. (2023). Moral context matters: A study of adolescents' moral judgment towards robots [Conference proceedings / book chapter].
[8] Bentahila, L., & Fontaine, R. (2021). Universality and cultural diversity in moral reasoning and judgment. Frontiers in Psychology, 12, Article 764360. https://doi.org/10.3389/fpsyg.2021.764360
[9] Zhang, Z., Chen, Z., et al. (2022). Artificial intelligence and moral dilemmas: Perception of ethical decision-making in AI. Journal of Experimental Social Psychology, 101, 104327.
[10] Ismatullaev, U. V. U., & Kim, S. (2022). Review of the factors affecting acceptance of AI-infused systems. Human Factors, 66(1).
[11] Ali, I., Warraich, N. F., & Butt, K. (2025). Acceptance and use of artificial intelligence and AI-based applications in education: A meta-analysis and future direction. Information Development, 41(3), 859-874.
[12] Jiang, P., Niu, W., Wang, Q., Yuan, R., & Chen, K. (2024). Understanding users' acceptance of artificial intelligence applications: A literature review. Behavioral Sciences, 14(8), Article 671. https://doi.org/10.3390/bs14080671
[13] Laakasuo, M. (2023). Moral uncanny valley revisited: How human expectations of robot morality based on robot appearance moderate the perceived morality of robot decisions in high conflict moral dilemmas. Frontiers in Psychology, 14.
[14] Zhang, Y., Wu, J., Yu, F., & Xu, L. (2023). Moral judgments of human vs. AI agents in moral dilemmas. Behavioral Sciences, 13(2), 181. https://doi.org/10.3390/bs13020181
[15] Longoni, C., & Cian, L. (2020). Artificial intelligence in utilitarian vs. hedonic contexts: The "word-of-machine" effect. Journal of Marketing, 86(1), 91-108.
[16] Cao, L., Hennessy, S., & Wegerif, R. (2025). Thinking fast, slow, and ahead: Enhancing in-service teacher contingent responsiveness in science discussion with mixed-reality simulation. Computers & Education, 237, 105389.
[17] Sedikides, C., Gaertner, L., & Toguchi, Y. (2003). Pancultural self-enhancement. Journal of Personality and Social Psychology, 84(1), 60-79. https://doi.org/10.1037/0022-3514.84.1.60
[18] Caro-Burnett, J., & Kaneko, S. (2022). Is society ready for AI ethical decision making? Lessons from a study on autonomous cars. Journal of Behavioral and Experimental Economics, 98, 101881.