
Ethical Implications of AI in Autonomous Systems: Balancing Innovation and Responsibility


DOI: 10.23977/jaip.2024.070313

Author(s)

Yuanxi Xu 1, Yuhe Zhu 2

Affiliation(s)

1 High School Affiliated to Northwest Normal University, 21 Shilidian South Street, Anning District, Lanzhou City, Gansu Province, 730030, China
2 Forsyth Country Day School, 5501 Shallowford Road, Lewisville, North Carolina, 27023, United States

Corresponding Author

Yuanxi Xu

ABSTRACT

This study integrates insights from systems engineering, ethics, and law into a unified framework for assuring the safety of autonomous systems. The emphasis is on identifying the "gaps" that emerge throughout the development process: the semantic gap, where standard criteria for fully specifying intended functionality are absent; the responsibility gap, where the usual conditions for attributing moral responsibility for potential harm to human agents are not met; and the liability gap, where the usual mechanisms for compensating those affected by harm are inadequate. Categorizing these gaps makes it possible to pinpoint critical sources of uncertainty and risk in autonomous systems, guiding the creation of more comprehensive safety assurance models and strengthening risk management strategies.

KEYWORDS

Autonomous Systems, Safety Assurance, Semantic Gap, Responsibility Gap, Risk Management

CITE THIS PAPER

Yuanxi Xu, Yuhe Zhu. Ethical Implications of AI in Autonomous Systems: Balancing Innovation and Responsibility. Journal of Artificial Intelligence Practice (2024) Vol. 7: 107-111. DOI: http://dx.doi.org/10.23977/jaip.2024.070313.

All published work is licensed under a Creative Commons Attribution 4.0 International License.
