Ethical Risks in Medical Artificial Intelligence and the Normative Function of Medical Humanities: A Study of AI and Emerging Medical Technologies

Authors

  • Liwei Liu, Shenzhen Open University

DOI:

https://doi.org/10.71204/sprd8n54

Keywords:

Medical Artificial Intelligence, Ethical Risk, Medical Humanities, Normative Framework, Emerging Medical Technologies

Abstract

The rapid expansion of artificial intelligence and emerging digital technologies in medicine has fundamentally reshaped clinical decision-making, healthcare governance, and biomedical knowledge production. While medical artificial intelligence promises enhanced efficiency, diagnostic accuracy, and system optimization, it simultaneously generates complex ethical risks that challenge traditional medical norms and regulatory approaches. Existing discussions of medical AI ethics often prioritize technical safeguards, algorithmic transparency, or regulatory compliance, yet they frequently underestimate the need for deeper normative reflection on responsibility, moral agency, and the meaning of care. This paper argues that medical humanities plays an indispensable normative role in identifying, interpreting, and addressing the ethical risks embedded in medical AI applications. Focusing explicitly on artificial intelligence and emerging medical technologies, the study analyzes the structural sources of ethical risk in algorithm-driven medicine and examines how medical humanities contributes to ethical norm construction beyond procedural governance. By situating medical AI within humanistic concerns such as moral responsibility, interpretive judgment, and human dignity, the paper demonstrates that medical humanities is essential for ethically robust and socially legitimate AI-enabled healthcare.

Published

2025-12-31

How to Cite

Ethical Risks in Medical Artificial Intelligence and the Normative Function of Medical Humanities: A Study of AI and Emerging Medical Technologies. (2025). Life Studies, 1(4), 32-43. https://doi.org/10.71204/sprd8n54
