AI & Healthcare Ethics: Unpacking medical accountability through Islamic bioethics – an exclusive interview with a leading scholar


Artificial intelligence (AI) is rapidly transforming healthcare by improving diagnostic accuracy, optimizing treatment plans, and streamlining hospital management. However, as AI technology becomes increasingly integrated into medical practice, ethical concerns surrounding accountability and the physician-patient relationship are emerging. In an exclusive interview with the Editor of The Muslim News, Ahmed J Versi, at the WISH summit in Doha, Professor Mohammed Ghaly, a prominent scholar of Islamic bioethics and Head of the Research Centre for Islamic Legislation & Ethics (CILE) at Hamad Bin Khalifa University, Qatar, explores the implications of AI in healthcare from an Islamic ethical standpoint, addressing both the potential benefits and the challenges these innovations present in medical practice.

AI in Healthcare: Benefits & Challenges

Professor Ghaly began by acknowledging the growing presence of AI across various sectors, healthcare included. He observed, “AI now is getting into all fields in our lives, from entertainment and cinema videography to warfare. Healthcare is no exception.” AI is revolutionizing key areas such as hospital management, drug selection, and treatment planning, promising to enhance efficiency, safety, and precision in medical practice.

“There are many benefits,” Ghaly emphasized. “For instance, AI can minimize waiting times, increase accuracy, and save physicians time by automating routine tasks, allowing them to focus more on interacting with patients. Some writers have even argued that AI could help make the art of medicine more humane again, by enhancing the human aspect of healthcare.”

Despite these advantages, Ghaly cautioned against overlooking the ethical challenges, particularly when it comes to accountability. “Historically, the physician-patient relationship has been central to medical ethics. Physicians have always held control over the tools they use,” he explained. “However, with AI systems that can outperform humans in many aspects—being ‘more intelligent, more efficient, more precise, cheaper, safer’—a new challenge arises: Who is held responsible when things go wrong?”


AI’s Black Box: Navigating Shifting Medical Accountability

Professor Ghaly highlighted a critical issue in AI systems: the “black box.” In contrast to traditional medical tools, where the physician has direct control over and understanding of the decision-making process, AI often operates in a manner that is opaque to humans. “We have input and output, but we do not know what happens in between. Why did the machine say this X-ray shows cancer when, to the human eye, it doesn’t appear so?” Ghaly questioned.

This lack of transparency poses a dilemma when errors occur. If AI misdiagnoses a condition, for instance, who is held accountable? “The physician could argue, ‘I didn’t make the decision; the machine did,’” Ghaly explained. “But the machine has no accountability—it’s not like we can suspend the AI for two months as a disciplinary measure.”

In Islamic ethics, Ghaly noted, accountability rests solely with humans: “Machines, plants, and animals cannot bear moral responsibility.” He further explained that even though AI may make decisions autonomously, those decisions are still the result of human involvement—whether through programming, data selection, or training. Responsibility may therefore shift from the physician to the collective group of individuals involved in creating and implementing the AI, such as data scientists or programmers.

This shift in accountability brings complexity, particularly in the context of medical malpractice. Ghaly foresees a scenario where legal responsibility no longer falls exclusively on healthcare professionals. “We may see medical malpractice cases where the responsible party is not a doctor but a data scientist or an AI programmer,” he predicted.

He drew a parallel with broader principles of organizational leadership. “Like anything else, the management is always responsible because they are the boss. Even if they don’t make the mistakes themselves, they are accountable for what happens,” he explained. In the context of AI, the developers responsible for creating and overseeing these systems would similarly be held accountable when something goes wrong.


AI in Patient Care: The Ethical Concerns

One of the central ethical concerns raised by Ghaly is the replacement of human interaction with AI, particularly in sensitive areas of patient care. He acknowledged that AI could assist in treating patients, especially those with complex needs, such as dementia or Alzheimer’s. Socially assistive robots, for example, could help patients remember medication times and provide consistent care. However, Ghaly raised an important point: “We need human interaction. Despite our disagreements or frustrations with each other, humans still crave meaningful connection.”

He warned against substituting machines for the human interaction that caregivers provide. “We missed each other during COVID-19 because we need human interaction,” Ghaly said, underscoring the irreplaceable emotional and social dimensions of healthcare.


AI in Medicine: Islamic Ethical Perspectives

From an Islamic perspective, Professor Ghaly emphasized that the ethical challenges posed by AI in healthcare must be understood within the framework of Islamic values. “In the Muslim-majority world, we view things through the lens of our religious value system,” he explained. This framework prioritizes the belief that humans are stewards of their bodies, which ultimately belong to God. As such, regardless of AI’s involvement, certain procedures, including euthanasia, are prohibited in Islam. “Even if the patient and physician consent to euthanasia, the religious framework does not allow it because it is forbidden by God,” Ghaly clarified.

Additionally, Ghaly pointed out that practices such as in vitro fertilization (IVF) also have religious implications. While IVF may be permissible in secular contexts, Islam imposes conditions: the procedure must take place within marriage, and third-party involvement, such as sperm or egg donation, is prohibited.

Professor Ghaly’s insights illustrate the nuanced ethical challenges posed by integrating AI into healthcare, particularly in Muslim-majority societies where religious values significantly shape ethical decision-making. While AI offers numerous benefits, such as improved efficiency and precision, its potential to disrupt traditional accountability models and human interaction in healthcare requires careful consideration. As AI continues to evolve, these discussions will be critical in shaping the future of medical practice.

Photo: Prof Mohammed Ghaly, Head of the Research Centre for Islamic Legislation and Ethics (CILE) at Hamad Bin Khalifa University, Qatar (R), in conversation with The Muslim News Editor, Ahmed J Versi.

Ahmed J Versi,
Editor, The Muslim News

