Jubayer Hossain

Biomedical Researcher

Safe and Ethical AI: WHO Calls for Safe, Ethical Use of AI Tools for Health Research


January 09, 2024

Highlights

  • The use of AI in health research has the potential to revolutionize data analysis, predictive modeling, and personalized medicine.
  • Ethical considerations are crucial in ensuring the responsible and safe use of AI in health research. This includes protecting patient privacy, obtaining informed consent, and addressing biases and fairness in AI algorithms.
  • Governance frameworks are needed to guide the development and deployment of AI in health research. These frameworks should involve multidisciplinary collaboration, transparency, and accountability.
  • The World Health Organization (WHO) emphasizes the importance of ethical and responsible AI use in health research. It advocates for the development of guidelines and standards to address ethical challenges and promote the equitable distribution of AI benefits.
  • Collaboration between researchers, policymakers, and stakeholders is essential to establish guidelines and regulations that promote the safe and ethical use of AI in health research.

Introduction

The World Health Organization (WHO) is urging caution in the use of large language models (LLMs), such as ChatGPT, Bard, and BERT, for health-related purposes. While recognizing their potential to support health needs, WHO emphasizes the need for careful examination of risks to protect human well-being, safety, autonomy, and public health.
The rapid adoption of LLMs for health information access, decision support, and diagnostics raises concerns about potential errors, harm to patients, and erosion of trust in AI. WHO stresses the importance of exercising caution and adhering to key values like transparency, inclusion, public engagement, expert supervision, and rigorous evaluation.
Concerns include biased training data, potential misinformation, lack of consent for data use, and the risk of disseminating convincing disinformation. WHO recommends rigorous oversight, ethical principles, and clear evidence of benefits before the widespread use of LLMs in routine healthcare.
To address these issues, WHO proposes adherence to ethical principles and appropriate governance, as outlined in their guidance on the ethics and governance of AI for health. The core principles include protecting autonomy, promoting well-being and safety, ensuring transparency, fostering responsibility and accountability, ensuring inclusiveness and equity, and promoting responsive and sustainable AI.
This article explores WHO's comprehensive guidelines on the ethical use of AI in healthcare, emphasizing key principles to ensure the responsible and equitable integration of AI technologies.

Unveiling the Potential

As AI technologies continue to evolve, they offer unparalleled opportunities to augment healthcare providers' capabilities, enhance patient care, and optimize medical decision-making. The ability to provide accurate diagnoses, streamline treatment plans, and support pandemic preparedness underscores the transformative potential of AI in healthcare.

Ethical Concerns in the Age of AI

While the ethical concerns addressed in WHO's guidelines are not exclusive to AI, the advent of this technology introduces novel challenges. Balancing the benefits of AI with ethical considerations is crucial to avoid negative consequences that may arise if ethical principles and human rights obligations are not prioritized.

Empowering Healthcare Stakeholders

AI has the potential to empower healthcare providers by equipping them with valuable tools to improve patient care. However, for this potential to be realized, healthcare workers must receive comprehensive education and training on the safe and effective use of AI systems.

Empowering Patients and Communities

Beyond healthcare providers, AI can also empower patients and communities to take control of their healthcare journey. WHO emphasizes the importance of protecting patient rights and interests, ensuring that AI does not compromise human autonomy, and incorporating AI into health systems in a way that enhances, rather than displaces, human decision-making.

Extending Healthcare Access

In resource-poor countries with limited access to healthcare professionals, AI has the potential to bridge gaps and improve access to essential health services. However, careful consideration must be given to designing AI systems that reflect the diversity of socio-economic and healthcare settings, accompanied by appropriate training in digital skills and community engagement.

Avoiding Biases in AI

To ensure equitable provision of healthcare services, investments in AI and supporting infrastructure should actively avoid encoding biases that could undermine the accessibility and effectiveness of AI technologies, especially in low- and middle-income settings.
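One concrete way to check whether an AI system serves all groups equitably is to compare its performance across demographic groups before deployment. The sketch below is a minimal, hypothetical illustration of such an audit; the group labels, data, and the idea of flagging a gap are invented for the example and are not drawn from WHO guidance.

```python
# Hypothetical sketch: auditing a model's predictions for group-level
# performance gaps. All data and group labels here are illustrative.

def group_accuracies(y_true, y_pred, groups):
    """Return the prediction accuracy for each demographic group."""
    stats = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        correct = sum(1 for i in idx if y_true[i] == y_pred[i])
        stats[g] = correct / len(idx)
    return stats

def accuracy_gap(stats):
    """Gap between the best- and worst-served groups."""
    return max(stats.values()) - min(stats.values())

if __name__ == "__main__":
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 0, 1, 0, 1, 0, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    stats = group_accuracies(y_true, y_pred, groups)
    print(stats)               # -> {'a': 0.75, 'b': 1.0}
    print(accuracy_gap(stats))  # -> 0.25
```

A large gap does not by itself prove bias, but in the spirit of the monitoring WHO calls for, it is a signal that the system's training data and design deserve closer scrutiny before the technology is deployed at scale.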

Key Ethical Principles

Protecting Human Autonomy

The first ethical principle of WHO emphasizes the importance of ensuring that AI does not undermine human autonomy. This involves safeguarding privacy, maintaining confidentiality, and obtaining valid informed consent, all within a robust legal framework for data protection.

Promoting Human Well-being and Safety

This principle prioritizes human well-being and safety, calling for adherence to regulatory requirements to prevent harm. Quality control, ongoing evaluation, and a commitment to preventing mental or physical harm underscore the importance of responsible AI implementation.

Ensuring Transparency and Explainability

To build trust and facilitate meaningful public engagement, AI technologies must be transparent and explainable. This involves providing sufficient information for public scrutiny, ensuring that the technology is understandable to various stakeholders, and promoting open dialogue about AI's design and deployment.
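Explainability can be pursued in many ways; one widely used, model-agnostic idea is permutation importance: shuffle one input feature and measure how much the model's accuracy drops, revealing which inputs the model actually relies on. The sketch below is a hypothetical, dependency-free illustration of that idea; the toy model and data are invented for the example and stand in for any real health-AI system.

```python
import random

def permutation_importance(model, X, y, feature, metric, n_repeats=10, seed=0):
    """Average drop in the metric when one feature's column is shuffled.
    Larger drops suggest the model leans more heavily on that feature."""
    rng = random.Random(seed)
    baseline = metric(y, [model(row) for row in X])
    drops = []
    for _ in range(n_repeats):
        shuffled = [row[:] for row in X]          # copy each row
        col = [row[feature] for row in shuffled]  # extract the column
        rng.shuffle(col)                          # break its link to the labels
        for row, value in zip(shuffled, col):
            row[feature] = value
        drops.append(baseline - metric(y, [model(row) for row in shuffled]))
    return sum(drops) / n_repeats

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy model that only looks at feature 0: shuffling feature 1 changes
# nothing, so its measured importance is exactly 0.0.
model = lambda row: 1 if row[0] > 0 else 0
X = [[1, 5], [-1, 5], [1, -5], [-1, -5]]
y = [1, 0, 1, 0]
print(permutation_importance(model, X, y, 0, accuracy))  # relied-upon feature
print(permutation_importance(model, X, y, 1, accuracy))  # ignored feature -> 0.0
```

Summaries like this do not make a model fully transparent, but they give clinicians, regulators, and the public a concrete handle for the kind of scrutiny and open dialogue the principle describes.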

Fostering Responsibility and Accountability

Stakeholders involved in AI development and deployment must uphold responsibilities and be held accountable for the outcomes. Implementing a "human warranty" through patient and clinician evaluation ensures that AI technologies meet the necessary criteria and are used under appropriate conditions.

Ensuring Inclusiveness and Equity

To prevent biases that could perpetuate existing disparities, AI technologies should be designed for the widest possible equitable use, irrespective of demographic characteristics. Vigilant monitoring and evaluation are essential to identify and rectify any disproportionate effects on specific groups.

Promoting AI that is Responsive and Sustainable

Continuous assessment and transparency in AI applications are vital for responsiveness. Furthermore, AI systems should align with global sustainability efforts, minimizing environmental impact and addressing potential disruptions in the workplace, including necessary training for healthcare workers.

Conclusion

The ethical guidelines of WHO provide a roadmap for the responsible integration of AI into healthcare, emphasizing the need for a collective effort to address ethical challenges. As we navigate the future of healthcare, adherence to these principles ensures that AI remains a powerful tool in advancing patient care, public health, and global well-being while safeguarding human autonomy, equity, and ethical values.