The World Health Organization (WHO) has published new guidance on the ethics and governance of large multi-modal models (LMMs) of artificial intelligence (AI), aiming to promote their appropriate use and to safeguard public health. The guidance, released on 18 January 2024, is directed at governments, technology companies, and healthcare providers.
According to the WHO, large multi-modal models of AI have extensive potential applications in the health sector, including scientific research, drug development, diagnosis, clinical care, medical and nursing education, and administrative tasks such as managing electronic medical records. Patients can also benefit from AI by accessing information on symptoms and treatment modalities.
Despite these promising applications, the WHO highlights risks associated with the use of AI, such as the generation of false, inaccurate, or incomplete outputs, which could harm individuals who make health-related decisions based on that information. The WHO also warns that AI-generated outputs can carry bias and distortion, particularly when models are trained on poor-quality or biased data.
In response to these concerns, the WHO has put forward a series of recommendations. Governments are urged to establish standards for the development and deployment of AI, while developers are advised to involve potential users and other stakeholders from the design phase onward, so that AI strengthens the capacity of health systems and serves patients' interests.
This new guidance from the WHO underscores the growing importance of ethical considerations and governance in the rapidly evolving field of AI, particularly within healthcare. As the technology continues to advance, stakeholders must prioritize its responsible and ethical use to protect the well-being of individuals and the integrity of healthcare systems.