Healthcare experts and doctors from various countries have raised concerns about the potential dangers of artificial intelligence (AI) and its impact on public health. While AI holds promise in transforming healthcare with improved disease diagnosis, better treatment methods, and increased access to care, its development must be regulated to prevent negative consequences.


In an article published in BMJ Global Health, health professionals from the UK, US, Australia, Costa Rica, and Malaysia highlighted the risks associated with AI in medicine and healthcare. These risks include the possibility of AI errors causing harm to patients, concerns about data privacy and security, and the exacerbation of social and health inequalities.


One example cited was an AI-driven pulse oximeter that inaccurately estimated blood oxygen levels in patients with darker skin, leading to undertreatment of hypoxia in these patients.


The experts also emphasized the broader threats posed by AI to human health and even existence. They raised concerns about AI's potential for control and manipulation of people, the use of lethal autonomous weapons, and the mental health effects of widespread unemployment resulting from AI systems replacing human workers. Additionally, they highlighted the role of AI-driven information systems in undermining democracy, eroding trust, and fostering social division and conflict, with subsequent implications for public health.


The loss of jobs due to the widespread implementation of AI technology was identified as another threat. Estimates suggest that tens to hundreds of millions of jobs could be affected in the next decade. While the elimination of repetitive, dangerous, and unpleasant work may have benefits, unemployment is strongly associated with adverse health outcomes and behaviors. The psychological and emotional impact of a world where work is scarce or unnecessary needs careful consideration, alongside the development of policies to mitigate the association between unemployment and ill health.


The experts further expressed concerns about the potential dangers of self-improving artificial general intelligence, which could surpass human capabilities and pose risks to human well-being. They stressed the urgent need for effective regulation to avoid harm and called for a moratorium on the development of self-improving artificial general intelligence until appropriate regulations are in place.


In the UK, a coalition of health experts, independent fact-checkers, and medical charities urged the government to address health misinformation in the forthcoming Online Safety Bill. They emphasized the need for internet companies to adopt clear policies and consistent approaches for identifying and addressing harmful health misinformation on their platforms. The coalition requested the inclusion of a legally binding duty in the bill to ensure that major social networks implement rules governing the moderation of health-related misinformation.


The chief executive of Full Fact, a fact-checking organization, emphasized the significance of this amendment, stating that without it, the Online Safety Bill would be ineffective in combating harmful health misinformation.