WHO warns against bias, misinformation in using AI in healthcare

The World Health Organization (WHO) called for caution on Tuesday (May 16) in using artificial intelligence for public healthcare, saying data used by AI to reach decisions could be biased or misused.

The WHO said it was enthusiastic about the potential of AI but had concerns over how it will be used to improve access to health information, as a decision-support tool and to improve diagnostic care.

In a statement, the WHO said the data used to train AI may be biased, generating misleading or inaccurate information, and that the models can be misused to produce disinformation.

It was “imperative” to assess the risks of using generative large language model (LLM) tools, such as ChatGPT, to protect and promote human wellbeing and safeguard public health, the U.N. health body said.

Its cautionary note comes as artificial intelligence applications are rapidly gaining in popularity, highlighting a technology that could upend the way businesses and society operate.
