WHO warns against bias, misinformation in using AI in healthcare

The World Health Organization (WHO) called for caution on Tuesday (May 16) in using artificial intelligence for public healthcare, saying data used by AI to reach decisions could be biased or misused.

The WHO said it was enthusiastic about the potential of AI but had concerns over how it will be used to improve access to health information, as a decision-support tool and to improve diagnostic care.

The WHO said in a statement the data used to train AI may be biased and generate misleading or inaccurate information and the models can be misused to generate disinformation.

It was “imperative” to assess the risks of using generative large language model tools (LLMs), such as ChatGPT, in order to protect and promote human wellbeing and safeguard public health, the U.N. health body said.

The cautionary note comes as artificial intelligence applications rapidly gain popularity, highlighting a technology that could upend the way businesses and society operate.
