
Medical professionals utilizing AI to judge narcotics prescriptions: report

Health agencies and law enforcement are turning to artificial intelligence (AI) in their efforts to combat widespread opioid addiction, according to a report.

Data-driven monitoring systems such as NarxCare offer numerical ratings of patients' medication history that give doctors a rough picture of their addiction risk, but professionals are split on their effectiveness, according to a report from Marketplace.

“We need to see what’s going on to make sure we’re not doing more harm than good,” health economist Jason Gibbons told the outlet. 


An arrangement of pills of the opioid oxycodone-acetaminophen, also known as Percocet, is shown. Tech firms have begun offering addiction warning systems operated through artificial intelligence. (Associated Press)

He added, “We’re concerned that it’s not working as intended, and it’s harming patients.”

AI models produce algorithmic evaluations of individual patients to help professionals gauge their addiction risk.

The scores are drawn from multiple data points, including the number of prescriptions, dosage information and which doctors have previously prescribed for the patient. The ratings are not intended to make the final call on a patient's care, and tech firms urge doctors to use their own judgment alongside the technology.
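The report does not spell out how those data points are combined, and NarxCare's actual model is proprietary. As a purely hypothetical illustration of the general idea, the short Python sketch below folds a few invented prescription-history features and weights into a single capped rating; none of the field names, weights or thresholds come from NarxCare or the Marketplace report.

```python
from dataclasses import dataclass

# Hypothetical illustration only: the real scoring model and its weights are
# proprietary and are not described in the Marketplace report. All features
# and numbers below are invented for the example.

@dataclass
class PrescriptionHistory:
    prescription_count: int   # controlled-substance prescriptions on record
    total_mme_per_day: float  # average daily dose in morphine milligram equivalents
    prescriber_count: int     # distinct prescribing doctors
    pharmacy_count: int       # distinct dispensing pharmacies

def toy_risk_score(history: PrescriptionHistory) -> int:
    """Combine a few prescription-history features into one bounded rating."""
    raw = (
        2.0 * history.prescription_count
        + 0.5 * history.total_mme_per_day
        + 25.0 * history.prescriber_count
        + 25.0 * history.pharmacy_count
    )
    return min(int(raw), 999)  # cap at an arbitrary ceiling, since real scores are bounded

if __name__ == "__main__":
    patient = PrescriptionHistory(
        prescription_count=6, total_mme_per_day=40.0,
        prescriber_count=2, pharmacy_count=1,
    )
    print(toy_risk_score(patient))  # a number a clinician weighs, not acts on alone
```

The design point is that such a score only summarizes a patient's history into one number; the clinician, not the score, is supposed to make the prescribing decision.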


As the artificial intelligence train barrels on with no signs of slowing down (some studies project the AI market will grow by more than 37% per year between now and 2030), the World Health Organization (WHO) has issued an advisory calling for "safe and ethical AI for health."

The World Health Organization logo is seen near its headquarters in Geneva.  (REUTERS/Denis Balibouse/File Photo)

The agency recommended caution when using “AI-generated large language model tools (LLMs) to protect and promote human well-being, human safety and autonomy, and preserve public health.”

While WHO acknowledges “significant excitement” about the potential to use these chatbots and algorithms for health-related needs, the organization underscores the need to weigh the risks carefully.


“This includes widespread adherence to key values of transparency, inclusion, public engagement, expert supervision and rigorous evaluation,” it said.

Fox News Digital’s Melissa Rudy contributed to this report.
