Bioethicist Warns About AI and Medical Care

November 22, 2023 at 11:18 a.m.

Artificial intelligence (AI) is increasingly being used in medicine to improve the diagnosis and treatment of disease. It is also being used routinely to spare patients unnecessary screening.

Determining whether a person has type 2 diabetes could be as easy as having them speak a few sentences into their smartphone, according to a new study from Klick Labs that combines voice technology with AI in a major step forward in diabetes detection. The study, published in Mayo Clinic Proceedings: Digital Health, outlines how scientists used six to ten seconds of a person's voice, along with basic health data including age, sex, height, and weight, to create an AI model that can distinguish whether an individual has type 2 diabetes. The model was 89% accurate for women and 86% accurate for men.

For the study, Klick Labs researchers asked 267 people (diagnosed as either nondiabetic or type 2 diabetic) to record a phrase into their smartphone six times daily for two weeks. From the more than 18,000 resulting recordings, scientists analyzed 14 acoustic features for differences between nondiabetic and type 2 diabetic individuals.

“Our research highlights significant vocal variations between individuals with and without type-2 diabetes and could transform how the medical community screens for diabetes,” said study investigator Jaycee Kaufman, who is a research scientist at Klick Labs. “Current methods of detection can require a lot of time, travel, and cost. Voice technology has the potential to remove these barriers entirely.”

The team at Klick Labs looked at a number of vocal features, like changes in pitch and intensity that can’t be perceived by the human ear. Using signal processing, scientists were able to detect changes in the voice caused by type-2 diabetes. Surprisingly, those vocal changes manifested in different ways for males and females, said Kaufman.
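
Klick Labs has not published its code, but the general recipe (extract acoustic statistics from a short recording, append basic health data, and train a classifier) can be sketched. The Python below is an illustrative sketch only, assuming the librosa and scikit-learn libraries; the four features and logistic-regression model are simplified stand-ins for the study's 14 acoustic features and actual model.

    # Illustrative sketch of a voice-based screening pipeline; NOT Klick Labs' model.
    import numpy as np
    import librosa
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def acoustic_features(wav_path):
        """Simple pitch and intensity statistics from a 6-10 second recording."""
        y, sr = librosa.load(wav_path, sr=16000)
        f0, voiced, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)  # per-frame pitch
        f0 = f0[voiced]                                            # voiced frames only
        rms = librosa.feature.rms(y=y)[0]                          # per-frame intensity
        return np.array([f0.mean(), f0.std(), rms.mean(), rms.std()])

    def build_row(wav_path, age, sex, height_cm, weight_kg):
        """Combine acoustic statistics with the basic health data the study used."""
        return np.concatenate([acoustic_features(wav_path),
                               [age, sex, height_cm, weight_kg]])

    # X: one row per recording; y: 1 = type 2 diabetes, 0 = nondiabetic
    model = make_pipeline(StandardScaler(), LogisticRegression())
    # model.fit(X, y)  # the study reported separate results for women and men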


No Protections in Place for AI

While AI in medicine will undoubtedly improve diagnosis and treatment, some AI medical devices can also harm patients and worsen health inequities if they are not designed, tested, and used with care, according to an international AI task force. Jonathan Herington, a member of the task force, helped lay out recommendations on how to ethically develop and use AI medical devices in two papers published in the Journal of Nuclear Medicine. The task force called for increased transparency about the accuracy and limits of AI and outlined ways to ensure that all individuals have access to AI medical devices that work for them, regardless of their race, ethnicity, gender, or wealth.

While the burden of proper design and testing falls to AI developers, healthcare providers are ultimately responsible for using AI properly and shouldn't rely too heavily on AI predictions when making patient care decisions. "There should always be a human in the loop," said Herington, an assistant professor of Health Humanities and Bioethics at the University of Rochester Medical Center (URMC) in New York. "Clinicians should use AI as an input into their own decision-making, rather than replacing their decision-making."

In practice, this requires that doctors truly understand how a given AI medical device is intended to be used, how well it performs at that task, and what its limitations are. Physicians must weigh the relative risks of false positives versus false negatives for a given situation, all while taking structural inequities into account. When using an AI system to identify probable tumors in PET scans, for example, healthcare providers must know how well the system performs at identifying that specific type of tumor in patients of the same sex, race, and ethnicity as the patient being scanned.

"What that means for the developers of these systems is that they need to be very transparent," said Herington. According to the task force, it's up to the AI developers to make accurate information about their medical device's intended use, clinical performance, and limitations readily available to users. One way they recommend doing that is to build alerts right into the device or system that informs users about the degree of uncertainty of the AI's predictions.

It's not enough to simply validate algorithms used by a device or system. AI medical devices should be tested in so-called "silent trials," meaning their performance would be evaluated by researchers on real patients in real-time, but their predictions would not be available to the healthcare provider or applied to clinical decision-making.
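
In code terms, a silent trial amounts to scoring real cases in real time and logging the outputs for later evaluation, while returning nothing to the care team. A minimal sketch, with hypothetical names throughout:

    # Minimal sketch of a "silent trial" wrapper; names and format are hypothetical.
    import csv
    import datetime

    def silent_trial_record(case_id, model, features, log_path="silent_trial.csv"):
        """Score a case and log the prediction without exposing it to clinicians."""
        prob = float(model.predict_proba([features])[0][1])
        with open(log_path, "a", newline="") as f:
            csv.writer(f).writerow([case_id, datetime.datetime.now().isoformat(), prob])
        # Deliberately returns nothing: predictions stay out of clinical decisions.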

"A concern is that these high-tech, expensive systems would be deployed in really high-resource hospitals, and improve outcomes for relatively well-advantaged patients," said Herington. "Patients in under-resourced or rural hospitals wouldn't have access to them or would have access to systems that make their care worse because they weren't designed for them," said Herington.

Currently, AI medical devices are being trained on datasets in which Latino and Black patients are underrepresented, meaning the devices are less likely to make accurate predictions for patients from these groups. To avoid deepening health inequities, developers must ensure their AI models are calibrated for all racial and gender groups by training them on datasets that represent all of the populations the device or system will ultimately serve, according to the task force.
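
One concrete way to check for that kind of gap is to break a model's performance out by subgroup rather than reporting a single overall number. A sketch using pandas and scikit-learn; the column names ("label", "pred_prob") are assumptions for this example:

    # Per-group performance audit; assumes each subgroup contains both classes.
    import pandas as pd
    from sklearn.metrics import brier_score_loss, roc_auc_score

    def audit_by_group(df, group_col):
        """Report discrimination (AUC) and calibration (Brier score) per subgroup."""
        rows = []
        for group, sub in df.groupby(group_col):
            rows.append({
                group_col: group,
                "n": len(sub),
                "auc": roc_auc_score(sub["label"], sub["pred_prob"]),
                "brier": brier_score_loss(sub["label"], sub["pred_prob"]),
            })
        return pd.DataFrame(rows)

    # audit_by_group(results, "race_ethnicity")  # flag groups where metrics lag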

Though these recommendations were developed with a focus on nuclear medicine and medical imaging, Herington believes they can and should be applied to AI medical devices broadly. "The systems are becoming ever more powerful all the time and the landscape is shifting really quickly," said Herington. "We have a rapidly closing window to solidify our ethical and regulatory framework around these things."


AI Changing Surgical Procedures and Health Screenings

AI is not just a futuristic concept, but a present-day game-changer in surgical care. "Artificial intelligence is poised to transform surgery in the same way that the use of anesthesia, the discovery of antibiotics, and the introduction of minimally invasive surgery have altered surgical care," said Dr. Danielle S. Walsh, a professor and vice chair of surgery for quality and process improvement at the University of Kentucky in Lexington.

One tangible example of how AI technology can make surgical procedures safer is gallbladder removal. The gallbladder connects to a system of ducts that branch, kind of like the branches of a tree. "There's one branch that goes over the gallbladder, and out of all the different branches, you have to find exactly the right one," said Dr. Walsh. "One of the mistakes that can happen in surgery is somebody cuts the wrong branch. And if you were a surgeon in the operating room about to cut the wrong one, you might get a red flashing signal warning you are about to cut the wrong structure."

AI is also being harnessed by some scientists to predict which molecules can best treat illnesses and to quickly screen existing medicines for new applications. Researchers reporting in ACS Central Science used one such deep learning algorithm and found that dihydroartemisinin (DHA), an antimalarial drug derived from a traditional Chinese medicine, may also be effective against osteoporosis.


John Schieszer is an award-winning national journalist and radio and podcast broadcaster of The Medical Minute. He can be reached at medicalminutes@gmail.com.

