Like an airline pilot who hardly ever crashes, or a parachute manufacturer that guarantees almost all of its work, a doctor who provides good medical advice most of the time would instill more fear than confidence.
But as artificial intelligence chatbots grow in popularity, that might be just what some consumers are getting — mostly accurate information.
A study by University of Florida Health researchers examined the popular AI chatbot ChatGPT to see how it fared giving advice on common medical questions posed to urologists.
The researchers, urologists themselves, found that the chatbot gave appropriate answers to 39 common queries only about 60% of the time. When it fell short, the study found, the chatbot misinterpreted clinical care guidelines, missed important contextual information, concealed its sources of information or gave faulty references.
ChatGPT's developers do warn users that the AI engine is a work in progress and that such programs are not intended to give medical advice. Still, scientists worry that some consumers will use them in place of a doctor.
The study noted that chatbots deliver answers with absolute certainty, even when spouting incorrect information. That can lull users into a false sense of security.
While the Florida researchers encourage patients to explore medical information outside the doctor's office, they say it is always wise to verify what a chatbot says with a medical professional.
Of course, AI chatbots like ChatGPT and others are getting better. They are, indeed, learning. But the white coat doesn’t fit just yet.