AI in Medicine: Convenience or Compromise?
As artificial intelligence becomes more deeply ingrained in everyday life, the line between helpful tool and risky shortcut is increasingly blurred.
OpenAI recently announced ChatGPT Health, a new feature of its AI chatbot that asks users to upload their medical records and connect health apps. The announcement quickly brought the ethics of using AI in medical contexts to center stage.
OpenAI says that ChatGPT Health, designed to help users understand and organize their personal health information, will generate more personalized responses to medical questions. Such questions, the company says, account for more than 5 percent of all user messages on the ChatGPT platform.
Illinois Tech Professor of Philosophy Elisabeth Hildt, the director of the university’s Center for the Study of Ethics in the Professions, is skeptical about ChatGPT Health. She says fundamental questions about the feature’s accountability, reliability, and privacy have yet to be answered.
“It’s not a medical tool; it doesn’t align with medical standards,” says Hildt, noting that there are no doctors, institutions, or companies clearly responsible for any consequences of using ChatGPT Health. “There’s no one to be made accountable.”
Because AI outputs are only as good as the data provided, Hildt warns that responses may simply be wrong, or worse, hallucinated from incomplete or biased information. That uncertainty could lead users to make health decisions based on misleading outputs rather than guidance from qualified medical professionals.
Privacy concerns also loom large. While OpenAI claims that “conversations in [ChatGPT] Health are not used to train our foundation models,” Hildt questions how it can function without relying on large volumes of data.
“They have to train the model somehow,” she says. “In order to train the model, they need a lot of data. Where’s the data coming from?”
Even with enhanced safeguards, Hildt says questions remain about how health data may be stored, used, or potentially exposed in the event of a breach.
While she acknowledges the appeal of AI tools, particularly in areas with long wait times or limited access to doctors, Hildt cautions users against viewing ChatGPT Health as a reliable or safe substitute for medical care.
“I wouldn’t go so far as to say, ‘Oh, never, never, ever trust,’” says Hildt. “But on the other hand, I would doubt whether this is really a useful alternative in the long run.”