Anyone who has ever walked out of a doctor’s office feeling confused or overwhelmed will appreciate the work Professor Richard Zraick is doing to make patient education more accessible.
“Health literacy surveys have revealed that 1 in 3 adults in the U.S. has difficulty understanding basic health information,” Zraick says.
Zraick is a leading expert and advocate for health literacy in the discipline of communication sciences and disorders. In one of his research mentoring classes this fall, he and his students in the School of Communication Sciences and Disorders are exploring whether the artificial intelligence (AI) chatbot ChatGPT can improve how healthcare information is created, conveyed and understood.
This is one of many projects at UCF that are unlocking the future of AI.
The group is gathering existing materials from key websites that address common medical conditions faced by patients with communication disorders — information ranging from hearing loss to swallowing challenges to voice disorders. The content of this material is then entered into ChatGPT using prompts — text-based input, such as a question or instruction. Students develop and refine prompts that seek to simplify the language, then apply readability formulas to assess whether the new text is actually more readable.
Upon completion of the project, the team will have evaluated and revised about three dozen web-based patient education documents.
They plan to submit their findings to a peer-reviewed journal next semester. The goal is to better understand whether AI can be an effective tool for simplifying complex medical information and improving health literacy and patient education. Tools like ChatGPT, a large language model developed by OpenAI, may offer healthcare professionals new ways to efficiently deliver patient education materials. The technology may not only streamline the process but also enhance the clarity of information delivered to patients, improving their understanding of their health conditions and treatment plans and ultimately leading to better health outcomes.
One of the most promising applications of ChatGPT in patient education is its ability to simplify medical jargon and make health information more accessible to the public, Zraick says.
“If you give AI, for example ChatGPT, the correct prompt, it will either create, edit or suggest revisions to an existing document that you might be trying to create for a patient, or that exists for a patient or their family.”
This ability could allow healthcare professionals to quickly produce patient-friendly materials that meet readability standards recommended by institutions like the Institute of Medicine, which suggests health information be written at a fifth- or sixth-grade reading level.
AI also has clear advantages in terms of speed. “You can ask ChatGPT to give you a script…that somebody with limited health literacy could understand, and it will do that in 20 seconds,” Zraick says.
He emphasizes that while AI serves as a helpful starting point, students still need to verify that the information is accurate before applying it in their clinical interactions. Tools like ChatGPT offer efficiency, but they are not flawless, and the role of the content expert remains crucial.
“We are the content experts, so I never trust ChatGPT 100%, but it’s a starting point. And then I look at it and review it for content,” he says.
Human review ensures that the information is not only accurate, but also contextually appropriate for the intended audience.
Beyond simplifying language, AI can assist in evaluating the readability of existing documents.
“There’s no one readability formula that captures all kinds of documents. We usually use more than one formula to get a variety of metrics, and they tend to agree with each other,” Zraick says.
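The kind of multi-formula check Zraick describes can be sketched in a few lines of Python. The example below is a minimal illustration, not the team’s actual tooling; it assumes two of the most common metrics (Flesch Reading Ease and Flesch-Kincaid Grade Level) and uses a rough vowel-group heuristic to count syllables, where real readability tools estimate syllables more carefully.

```python
# Minimal sketch of scoring a document with two readability formulas.
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels,
    # discounting a likely-silent trailing 'e'.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_metrics(text: str) -> dict:
    """Return Flesch Reading Ease (higher = easier) and
    Flesch-Kincaid Grade Level (lower = easier)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    asl = len(words) / len(sentences)   # average sentence length
    asw = syllables / len(words)        # average syllables per word
    return {
        "reading_ease": 206.835 - 1.015 * asl - 84.6 * asw,
        "grade_level": 0.39 * asl + 11.8 * asw - 15.59,
    }
```

Running both formulas on a document before and after simplification gives exactly the kind of before/after comparison the students use to judge whether a revision is, in fact, more readable.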
Materials that are easier to understand are also more actionable, increasing the likelihood that patients will follow through with medical advice.
For Kelly Clevenger, a School of Communication Sciences and Disorders senior, the project offered a deeper dive into AI, a technology she had previously used only superficially for tasks like checking grammar on class assignments. As part of the project, she attended a “prompt engineering” workshop designed to fine-tune her ability to leverage ChatGPT’s functionality.
“People kind of think it just runs itself, and I think something that people should realize is that you really need to have a good idea of exactly how you want it to work before you even start prompting,” Clevenger says. “If you don’t give it specific enough direction, it won’t give you exactly what you want.”
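Clevenger’s point about specificity can be made concrete with a hypothetical prompt template. The wording below is illustrative only, not a prompt the class actually uses; the target reading level simply echoes the Institute of Medicine guidance mentioned earlier.

```python
# Hypothetical illustration: a vague prompt versus one that spells out
# audience, reading level, and constraints for the model.
def build_prompt(source_text: str, specific: bool = True) -> str:
    if not specific:
        # Underspecified: the model must guess audience and format.
        return f"Make this easier to read:\n\n{source_text}"
    # Specific: states the target level, forbids invented facts,
    # and says what to do with unavoidable jargon.
    return (
        "Rewrite the patient education text below in plain language.\n"
        "- Target a 5th-6th grade reading level.\n"
        "- Keep every medical fact; do not add new claims.\n"
        "- Define any unavoidable medical term in parentheses.\n\n"
        f"{source_text}"
    )
```

The second version leaves far less for the model to guess, which is the practical lesson of the prompt engineering workshop: decide exactly how you want the output to look before you start prompting.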
She notes that while the tool isn’t perfect, it significantly cuts down the time required, enabling researchers to focus on higher-level analysis and interpretation.
This is not Zraick’s first foray into exploring the use of AI in health communications. This year, he and colleagues published an article in a journal of the American Speech-Language-Hearing Association examining the use of ChatGPT as a tool to teach students in communication sciences and disorders how to write in plain language. The researchers believe that AI tools hold promise; the technology can enhance students’ abilities and offer an interactive environment that encourages active participation and critical thinking.
AI adds a new element for Zraick, who has taught students about health literacy for decades. Some of his courses have students write descriptions of medical concepts in plain language and participate in role-playing exercises that stress clear communication. His ongoing research is assessing how effective this work is and whether teaching new graduate clinicians to use plain language will enhance the clarity and actionability of their patient reports.
Students will one day serve diverse audiences as speech-language pathologists and audiologists, and there’s a difference between writing for professionals and for patients, Zraick says.
“If a student is writing a report or a treatment update for another healthcare provider, it’s a technical writing exercise,” he says. “But for patients, they need a plain language summary.”
Clevenger also underscores the challenges of using AI in research.
“You can’t just throw any dataset at it and expect good results,” she says. “We’ve been working on refining the prompts we use to get better, more accurate outputs from the model. It’s a learning process, but the more we work with it, the better it gets.”
As the use of AI in healthcare continues to expand, the focus will likely shift toward refining these tools to ensure even greater accuracy and relevance, Zraick says.
“Clinicians and educators have more tools to fine-tune skills and expand the skill set of a speech-language pathologist, or an audiologist, beyond the core content knowledge that they have to have,” he says. “It’s like practicing for the 22nd century, not just the 21st century.”