Key takeaways:
- In-context learning helped a large language model achieve accurate glaucoma diagnosis.
- This method could aid in glaucoma detection without conventional training.
DENVER — Using visual prompts helped improve glaucoma detection by a large language model, according to a poster presentation at the Association for Research in Vision and Ophthalmology meeting.
Iris Fang-Yu Hu, MD, and colleagues wanted to find out if giving a general-purpose large language model visual examples could help it detect glaucoma in fundus photos. They compared this "in-context learning" approach in two large language models against prompting without any reference images, according to the study.
“A large language model is actually quite a powerful tool for glaucoma detection,” Hu told Healio. “When simply prompting the model, it might not be sufficient, but by using in-context learning, which is just providing a reference image, it can actually largely improve the diagnostic performance and also the confidence estimation, which is quite important for clinical safety.”
The researchers found that the in-context model performed better on all of their diagnostic metrics than prompting without reference images. Additionally, the model came close to matching the performance of a supervised fine-tuning model that was specifically trained on the glaucoma image dataset used for the study. The in-context learning model required only six labeled examples as a reference, according to the study.
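The study does not publish its prompting code, but the in-context setup it describes — a handful of labeled fundus photos placed in the prompt ahead of the image to be classified — can be sketched as follows. This is a hypothetical illustration: the message structure follows the common chat-style multimodal API format, and the file paths, labels and instruction wording are assumptions, not the authors' actual prompts.

```python
import base64
from pathlib import Path


def encode_image(path):
    """Return a base64 data URL for a fundus photo (hypothetical file paths)."""
    data = base64.b64encode(Path(path).read_bytes()).decode()
    return f"data:image/jpeg;base64,{data}"


def build_in_context_prompt(examples, query_path):
    """Assemble one chat-style multimodal message: each labeled reference
    image precedes the unlabeled query image, mirroring the few-shot
    (six labeled examples) setup the study describes."""
    content = [{
        "type": "text",
        "text": ("You are grading fundus photos for glaucoma. "
                 "First are labeled reference images, then a new image to classify."),
    }]
    for path, label in examples:  # the study used six labeled examples
        content.append({"type": "image_url",
                        "image_url": {"url": encode_image(path)}})
        content.append({"type": "text", "text": f"Label: {label}"})
    # The unlabeled query image, followed by the classification instruction.
    content.append({"type": "image_url",
                    "image_url": {"url": encode_image(query_path)}})
    content.append({
        "type": "text",
        "text": ("Classify this image as 'glaucoma' or 'no glaucoma' "
                 "and state your confidence from 0 to 100."),
    })
    return [{"role": "user", "content": content}]
```

The returned message list would then be sent to a vision-capable model; the zero-shot baseline is the same request with the labeled examples omitted.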
Hu said this method could be helpful by offering early detection of glaucoma.
“It does not require a very technical background,” she said. “It does not require a lot of computational resources. It can be easily adopted by ophthalmology or even the general public.”