Please use this identifier to cite or link to this item: http://hdl.handle.net/1942/40537
Title: A generalizable deep learning regression model for automated glaucoma screening from fundus images
Authors: Hemelings, Ruben
Elen, Bart
Schuster, Alexander K.
Blaschko, Matthew B.
Barbosa-Breda, Joao
Hujanen, Pekko
Junglas, Annika
Nickels, Stefan
White, Andrew
Pfeiffer, Norbert
Mitchell, Paul
De Boever, Patrick
Tuulonen, Anja
Stalmans, Ingeborg
Issue Date: 2023
Publisher: NATURE PORTFOLIO
Source: npj Digital Medicine, 6 (1) (Art. No. 112)
Abstract: A plethora of classification models for the detection of glaucoma from fundus images have been proposed in recent years. Often trained with data from a single glaucoma clinic, they report impressive performance on internal test sets but tend to struggle to generalize to external sets. This performance drop can be attributed to data shifts in glaucoma prevalence, fundus camera, and the definition of the glaucoma ground truth. In this study, we confirm that a previously described regression network for glaucoma referral (G-RISK) obtains excellent results in a variety of challenging settings. Thirteen different data sources of labeled fundus images were utilized. The data sources include two large population cohorts (the Australian Blue Mountains Eye Study, BMES, and the German Gutenberg Health Study, GHS) and 11 publicly available datasets (AIROGS, ORIGA, REFUGE1, LAG, ODIR, REFUGE2, GAMMA, RIM-ONEr3, RIM-ONE DL, ACRIMA, PAPILA). To minimize data shifts in input data, a standardized image processing strategy was developed to obtain 30-degree disc-centered images from the original data. A total of 149,455 images were included for model testing. Areas under the receiver operating characteristic curve (AUC) for the BMES and GHS population cohorts were 0.976 [95% CI: 0.967-0.986] and 0.984 [95% CI: 0.980-0.991] at participant level, respectively. At a fixed specificity of 95%, sensitivities were 87.3% and 90.3%, respectively, surpassing the minimum criterion of 85% sensitivity recommended by Prevent Blindness America. AUC values on the 11 publicly available datasets ranged from 0.854 to 0.988. These results confirm the excellent generalizability of a glaucoma risk regression model trained with homogeneous data from a single tertiary referral center. Further validation using prospective cohort studies is warranted.
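Note: The abstract's headline metrics (AUC with a 95% confidence interval, and sensitivity at a fixed 95% specificity) can be reproduced from per-participant risk scores and labels. Below is a minimal sketch of such an evaluation, not the authors' code; the synthetic `y_true`/`y_score` arrays and the percentile-bootstrap CI are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def sensitivity_at_specificity(y_true, y_score, target_specificity=0.95):
    """Sensitivity at the ROC operating point whose specificity is
    at least `target_specificity` (e.g., the fixed 95% in the paper)."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    valid = (1.0 - fpr) >= target_specificity
    return tpr[valid].max()  # roc_curve always yields the (0, 0) point

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """AUC with a percentile-bootstrap CI (an assumed method; the paper
    does not specify how its CIs were derived)."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample participants with replacement
        if len(np.unique(y_true[idx])) < 2:
            continue  # AUC is undefined when one class is missing
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y_true, y_score), (lo, hi)

# Synthetic placeholder data, not study data.
y_true = np.array([0, 0, 0, 1, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.1, 0.3, 0.2, 0.8, 0.7, 0.9, 0.4, 0.6, 0.2, 0.85])
auc, (ci_lo, ci_hi) = bootstrap_auc_ci(y_true, y_score)
print(f"AUC {auc:.3f} [95% CI: {ci_lo:.3f}-{ci_hi:.3f}]")
print(f"Sensitivity @ 95% specificity: "
      f"{sensitivity_at_specificity(y_true, y_score):.1%}")
```

Reporting the operating point at a pre-specified specificity, rather than at the Youden optimum, mirrors screening practice, where the acceptable false-positive rate is fixed in advance.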
Notes: Hemelings, R (corresponding author), Katholieke Univ Leuven, Dept Neurosci, Res Grp Ophthalmol, Herestr 49, B-3000 Leuven, Belgium.; Hemelings, R (corresponding author), Flemish Inst Technol Res VITO, Boeretang 200, B-2400 Mol, Belgium.
ruben.hemelings@kuleuven.be
Document URI: http://hdl.handle.net/1942/40537
ISSN: 2398-6352
e-ISSN: 2398-6352
DOI: 10.1038/s41746-023-00857-0
ISI #: 001006130000001
Rights: Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/
Category: A1
Type: Journal Contribution
Appears in Collections:Research publications

Files in This Item:
File: A generalizable deep learning regression model for automated glaucoma screening from fundus images.pdf
Description: Published version
Size: 15.94 MB
Format: Adobe PDF

Web of Science Citations: 6 (checked on Apr 22, 2024)
