Background
The advent of diagnostic methods such as optical coherence tomography (OCT) and standard automated perimetry has greatly improved the detection and monitoring of ocular diseases. Although the data generated by these techniques offer valuable insights into retinal structural damage and functional vision loss, they are inherently susceptible to artifacts and noise, which can lead to imprecise and unreliable diagnostic results.
For instance, clinicians commonly use circumpapillary retinal nerve fiber layer thickness (RNFLT), obtained from spectral-domain OCT, as the principal structural metric for diagnosing and tracking glaucoma. However, imaging artifacts, often caused by segmentation failures under poor image quality or image defects, frequently undermine the clinical value of RNFLT. Likewise, visual field (VF) testing, such as the 24-degree test, is a standard functional diagnostic measure for glaucoma, yet VF noise causes substantial test-retest variability and limits its usefulness.
Moreover, the growing volume of noisy patient data poses a significant challenge to the viability and reliability of artificial intelligence (AI)-based diagnostic systems, which must be trained and evaluated on high-quality, clean medical data. Both clinical practice and AI system development should therefore rely on clean, high-quality data, fostering a reliable and trusted ecosystem among practitioners, patients, and AI systems.
What We Do
Our team develops AI-based data cleaning technologies that provide cleaner data and improve diagnostic outcomes for eye diseases. For instance, we have developed a deep learning framework called RNFLTCorrect to correct OCT artifacts. The system was built and evaluated on a large population of 24,257 glaucoma patients, encompassing 111,966 RNFLT maps, and has demonstrated high accuracy in artifact correction, improving VF prediction and progression forecasting in glaucoma. In addition, we are developing AI methods to reduce noise in 24-degree VF data, which can substantially strengthen the structure-function correlation and help clinicians diagnose and monitor glaucoma more effectively. Check out our open-source code repositories on our Harvard Ophthalmology AI Lab GitHub account.
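To make the artifact-correction idea concrete, the sketch below shows one generic way such a pipeline could be structured: flag implausible RNFLT pixels as artifacts and inpaint them with a small encoder-decoder network. This is an illustrative sketch only, not the RNFLTCorrect implementation; the model architecture, the 25-micron threshold, the 224 x 224 map size, and all function names are assumptions made for demonstration (see our GitHub repositories for the actual code).

```python
# Illustrative sketch only: a minimal masked-inpainting model for 2D RNFLT maps.
# NOT the RNFLTCorrect implementation; thresholds, map size, and names are assumed.
import torch
import torch.nn as nn


def artifact_mask(rnflt_map: torch.Tensor, floor_um: float = 25.0) -> torch.Tensor:
    """Flag implausibly thin pixels (e.g., from segmentation failures) as artifacts.
    The 25-micron floor is an illustrative threshold, not a clinical standard."""
    return (rnflt_map < floor_um).float()


class InpaintingNet(nn.Module):
    """Small encoder-decoder that fills masked (artifact) regions of an RNFLT map."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, rnflt_map: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # Two input channels: the map with artifact pixels zeroed out, and the mask.
        x = torch.cat([rnflt_map * (1 - mask), mask], dim=1)
        filled = self.decoder(self.encoder(x))
        # Keep measured pixels as-is; replace only the flagged artifact regions.
        return rnflt_map * (1 - mask) + filled * mask


# Example usage on a single synthetic 224 x 224 map (thickness values in microns):
model = InpaintingNet()
rnflt = torch.rand(1, 1, 224, 224) * 150.0
mask = artifact_mask(rnflt)
corrected = model(rnflt, mask)  # untrained here; training would minimize
                                # reconstruction error on artifact-free maps
```

In practice, a correction model along these lines would be trained so that reconstructions of deliberately masked, artifact-free maps match the originals, and then applied to maps with real artifacts.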
Selected Publications
- Shi, M., Lokhande, A., Fazli, M.S., Sharma, V., Tian, Y., Luo, Y., Pasquale, L.R., Elze, T., Boland, M.V., Zebardast, N. and Friedman, D.S., 2023. Artifact-Tolerant Clustering-Guided Contrastive Embedding Learning for Ophthalmic Images in Glaucoma. IEEE Journal of Biomedical and Health Informatics, 27(9), pp. 4329-4340.
- Shi, M., Tian, Y., Luo, Y., Elze, T. and Wang, M., 2024. RNFLT2Vec: Artifact-corrected representation learning for retinal nerve fiber layer thickness maps. Medical Image Analysis, p. 103110.
- Shi, M., Sun, J.A., Lokhande, A., Tian, Y., Luo, Y., Elze, T., Shen, L.Q. and Wang, M., 2023. Artifact correction in retinal nerve fiber layer thickness maps using deep learning and its clinical utility in glaucoma. Translational Vision Science & Technology, 12(11), pp. 12-12.