AI Holds Promise for Glaucoma, a Leading Global Cause of Blindness

Glaucoma, the second leading cause of blindness worldwide, affects 3.5% of the population aged 40 years or older [1]. In 2010, 60.5 million people were affected by the disease, a number expected to rise to 80 million by 2020 [2]. Sometimes called “the sneak thief of sight”, glaucoma progresses slowly and is largely asymptomatic; as much as 40% of vision can be lost irreversibly before a person notices [3]. Existing treatments can halt the progression of the disease but cannot restore lost vision, so early detection and timely intervention are central concerns in the clinical management of glaucoma.

[Image: human eye with measurements]

Visual field tests map how well patients see across the visual space and are used to diagnose a variety of conditions. For example, the optic nerve damage caused by glaucoma produces characteristic visual field defects in the upper and lower fields of view [4]. While other conditions can affect retinal structures in a similar manner to glaucoma, their impact on vision is often very different. These tests are therefore an integral part of the diagnostic process.

However, because these tests rely exclusively on patient feedback, their results are sensitive to the patient’s alertness. Time of day is known to influence performance, with patients typically scoring better in the morning than right after lunch [5]. As a result, multiple tests may be needed to obtain an accurate measurement of any vision loss.

From a biological point of view, we know there are associations between visual function and retinal structure. This raises an interesting research question: can we estimate visual function directly from structures in the eye that can be imaged non-invasively? The answer is yes: we have discovered that retinal imaging data contains information that can help assess the presence of glaucoma.


IBM Research, in collaboration with New York University, has conducted a study exploring this question with a data-driven approach based on deep learning. Our study estimates the visual field index (VFI) from a single raw 3D optical coherence tomography (OCT) image of the optic nerve with unprecedented accuracy, achieving a Pearson correlation of 0.88 [8]. VFI is a global metric that summarizes the entire visual field, and capturing it accurately with AI lays the groundwork for future technologies that could quickly estimate a patient’s visual function. This could give professionals access to precise information without the multiple, time-intensive tests currently needed when gathering data for a glaucoma diagnosis.
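The Pearson correlation reported above measures the linear agreement between model-estimated and clinically measured VFI values. A minimal sketch of how such a metric is computed is shown below; the VFI values here are purely illustrative, not data from the study.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D sequences."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc = x - x.mean()  # center each series on its mean
    yc = y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical example: VFI measured by visual field tests vs.
# model estimates derived from OCT images (illustrative values only).
measured = [95, 88, 72, 60, 99, 81]
estimated = [93, 85, 75, 58, 97, 84]
print(round(pearson_r(measured, estimated), 2))
```

A correlation of 1.0 would mean the estimates track the measurements perfectly; the study’s reported 0.88 indicates strong, though not perfect, agreement.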

Conventional OCT structural measurements, such as retinal nerve fiber layer (RNFL) thickness and ganglion cell inner plexiform layer (GCIPL) thickness, could not achieve this degree of accuracy, even though both layers are known targets of glaucoma [6-7]. Our study suggests that the structural data captured by OCT contains information highly correlated with functional measurements, which could be extremely useful to professionals as they work toward a diagnosis.

Another important challenge in glaucoma is tracking its rate of progression, which requires careful analysis of data from multiple visits. We have addressed this issue with machine learning [13], showing that visual function test results at future visits can be forecasted. This ability could one day help professionals better predict the onset and progression of the disease and adjust treatments accordingly.
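The published forecasting method is not detailed here; as a simple baseline for intuition only, one can fit a linear trend to a patient’s VFI values from past visits and extrapolate to a future visit. The function and data below are hypothetical and do not reflect the approach in [13].

```python
import numpy as np

def forecast_vfi(visit_times, vfi_values, future_time):
    """Extrapolate VFI via a least-squares linear trend over past visits.

    A naive baseline only -- the published approach may differ.
    """
    t = np.asarray(visit_times, dtype=float)
    v = np.asarray(vfi_values, dtype=float)
    slope, intercept = np.polyfit(t, v, deg=1)
    # Clamp the forecast to the valid VFI range of 0-100%.
    return float(np.clip(slope * future_time + intercept, 0.0, 100.0))

# Hypothetical patient: VFI measured at months 0, 6, 12, and 18,
# declining steadily by 0.25 points per month.
times = [0, 6, 12, 18]
vfi = [92.0, 90.5, 89.0, 87.5]
print(forecast_vfi(times, vfi, future_time=24))  # forecast at month 24
```

Real progression is rarely this linear; the value of a learned model is precisely in capturing patient-specific, nonlinear trajectories that a straight-line fit misses.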

This research will be presented at the ARVO (The Association for Research in Vision and Ophthalmology) annual meeting April 28th – May 2nd in Vancouver, Canada. The IBM Research team, together with New York University, will present seven abstracts in total on various aspects of glaucoma detection and management [8-14].
