Rodney LaLonde

Analysis of Video Retinal Angiography with Deep Learning and Eulerian Magnification

Updated: Jul 30, 2020

This study was published in the journal Frontiers in Computer Science (2020).


Objective

The aim of this research is to present a novel computer-aided decision support tool for analyzing, quantifying, and evaluating retinal blood vessel structure from fluorescein angiogram (FA) videos.

Fundus camera (SPECTRALIS OCT). Courtesy of Heidelberg Engineering Inc.

FA video frames showing the flow of blood through a patient's vasculature; changes in vessel brightness correspond to blood flow. Frames (non-sequential) are ordered from the top-left corner to the bottom-right corner.

FA image of a subject with diabetes showing significant abnormalities.


Methods

The proposed method consists of three phases:

i) image registration for large motion removal from fluorescein angiogram videos,

ii) retinal vessel segmentation,

iii) segmentation-guided video magnification.

In the image registration phase, individual frames of the video are spatiotemporally aligned using a novel wavelet-based registration approach to compensate for global camera and patient motion. In the second phase, a capsule-based neural network architecture is employed, for the first time in the literature, to segment the retinal vessels. In the final phase, a segmentation-guided Eulerian video magnification is proposed for magnifying subtle changes in the retinal video produced by blood flow through the retinal vessels. The magnification is applied only to the segmented vessels, as determined by the capsule network. This minimizes the high levels of noise present in these videos and maximizes useful information, enabling ophthalmologists to more easily identify potential regions of pathology.
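For intuition, the segmentation-guided magnification step can be pictured as a temporal bandpass amplification restricted to the vessel mask. The sketch below illustrates that idea only; it is not the authors' implementation. It omits the spatial pyramid decomposition used in full Eulerian video magnification, and the filter order, passband, and amplification factor are illustrative placeholders.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def magnify_vessels(frames, vessel_mask, fps, low=0.4, high=3.0, alpha=10.0):
    """Segmentation-guided, Eulerian-style magnification (simplified sketch).

    frames:      (T, H, W) float array of registered grayscale FA frames in [0, 1]
    vessel_mask: (H, W) binary vessel mask from the segmentation phase
    fps:         frame rate of the FA video (must exceed 2 * high)
    low, high:   temporal passband in Hz (illustrative values, not from the paper)
    alpha:       amplification factor for the filtered signal
    """
    # Temporal bandpass filter applied to every pixel's intensity time series.
    b, a = butter(2, [low, high], btype="band", fs=fps)
    filtered = filtfilt(b, a, frames, axis=0)

    # Amplify the subtle temporal variations only inside the vessel mask,
    # leaving the noisy background untouched.
    magnified = frames + alpha * filtered * vessel_mask[None, :, :]

    # Clip back to the valid intensity range.
    return np.clip(magnified, 0.0, 1.0)
```

Restricting the amplification to the segmented vessels is what keeps the background noise from being boosted along with the blood-flow signal.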


Results

The collected fluorescein angiogram video dataset consists of 1402 frames from 10 normal subjects (prospective study). Experimental results for retinal vessel segmentation show that the capsule-based algorithm outperforms a state-of-the-art convolutional neural network (U-Net), obtaining a higher Dice coefficient (85.94%) and sensitivity (92.36%) while using just 5% of the network parameters. Qualitative analysis of these videos was performed after the final phase by expert ophthalmologists, supporting the claim that an artificial-intelligence-assisted decision support tool can be helpful for providing a better analysis of blood flow dynamics.
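For reference, the Dice coefficient and sensitivity reported above are standard overlap metrics computed between a predicted and a ground-truth binary vessel mask. The snippet below is a minimal sketch of how they are typically computed; the function name is mine and no smoothing term is included.

```python
import numpy as np

def dice_and_sensitivity(pred, gt):
    """Dice coefficient and sensitivity for binary vessel masks (sketch)."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)

    tp = np.count_nonzero(pred & gt)    # vessel pixels correctly detected
    fp = np.count_nonzero(pred & ~gt)   # background predicted as vessel
    fn = np.count_nonzero(~pred & gt)   # vessel pixels that were missed

    dice = 2 * tp / (2 * tp + fp + fn)  # overlap between the two masks
    sensitivity = tp / (tp + fn)        # recall over the vessel class
    return dice, sensitivity
```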


Conclusions

The authors introduce a novel computational tool, combining a wavelet-based video registration method with a deep learning capsule-based retinal vessel segmentation algorithm and an Eulerian video magnification technique, to quantitatively and qualitatively analyze FA videos. To the authors' best knowledge, this is the first-ever computational tool developed to assist ophthalmologists with analyzing blood flow in FA videos.
