Automatic Segmentation of the Retinal Nerve Fiber Layer (RNFL) in Optical Coherence Tomography Images Using a Convolutional Neural Network

Ghazale Razaghi1 *, Sedigheh Marjaneh Hejazi1

  1. Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran

Abstract: Retinal layer thickness as a function of spatial position, evaluated in optical coherence tomography (OCT) images, is a useful diagnostic marker for many retinal diseases. Automated segmentation of object boundaries is crucial for quantitative image analysis in numerous biomedical applications. Spectral-domain optical coherence tomography (SDOCT) is the most commonly used diagnostic tool for detecting structural changes caused by glaucoma and other non-glaucomatous optic neuropathies. SDOCT measures the thickness of the peripapillary retinal nerve fiber layer (RNFL) on a micrometer scale, which is used both for diagnosis and for detecting disease progression. OCT software determines RNFL thickness by automatically segmenting the acquired images into distinct retinal layers. However, automated image segmentation algorithms can fail to accurately delineate the layers of the retina, and 19.9–46.3% of scans contain at least one segmentation artifact. Deep learning-based segmentation can learn the model and features directly from training data by minimizing a loss function. We developed an automatic algorithm for segmenting the RNFL with high accuracy.

Methods: We designed a convolutional neural network (CNN) based framework to segment the RNFL and to identify errors caused by artifacts. Following the U-Net architecture, we used convolution blocks in both the encoding and decoding arms. After pre-processing, we shuffled all 206 SDOCT images and split the data set into 60% training, 20% validation, and 20% testing. Input images and their corresponding segmentation maps were used to train the network with the Adam optimizer at a learning rate of 10⁻⁴. Four metrics were used to evaluate the performance of the network.
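The shuffle-and-split step described above can be sketched as follows. This is a minimal NumPy illustration, not the study's actual pipeline; the function name, the fixed random seed, and the use of index arrays are assumptions for the sake of the example.

```python
import numpy as np

def shuffle_split(n_images, train_frac=0.6, val_frac=0.2, seed=0):
    """Shuffle image indices and split them into train/validation/test sets.

    Hypothetical helper: the study reports a 60/20/20 split of 206 images,
    but the exact splitting code is not given in the abstract.
    """
    rng = np.random.default_rng(seed)          # fixed seed for reproducibility
    indices = rng.permutation(n_images)        # shuffle all image indices
    n_train = int(train_frac * n_images)
    n_val = int(val_frac * n_images)
    train = indices[:n_train]
    val = indices[n_train:n_train + n_val]
    test = indices[n_train + n_val:]           # remainder goes to testing
    return train, val, test

# With the 206 SDOCT images reported in the abstract:
train, val, test = shuffle_split(206)
print(len(train), len(val), len(test))  # 123 41 42
```

Note that with 206 images the integer splits are 123/41/42 rather than exactly 60/20/20; any leftover images from rounding fall into the test set here.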

Results: The models produced strong intersection-over-union (IoU) coefficients ranging between 0.65 and 0.78; accuracy at the final epoch was 0.90, and validation loss was 0.14. The lowest mean squared error, achieved by the model that performed best on the validation data, was 4.24 pixels.
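Intersection over union, one of the metrics reported above, can be computed for a binary segmentation mask roughly as follows. This is a minimal NumPy sketch; the function name and the toy masks are illustrative assumptions, not data or code from the study.

```python
import numpy as np

def iou(pred, target):
    """Intersection over union of two binary masks.

    Illustrative implementation: IoU = |pred AND target| / |pred OR target|.
    Returns 1.0 when both masks are empty (union of zero pixels).
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union if union > 0 else 1.0

# Toy 1-D masks for illustration (real masks would be 2-D RNFL segmentations):
pred = np.array([1, 1, 0, 1, 0])
target = np.array([1, 0, 0, 1, 1])
print(iou(pred, target))  # 0.5
```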

Conclusion: U-Net demonstrates efficacy and precision in quickly generating accurate segmentations. However, image artifacts and human error are unavoidable. The deep neural network (DNN) method compares favorably with state-of-the-art techniques, yielding the smallest mean unsigned error values and the highest accuracy.
