Revista mexicana de ingeniería biomédica

Online version ISSN 2395-9126; print version ISSN 0188-9532

Rev. mex. ing. bioméd. vol. 43 no. 3, México, Sep./Dec. 2022; Epub Apr 28, 2023

https://doi.org/10.17488/rmib.43.3.2 

Research article

Segmentation of OCT and OCT-A Images using Convolutional Neural Networks


Fernanda Cisneros-Guzmán1 
http://orcid.org/0000-0001-9216-6750

Manuel Toledano-Ayala1 
http://orcid.org/0000-0003-1885-279X

Saúl Tovar-Arriaga1 
http://orcid.org/0000-0002-2695-1934

Edgar A. Rivas-Araiza1 
http://orcid.org/0000-0002-0300-6462

1 Universidad Autónoma de Querétaro, México.


ABSTRACT

Segmentation is vital in Optical Coherence Tomography Angiography (OCT-A) imaging. Separating and distinguishing the different parts that build the macula simplifies the subsequent detection of observable patterns/illnesses in the retina. In this work, we carried out multi-class image segmentation in which the best characteristics are highlighted in the appropriate plexuses by comparing different neural network architectures, including U-Net, ResU-Net, and FCN. We focus on two critical zones: the retinal vasculature (RV) and the foveal avascular zone (FAZ). The precision obtained for RV and FAZ segmentation over 316 OCT-A images from the OCTA-500 database was 93.21% and 92.59%, respectively; for binary classification, the FAZ was segmented with an accuracy of 99.83%.

KEYWORDS: OCT-A segmentation; ResU-Net; FCN segmentation; Convolutional Neural Network


INTRODUCTION

The retina is a light-sensitive layer of tissue located at the back of the eye; its characteristic reddish or orange color is due to the number of blood vessels behind it. The macula is the yellowish area of the retina where visual acuity is highest. It consists of the fovea, which is only a tiny part of the retina but is crucial for many visual functions, and the foveal avascular zone (FAZ), which allows humans to detect light without dispersion or loss [1].

Optical coherence tomography (OCT) is an imaging technique with micrometer resolution that captures two- and three-dimensional images from optical scattering media using low-coherence light. The output images provide information about the retina, such as its structure, consistency, and thickness, and allow the diagnosis, monitoring, and control of retinal diseases through high-resolution images.

Optical coherence tomography angiography (OCT-A) is a modern, non-invasive technology that, like OCT, allows retinal imaging. Its main advantage is that it does not require the application of a contrast agent to obtain deep and superficial images of the retinal blood vessels (RV). The FAZ is a central avascular area. As in previous studies, projection maps (superficial, inner, and deep) can be obtained. Each plexus has different morphological characteristics; therefore, features are visualized both in the superficial and in the inner plexuses [2].

In this article, a comparison between plexus segmentations taken with two imaging modalities, namely OCT and OCT-A, is first presented to identify which of the two techniques offers the best conditions for segmentation. The main characteristics of the plexus are the FAZ and the RV. Once the best modality has been identified, only its images are used to carry out the segmentation of the features sought. For the FAZ, two convolutional networks, U-Net and ResU-Net, are compared; for the RV, a Fully Convolutional Network (FCN) and U-Net. The networks compared differ between the two features because the FCN has shown better results in the detail of the edges of the features to be segmented.

Related Work

Semantic segmentation has been one of the essential tasks in computer vision in recent decades, even more so for medical images, where it has become an aid in detecting diseases. Neural networks have been proven to perform this task well, and for semantic segmentation it is now prevalent to use end-to-end convolutional networks.

These networks have been used in countless medical imaging applications, whether of the eyes, chest, abdomen, brain, or heart. The database used in this article is of the retina, that is, of the human eye; therefore, we review the state of the art of retinal analysis. One of its challenges is the segmentation of veins or blood vessels, which depends on the acquisition technique, such as a fundus image or a tomography of a specific area. The authors of [3] [4] use a residual network based on U-Net with batch normalization and an autoencoder, named PixelBNN, which segments the veins in fundus images; it was tested on various databases with acceptable results, and reducing test time was their contribution. For Xiao et al. and Son J et al., the contribution to RV segmentation focused on refining the edges of the veins, making them more precise and sharper [5] [6]. Other ocular applications arise in glaucoma, where the main task is to segment the optic disc: the authors of [7] [8] use a ResU-Net and propose a modified FCN, centering the image on the optic disc to reduce noise in fundus images.

RV segmentation

OCT-A is a non-invasive technique developed and applied relatively recently, through which the vascular structures and choroid of the retina can be visualized. In turn, segmenting the retinal vessels obtained through this technique is an open research opportunity to improve on the methods studied for more than 20 years [9] [10] [11], such as those based on deep learning, filters, and classifiers for RV segmentation in fundus images. Image thresholding methods were implemented in [4] [5] to determine vessel density.

However, vector and exponential flow optimization algorithms have been among the best algorithms implemented on OCT-A images for RV enhancement and segmentation. Implementations based on Gibbs-Markov random field models [12] [13] [14] were applied to segment retinal vessels in different projection maps.

The papers [8] [9] [10] proposed a modified U-Net for circle segmentation in a maximum intensity projection of OCT-A. Wylęgała et al. present in [11] a new partition-based modular network to detect thick and thin vessels separately.

FAZ segmentation

The FAZ is the region of the fovea where an absence of vasculature can be observed. Compared with retinal vessel segmentation, segmenting the FAZ in OCT-A images is a newer field of study. The authors of [12] [13] [14] introduced an active contour model for FAZ detection. Li et al. and Bates et al. present in [15] [16] a series of morphological factors to identify FAZ candidates in OCT-A projections using two types of fields of view. A modified U-Net to segment the FAZ on the OCT-A projection map of the superficial retina was introduced in [9] [16]. Using a CNN, Azzopardi et al. develop a segmentation of the FAZ and the non-ischemic capillary region [12].

MATERIALS AND METHODS

Dataset

The plexus database deployed in this work is OCTA-500 [15], which consists of images collected from 500 subjects under identical conditions (projection maps). Retinal images of 3 mm × 3 mm (304 × 304 px) were collected from 200 subjects, and 6 mm × 6 mm images (400 × 400 px) from the remaining 300 subjects. Images with the same number and characteristics were collected for both OCT and OCT-A.

Figure 1 shows the OCT and OCT-A projection maps from the OCTA-500 database. The full OCT and OCT-A projections correspond to 3D volumes averaged in the axial direction. The Inner Limiting Membrane (ILM) - Outer Plexiform Layer (OPL) projections, as well as the OPL - Bruch's Membrane (BM) projections, correspond to maximum projections in the same direction.

Figure 1 Projection maps for each of the techniques (all images are retrieved from the database [15]).

U-Net

U-Net is a network based on the "Fully Convolutional Network" (FCN) that was designed, tested, and presented in 2015, mainly for the segmentation of biomedical images. The value and reliability that U-Net, with its 23 convolutional layers, offers in the field of medical images has been shown in [17]. The network has no fully connected layers and, taking the shape of a "U", is almost symmetric between its contracting and expansive paths.

Due to the nature of the network, this article uses the same architecture proposed in [17] [18]. In general terms, it is a network formed by a series of encoders located in the first half, which contextualize the input data; this part is called the contraction path. It is built from unpadded convolutions, each followed by a rectified linear unit (ReLU) activation [19].

Finally, resolution reduction is applied through a max pooling operation. In the second half, the expansion path, a series of decoders performs precise localization by upsampling the feature map, followed by an up-convolution, cropping, and two convolutions, each of which, as in the first half, is followed by a ReLU activation.
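To make the structure concrete, the following is a condensed TensorFlow/Keras sketch of a U-Net-style encoder-decoder, not the authors' exact implementation: it uses padding="same" instead of the unpadded convolutions described above and fewer filter stages, but it shows the contraction path, the skip connections, and the expansion path.

```python
# A condensed U-Net sketch (illustrative, not the paper's exact network).
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # Two 3x3 convolutions, each followed by a ReLU activation.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(304, 304, 1), n_classes=1):
    inputs = tf.keras.Input(shape=input_shape)
    # Contraction path: convolutions followed by max pooling.
    c1 = conv_block(inputs, 64)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = conv_block(p1, 128)
    p2 = layers.MaxPooling2D(2)(c2)
    # Bottleneck between the two paths.
    b = conv_block(p2, 256)
    # Expansion path: up-convolutions plus skip-connection concatenation.
    u2 = layers.Conv2DTranspose(128, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.Concatenate()([u2, c2]), 128)
    u1 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.Concatenate()([u1, c1]), 64)
    outputs = layers.Conv2D(n_classes, 1, activation="sigmoid")(c4)
    return tf.keras.Model(inputs, outputs)
```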

Deep ResU-Net

ResU-Net is a semantic segmentation neural network that merges the strengths of residual neural networks and the U-Net, thus obtaining a Deep ResU-Net. This combination provides two advantages: 1) the residual units ease the training of the network; 2) the skip connections facilitate the lossless transmission of information between the low and high levels of the network and within a residual unit, which allows designing a neural network with fewer parameters [20] [21] while achieving similar or consistently better semantic segmentation performance.

The architecture of the ResU-Net comprises 56 convolutional layers; the network proposed by [20] solves the degradation of the model in the deeper layers.

The model is built by replacing the pairs of convolutional layers that form the paths of the U-Net with residual learning blocks. A residual learning block comprises three convolutional layers, the first two followed by ReLU activations, and three batch normalization layers [21]. Hence, on the expansive path the ResU-Net includes an input unit, a central unit, and a residual unit; the contraction path comprises four concatenated blocks, plus a bridge block and an output unit.
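As an illustration of the residual learning block just described, a minimal Keras sketch follows, assuming three convolutions with batch normalization, ReLU after the first two, and an identity shortcut projected with a 1 × 1 convolution; the exact layer ordering in the authors' network may differ.

```python
# A minimal residual learning block sketch (layer ordering assumed).
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    # 1x1 convolution on the shortcut so channel counts match for the add.
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)
    # Three convolutional layers with three batch normalizations;
    # ReLU activations follow the first two, as described in the text.
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    # Skip connection: add the shortcut to the residual branch.
    return layers.Add()([shortcut, y])
```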

Fully Convolutional Network

A Fully Convolutional Network (FCN) can be used for semantic segmentation, which means it uses convolution and deconvolution layers. For this research, an FCN developed and tested by [22] based on VGG-16 was used, since the authors report higher accuracy compared to FCNs based on AlexNet or GoogLeNet. Three variants of the FCN are known, FCN8s, FCN16s, and FCN32s; in our case, an FCN8s network is used. The discrepancy between these three networks lies in the resolution reduction, because recovering detailed semantic information requires connecting the intermediate layers.
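The following sketch illustrates the FCN8s idea on a VGG-16 backbone: score maps from the last three pooling stages are upsampled and fused so that detail from intermediate layers is recovered. It is a generic FCN8s recipe built on Keras' VGG16, not the implementation of [22]; the input size is assumed divisible by 32 (304 px images would need padding or resizing).

```python
# A generic FCN8s sketch over VGG-16 (illustrative assumptions throughout).
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.applications import VGG16

def build_fcn8s(input_shape=(320, 320, 3), n_classes=2):
    # Untrained backbone so the sketch runs without downloading weights.
    backbone = VGG16(include_top=False, weights=None, input_shape=input_shape)
    pool3 = backbone.get_layer("block3_pool").output  # 1/8 resolution
    pool4 = backbone.get_layer("block4_pool").output  # 1/16 resolution
    pool5 = backbone.get_layer("block5_pool").output  # 1/32 resolution
    # 1x1 convolutions produce class score maps at each resolution.
    s5 = layers.Conv2D(n_classes, 1)(pool5)
    s4 = layers.Conv2D(n_classes, 1)(pool4)
    s3 = layers.Conv2D(n_classes, 1)(pool3)
    # Progressive 2x upsampling with fusion, then a final 8x upsampling.
    x = layers.Conv2DTranspose(n_classes, 4, strides=2, padding="same")(s5)
    x = layers.Add()([x, s4])
    x = layers.Conv2DTranspose(n_classes, 4, strides=2, padding="same")(x)
    x = layers.Add()([x, s3])
    x = layers.Conv2DTranspose(n_classes, 16, strides=8, padding="same")(x)
    outputs = layers.Softmax()(x)
    return tf.keras.Model(backbone.input, outputs)
```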

Evaluation Metrics

It is essential to mention that due to the nature of the problem in the segmentation of both the FAZ and the RV, the background of the image has a more significant presence than each of the parts to be segmented. Therefore, the metrics [23] chosen to evaluate the segmentation performance of each network quantitatively are established in Equations (1) - (5):

• Dice coefficient (DICE):

$$\mathrm{DICE} = \frac{2\,TP}{2\,TP + FP + FN} \quad (1)$$

• Jaccard index (JAC):

$$\mathrm{JAC} = \frac{TP}{TP + FP + FN} \quad (2)$$

• Balanced accuracy (BACC):

$$\mathrm{BACC} = \frac{TPR + TNR}{2} \quad (3)$$

• Precision (PRE):

$$\mathrm{PRE} = \frac{TP}{TP + FP} \quad (4)$$

• Recall (REC):

$$\mathrm{REC} = \frac{TP}{TP + FN} \quad (5)$$

These metrics allow comparing our models with the current state of the art. TPR refers to the true positive rate, TNR to the true negative rate, TP to true positives, FP to false positives, and FN to false negatives.
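As a reference for how Equations (1)-(5) are computed, the following is a minimal Python sketch, assuming binary NumPy masks for the prediction and the ground truth; the function name and interface are illustrative.

```python
# A minimal sketch of the evaluation metrics in Equations (1)-(5).
import numpy as np

def segmentation_metrics(pred, gt):
    """Compute DICE, JAC, BACC, PRE, and REC for two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.sum(pred & gt)    # true positives
    tn = np.sum(~pred & ~gt)  # true negatives
    fp = np.sum(pred & ~gt)   # false positives
    fn = np.sum(~pred & gt)   # false negatives
    tpr = tp / (tp + fn)      # true positive rate (recall)
    tnr = tn / (tn + fp)      # true negative rate
    return {
        "DICE": 2 * tp / (2 * tp + fp + fn),
        "JAC": tp / (tp + fp + fn),
        "BACC": (tpr + tnr) / 2,
        "PRE": tp / (tp + fp),
        "REC": tpr,
    }
```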

Manual Segmentation

The ground truth (GT) used to validate the results of the following training sessions is specific to each feature and differs from the one provided with the database: the original GT contains both the FAZ and the RV, so a manual modification is made to the provided image to isolate each feature. Figure 2 shows the segmentation of the FAZ; its GT is unique to this feature, removing the RV and leaving a black background in the rest of the image. The same process is carried out for the GT of the RV.

Figure 2 a) GT OCTA-500; b) GT of FAZ; c) GT of RV. 

Process

Model training is performed using 80% of the full-projection images, corresponding to 400 images from the OCTA-500 database; the remaining 20% is used for validation. Both the training and test sets are resized to 304 × 304 px to keep the data in standard dimensions. This procedure is performed under the same retinal imaging conditions and characteristics.
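A minimal sketch of this preparation step, assuming a hypothetical directory layout for the OCTA-500 images and masks, might look as follows.

```python
# Data preparation sketch: resize to 304x304 px and split 80/20.
# The file paths and loading helper are illustrative assumptions.
import glob
import numpy as np
import tensorflow as tf

def load_images(pattern, size=(304, 304)):
    images = []
    for path in sorted(glob.glob(pattern)):
        img = tf.keras.utils.load_img(path, color_mode="grayscale")
        img = tf.image.resize(tf.keras.utils.img_to_array(img), size)
        images.append(img.numpy() / 255.0)  # normalize to [0, 1]
    return np.stack(images)

images = load_images("octa500/full_projection/*.png")  # hypothetical layout
masks = load_images("octa500/ground_truth/*.png")
split = int(0.8 * len(images))  # 80% training (400 of 500), 20% validation
train_x, val_x = images[:split], images[split:]
train_y, val_y = masks[:split], masks[split:]
```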

The U-Net is used while preserving the structure and hyperparameters of the original model, with stochastic gradient descent (SGD) as the optimizer. Training is performed for 50 epochs on both datasets. Figure 3 shows the proposed methodology in detail.

Figure 3 Methodology.

Additional training is carried out with a technique that produces a better segmentation of the validation image and compares it with the ground truth provided and validated by the same database (OCTA-500). OCT-A technology provides the best measurements; however, the goal was to segment the FAZ, and in the results blood vessels can be observed in the full segmented view of both techniques. Therefore, manual segmentation was performed on the target image to enhance the above segmentation, leaving only the FAZ visible and ignoring the vascular system. It is important to note that this training is conducted under conditions similar to those previously applied.

Two projection maps from the database were used for RV segmentation: the maximum projection map of the retina and the full projection map. These two maps are available for both imaging techniques, OCT and OCT-A, which allows a larger number of examples for training.

Technical Considerations

All experimentation is done in TensorFlow using an NVIDIA GeForce GTX 1080Ti GPU. The network is initialized with the standard normal initialization method with a deviation of 0.02; a batch size of 3, a cross-entropy loss function, and the Adam optimizer were used.
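Under the reading that the standard normal initialization "with a deviation of 0.02" means a normal weight initializer with standard deviation 0.02, the configuration could be sketched as follows; build_unet refers to the U-Net sketch shown earlier, and the commented fit call mirrors the stated batch size and epochs.

```python
# Training configuration sketch (assumptions noted in comments).
import tensorflow as tf

# Assumed: weights drawn from a normal distribution with stddev 0.02,
# passed as kernel_initializer to each convolutional layer.
init = tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.02)
example_layer = tf.keras.layers.Conv2D(64, 3, padding="same",
                                       kernel_initializer=init)

model = build_unet()  # the U-Net sketch from the earlier section
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss=tf.keras.losses.BinaryCrossentropy(),  # cross-entropy
              metrics=["accuracy", tf.keras.metrics.AUC()])
# Batch size of 3 and 50 epochs, as stated in the text:
# model.fit(train_x, train_y, batch_size=3, epochs=50,
#           validation_data=(val_x, val_y))
```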

Overtraining is avoided by evaluating the similarity between the validation and training sets throughout the network training process, then choosing the training model with the best performance, seeking the best value of this similarity as measured by the Dice coefficient.
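One plausible way to realize this model selection in Keras is a custom Dice metric combined with a checkpoint callback, as sketched below; the metric and file names are illustrative.

```python
# Model selection by validation Dice coefficient (illustrative sketch).
import tensorflow as tf

def dice_coefficient(y_true, y_pred, smooth=1e-6):
    # Dice over the batch; predictions are thresholded at 0.5.
    y_pred = tf.cast(y_pred > 0.5, tf.float32)
    y_true = tf.cast(y_true, tf.float32)
    intersection = tf.reduce_sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)

# Keep only the weights with the highest validation Dice.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_model.h5", monitor="val_dice_coefficient",
    mode="max", save_best_only=True)
# Pass metrics=[dice_coefficient] to model.compile(...) and
# callbacks=[checkpoint] to model.fit(...).
```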

RESULTS AND DISCUSSION

It is observed that the OCT image produced poor segmentation compared to the OCT-A image. It is essential to point out that this segmentation is not acceptable because the FAZ and the veins cannot be differentiated in the image; only a black background with some white reliefs is observed. Figure 4 supports this, and the applied metrics are presented after a comparison.

Figure 4 Resulting Image. OCT: a) Full projection, b) Ground Truth, c) U-Net-FAZ. OCT-A: d) Full projection, e) Ground Truth, f) U-Net-FAZ. 

In the case of OCT, in a visual evaluation of Figure 4 it is impossible to locate the FAZ or the veins in the resulting image (c). Despite an accuracy of 88.42%, there is no correct segmentation. The high precision on the OCT images is due to their nature: the validation image (b) has a black background, which is why an acceptable precision is shown when evaluating. In the case of OCT-A, in the resulting image (f) it is possible to visualize the two elements of interest, the FAZ and the veins. Although the segmentation contains errors, both features are clearly apparent in the image.

Table 1 and Figure 5 compare the network implementations on the two datasets prepared under the same conditions. As can be seen, OCT-A retinal imaging presents better results.

Table 1 Validation Metrics. 

Metric OCT OCT-A
Loss 0.3164 0.1627
Accuracy 88.42% 97.77%
AUC 87.71% 92.04%

Figure 5 Comparison of the loss, OCT vs. OCT-A.

As mentioned earlier, a second training cycle (Table 2) was applied to this technique. The main change is that the mask does not contain vessels, only the FAZ, our area of interest.

Table 2 Validation Training-Second Training. 

Metric OCT-A
Loss 0.1194
Accuracy 99.83%
AUC 98.74%

After testing this second training with the architectures, the results show that, in the case of the FAZ, the metrics validate that the best segmentation came from training with the ResU-Net network. Furthermore, the visual assessment (Figure 6) confirms that the feature is fully segmented.

Figure 6 FAZ ResU-Net Segmentation: a) Original image; b) Ground Truth; c) Segmentation obtained.

Furthermore, the performance of the networks on the validation data is shown in Table 3. The ResU-Net is observed to be better than the U-Net under the abovementioned metrics.

Table 3 Metrics comparison of different methods for FAZ segmentation. 

Feature Method DICE (%) JAC (%) BACC (%) PRE (%) REC (%)
FAZ U-Net 85.18 76.43 92.65 88.07 85.44
ResU-Net 89.99 89.36 93.63 92.59 93.61

On the other hand, for the veins, the network that gives the best results after training is the FCN, shown in Figure 7, where the result can be observed in the same way as in the case of the FAZ. In this isolated segmentation, the mask (GT) shows only the blood vessels, and the result of this change is a segmentation that clearly distinguishes the veins in the image.

Figure 7 Retina comparison of blood vessel segmentation. 

Figure 7 and Table 4 correspond to the RV feature; we can observe and confirm that the FCN segmentation has the best metrics, which is also affirmed by visual inspection of the segmentation.

Table 4 Metrics comparison of different methods for RV segmentation. 

Feature Method DICE (%) JAC (%) BACC (%) PRE (%) REC (%)
RV U-Net 80.43 70.75 88.70 91.35 82.96
ResU-Net 86.26 78.26 91.78 93.21 86.57

CONCLUSIONS

According to our segmentation results, it was better to implement a DL model separately for each characteristic, perhaps because the characteristics differ greatly from each other. RV segmentation and FAZ segmentation were implemented separately for each neural network. Manual segmentation was performed on the ground truth (GT) images; using GT images with isolated segmentation allows the network to perform validation focused only on the desired feature. We compared state-of-the-art segmentation methods, including U-Net, ResU-Net, and FCN. For FAZ segmentation, ResU-Net gives a similarity with the actual segmentation image of almost 90%. As for the RV, the similarity is close to 86% using the FCN network. The results obtained are satisfactory enough to be used in the detection of these patterns for disease detection.

REFERENCES

[1] Wons J, Pfau M, Wirth MA, Freiberg FJ, et al. Optical coherence tomography angiography of the foveal avascular zone in retinal vein occlusion. Ophthalmologica [Internet]. 2016;235:195-202. Available from: https://doi.org/10.1159/000445482

[2] Guo M, Zhao M, Cheong AMY, Dai H, et al. Automatic quantification of superficial foveal avascular zone in optical coherence tomography angiography implemented with deep learning. Vis Comput Ind Biomed Art [Internet]. 2019;2:21. Available from: https://doi.org/10.1186/s42492-019-0031-8

[3] Leopold HA, Orchard J, Zelek JS, Lakshminarayanan V. PixelBNN: Augmenting the PixelCNN with Batch Normalization and the Presentation of a Fast Architecture for Retinal Vessel Segmentation. J Imaging [Internet]. 2019;5(2):26. Available from: https://doi.org/10.3390/jimaging5020026

[4] Zhang Y, Chung ACS. Deep Supervision with Additional Labels for Retinal Vessel Segmentation Task. In: Frangi A, Schnabel J, Davatzikos C, Alberola-López C, et al. (eds). Medical Image Computing and Computer Assisted Intervention - MICCAI 2018 [Internet]. Granada, Spain: Springer; 2018:83-91. Available from: https://doi.org/10.1007/978-3-030-00934-2_10

[5] Xiao X, Lian S, Luo Z, Li S. Weighted Res-UNet for High-Quality Retina Vessel Segmentation. In: 2018 9th International Conference on Information Technology in Medicine and Education (ITME) [Internet]. Hangzhou, China: IEEE; 2018:327-331. Available from: https://doi.org/10.1109/ITME.2018.00080

[6] Son J, Park SJ, Jung KH. Towards Accurate Segmentation of Retinal Vessels and the Optic Disc in Fundoscopic Images with Generative Adversarial Networks. J Digit Imaging [Internet]. 2019;32(3):499-512. Available from: https://doi.org/10.1007/s10278-018-0126-3

[7] Jayabalan GS, Bille JF. The Development of Adaptive Optics and Its Application in Ophthalmology. In: Bille J (ed). High Resolution Imaging in Microscopy and Ophthalmology [Internet]. Cham, Switzerland: Springer; 2019:339-358. Available from: https://doi.org/10.1007/978-3-030-16638-0_16

[8] Taher F, Kandil H, Mahmoud H, Mahmoud A, et al. A Comprehensive Review of Retinal Vascular and Optical Nerve Diseases Based on Optical Coherence Tomography Angiography. Appl Sci [Internet]. 2021;11(9):4158. Available from: https://doi.org/10.3390/app11094158

[9] Spaide RF, Fujimoto JG, Waheed NK, Sadda SR, et al. Optical coherence tomography angiography. Prog Retin Eye Res [Internet]. 2018;64:1-55. Available from: https://doi.org/10.1016/j.preteyeres.2017.11.003

[10] de Carlo TE, Romano A, Waheed NK, Duker JS. A review of optical coherence tomography angiography (OCTA). Int J Retin Vitr [Internet]. 2015;1:5. Available from: https://doi.org/10.1186/s40942-015-0005-8

[11] Wylęgała A, Teper S, Dobrowolski D, Wylęgała E. Optical coherence angiography: A review. Medicine [Internet]. 2016;95(41):e4907. Available from: https://doi.org/10.1097/MD.0000000000004907

[12] Azzopardi G, Strisciuglio N, Vento M, Petkov N. Trainable COSFIRE filters for vessel delineation with application to retinal images. Med Image Anal [Internet]. 2015;19(1):46-57. Available from: https://doi.org/10.1016/j.media.2014.08.002

[13] Lau QP, Lee L, Hsu W, Wong TY. Simultaneously Identifying All True Vessels from Segmented Retinal Images. IEEE Trans Biomed Eng [Internet]. 2013;60(7):1851-1858. Available from: https://doi.org/10.1109/tbme.2013.2243447

[14] Ghazal M, Al Khalil Y, Alhalabi M, Fraiwan L, et al. Early detection of diabetics using retinal OCT images. In: El-Baz AS, Suri JS (eds). Diabetes and Retinopathy [Internet]. United States: Elsevier; 2020:173-204. Available from: https://doi.org/10.1016/B978-0-12-817438-8.00009-2

[15] Li W, Zhang Y, Ji Z, Xie K, et al. IPN-V2 and OCTA-500: Methodology and Dataset for Retinal Image Segmentation [Internet]. arXiv; 2020. Available from: https://doi.org/10.48550/arXiv.2012.07261

[16] Bates NM, Tian J, Smiddy WE, Lee W-H, et al. Relationship between the morphology of the foveal avascular zone, retinal structure, and macular circulation in patients with diabetes mellitus. Sci Rep [Internet]. 2018;8:5355. Available from: https://doi.org/10.1038/s41598-018-23604-y

[17] Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Navab N, Hornegger J, Wells W, Frangi A (eds). Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015 [Internet]. Munich, Germany: Springer; 2015:234-241. Available from: https://doi.org/10.1007/978-3-319-24574-4_28

[18] Sappa LB, Okuwobi IP, Li M, Zhang Y, et al. RetFluidNet: Retinal Fluid Segmentation for SD-OCT Images Using Convolutional Neural Network. J Digit Imaging [Internet]. 2021;34(3):691-704. Available from: https://doi.org/10.1007/s10278-021-00459-w

[19] Rasamoelina AD, Adjailia F, Sinčák P. A Review of Activation Function for Artificial Neural Network. In: 2020 IEEE 18th World Symposium on Applied Machine Intelligence and Informatics (SAMI) [Internet]. Herlany, Slovakia: IEEE; 2020:281-286. Available from: https://doi.org/10.1109/SAMI48414.2020.9108717

[20] Qi W, Wei M, Yang W, Xu C, et al. Automatic Mapping of Landslides by the ResU-Net. Remote Sens [Internet]. 2020;12(15):2487. Available from: https://doi.org/10.3390/rs12152487

[21] Ioffe S, Szegedy C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In: ICML'15: Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37 [Internet]. Lille, France: JMLR; 2015:448-456. Available from: https://dl.acm.org/doi/10.5555/3045118.3045167

[22] He Y, Carass A, Liu Y, Jedynak BM, et al. Fully Convolutional Boundary Regression for Retina OCT Segmentation. In: Shen D, Liu T, Peters TM, Staib LH, et al. (eds). Medical Image Computing and Computer Assisted Intervention - MICCAI 2019 [Internet]. Shenzhen, China: Springer; 2019:120-128. Available from: https://doi.org/10.1007/978-3-030-32239-7_14

[23] Liu X, Song L, Liu S, Zhang Y. A Review of Deep-Learning-Based Medical Image Segmentation Methods. Sustainability [Internet]. 2021;13(3):1224. Available from: https://doi.org/10.3390/su13031224

Received: May 25, 2022; Accepted: August 31, 2022

Corresponding author: Saúl Tovar Arriaga. Institution: Universidad Autónoma de Querétaro. Address: Cerro de las Campanas S/N, Col. Las Campanas, C. P. 76010, Santiago de Querétaro, Querétaro, México. E-mail: saul.tovar@uaq.mx

AUTHOR CONTRIBUTION

M.F.C.G. conceptualized the project, collected, gathered, and curated data, designed and developed the methodology and modelling, participated in the design of software, and took part in the writing of the manuscript. S.T.A. conceptualized the project, curated data, designed and developed the methodology and modelling, and wrote the original draft. M.T.A. conceptualized the project and reviewed and edited the final version of the manuscript. E.A.R.A. reviewed and edited the final version of the manuscript. All authors reviewed and approved the final version of the manuscript.

Creative Commons License This is an open-access article distributed under the terms of the Creative Commons Attribution License