Nova Scientia

Online version ISSN 2007-0705

Nova Scientia vol. 11 no. 22, León, May 2019

https://doi.org/10.21640/ns.v11i22.1872

Natural Sciences and Engineering

A proposal to measure the similarity between retinal vessel segmentations images


Marco A. Escobar1 

José R. Guzmán Sepúlveda2 

Jorge R. Parra Michel3 

Rafael Guzmán Cabrera4  * 

1 Universidad De La Salle Bajío, León, Guanajuato, México.

2 CREOL, The College of Optics and Photonics, University of Central Florida.

3 Universidad De La Salle Bajío, León, Guanajuato, México.

4 Universidad de Guanajuato, Departamento de Ingeniería Eléctrica.


Abstract

Introduction:

We propose a novel approach for assessing the similarity of retinal vessel segmentation images, based on linking the standard performance metrics of a segmentation algorithm with the actual structural properties of the images through the fractal dimension.

Method:

We apply our methodology to compare the vascularity extracted by automatic segmentation against manually segmented images.

Results:

We demonstrate that the strong correlation between the standard metrics and fractal dimension is preserved regardless of the size of the subimages analyzed.

Discussion or Conclusion:

We show that the fractal dimension is correlated with the segmentation algorithm’s performance and can therefore be used as a comparison metric.

Keywords: image processing; segmentation; fractal dimension; similarity measurement


Introduction

Digital retinal images are used in the diagnosis of some ophthalmic pathologies, such as diabetic retinopathy. Common approaches to identify retinal pathological conditions rely on comprehensive dilated eye exams such as visual acuity, tonometry (internal eye pressure), pupil dilation, and optical coherence tomography (Fong et al., 2004; Lee, Wong, & Sabanayagam, 2015). In these approaches, the physicians search for a number of indicators, including progressive changes in the retinal vasculature, leaking vessels, signs of potential vessel leakage (fat deposits), pupil structural integrity, and damage of the nerve tissue, among others. Unfortunately, all these indicators manifest clearly only at advanced stages. It is the identification of minute changes in the retinal vasculature that can allow reliable diagnosis at early stages. Thus, suitable computer-aided diagnosis tools are needed for the automated detection of retinal pathology in images, and in general for the segmentation of the vascular tree, as it is an important indicator not only of diabetes-related conditions but also for the diagnosis, screening, treatment, and evaluation of various cardiovascular, neurovascular, and ophthalmologic diseases such as hypertension, arteriosclerosis, and choroidal neovascularization (Abràmoff et al., 2010; Muangnak et al., 2015; O’Hara, 2004).

According to the World Health Organization, diabetes is at epidemic levels worldwide, and developing countries face the greatest risk (King & Rewers, 1993; Popkin et al., 2012). In those countries, a number of public health issues are rooted in the high levels of obesity that are now one of their typical characteristics. For instance, in Mexico, diabetes is at pandemic levels: data from 2016 suggest that the prevalence of diagnosed diabetes increased from 7.6% to 9.4% in ten years (Arredondo, 2018; Instituto Nacional de Salud Pública, 2016). A common consequence of the chronically high blood sugar levels caused by diabetes is damage to the retinal vasculature, which leads to diabetic retinopathy and eventually to blindness (Fong et al., 2004). In this regard, diabetic retinopathy is the most frequent cause of vision loss worldwide (Lee et al., 2015) and, in the particular case of developing countries, the leading cause of blindness in working-age adults (Cervantes-Castañeda et al., 2010).

The assessment of anomalies in retinal vessels used to be a time-consuming task, since highly skilled technicians were required to assess the images and the diagnosis was based on their experience (Niemeijer et al., 2004). This methodology presented serious drawbacks due to i) the scarce availability of such skilled technicians and ii) the susceptibility of the diagnosis to errors of judgment. For these reasons, there has been an increasing interest in the development of automatic assessment techniques (Fathi & Naghsh-Nilchi, 2013; Kuri, 2015; Siddalingaswamy & Prabhu, 2010). These new tools are not intended to replace medical diagnosis but to contribute to it. At the same time, these implementations have to be efficient in terms of computation time and data management to reduce the human workload, to overcome the bottlenecks associated with screening programs, and to enable high-throughput workflows (Jelinek & Cree, 2009). Moreover, such automatic tools should operate based on self-contained metrics, in order to eliminate human biases. However, one of the biggest challenges is determining the accuracy of vessel detection in the presence of discontinuities in the vascular structures (Gegundez-Arias et al., 2012; Yan et al., 2018).

In this study, a method for the comparison of segmented retinal blood vessel images is presented. The method is used to compare the vessel segmentation obtained by a simple automatic segmentation against manually segmented images. Our approach is based on linking the standard performance metrics, which are calculated blindly from the outcomes of a generic segmentation algorithm, with the actual structural properties of the images by using the fractal dimension (FD). Unlike other studies, here we do not attempt to give a diagnosis relying on the value of the FD, but to use it as an auxiliary figure of merit to optimize the performance of the segmentation algorithm. The methodology presented, i.e., correlation between structure and segmentation metrics, is general and does not depend on the particular segmentation algorithm used. Standard linear regression analysis shows that some of the standard metrics strongly depend on the image complexity regardless of the sub-regions into which the original image is divided. Our results show that the FD of the bit-wise difference between two images contains statistically insightful information about the algorithm’s capability to segment the vasculature.

Method

A plethora of techniques, algorithms, and methodologies for the segmentation of retinal blood vessels can be found in the literature (Abràmoff et al., 2010; Fathi & Naghsh-Nilchi, 2013; Felkel et al., 2001; Garg & Gupta, 2016; Jelinek & Cree, 2009; Kolar et al., 2013; Kuri, 2015; Siddalingaswamy & Prabhu, 2010). A particularly useful reference for the present work is (Vostatek et al., 2017), as it includes algorithms (both supervised and unsupervised) with publicly available implementations. Often, multiple techniques are used together to solve a segmentation problem; thus, a unique classification of the different algorithms is not feasible. Based on the image processing methodology employed, the techniques used for the segmentation of the retinal vascularity can be classified, in a more or less consistent manner, as supervised and unsupervised classification methods, matched filtering approaches, morphological operations, and deformable models (Garg & Gupta, 2016). Other more specific categorizations propose vessel tracking, multi-scale approaches, and vessel profiling as classifications independent of the more general morphology-based techniques (Fraz et al., 2012; Kirbas & Quek, 2004), as well as intensity-based segmentation algorithms (Gonzalez & Woods, 2001). However, we would like to emphasize that our approach is not tied to a particular algorithm; the same methodology, i.e., correlation analysis between standard performance metrics and FDs, can be used irrespective of the segmentation algorithm of choice.

The retinal images are from the DRIVE database (Niemeijer et al., 2004), which contains TIFF color images of 565 × 584 pixels. For each original color image, this database provides two manually labeled masks (binary images) indicating the retinal vasculature. Ideally, these two reference masks should be identical; however, as will be shown later, they are not. The first step is to split the red, green, and blue channels of the original image. Fig. 1 shows the original image and the grayscale images of the individual channels. In our case, only the green channel was used, since it provides the best contrast with respect to the others.
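
As an illustration, a minimal sketch of this first step in Python with OpenCV (the libraries used in this work); the file name is hypothetical:

import cv2

# Load a DRIVE fundus image (file name is illustrative)
image = cv2.imread("01_test.tif")

# OpenCV loads images in BGR order; keep the green channel,
# which provides the best vessel-to-background contrast
blue, green, red = cv2.split(image)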

Fig. 1 The original image (a), grayscale images of the (b) blue, (c) green, and (d) red channels respectively 

Despite inherently providing the best contrast, the green channel can be improved, since in some regions information can be lost due to over-brightness (see Fig. 2). To solve this problem, contrast-limited adaptive histogram equalization (CLAHE) is used (Pizer et al., 1987; Zuiderveld, 1994). The image is divided into 17 × 17 sub-images, and each of these blocks is histogram-equalized with a contrast limit of 2. In other words, if any histogram bin is above the specified contrast limit, those pixels are clipped and distributed uniformly to the other bins before the histogram equalization. After equalization, a bilinear interpolation is applied in order to remove artifacts at the tile borders. Then, a Gaussian blurring is performed over the image (see Fig. 2(c)).
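
A minimal sketch of this step, continuing the previous snippet; OpenCV’s CLAHE implementation performs the clipping and the bilinear interpolation between tiles internally:

# Contrast-limited adaptive histogram equalization:
# 17x17 tile grid with a contrast (clip) limit of 2
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(17, 17))
equalized = clahe.apply(green)

# Gaussian blurring to suppress high-contrast pixelation
# (9x9 kernel, standard deviation 2, as described below)
blurred = cv2.GaussianBlur(equalized, (9, 9), 2)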

Fig. 2 Comparison between: (a) the green channel, (b) adaptive histogram equalized green channel, (c) Gaussian blurred image 

The main goal of this process is to reduce the effects of high-contrast pixelation, which can be clearly noticed in Fig. 2(b). The Gaussian filter is applied using a kernel of 9 × 9 pixels and a corresponding standard deviation of 2. The resulting image is further enhanced by applying a Laplacian operator and then performing a bit-wise subtraction from the blurred image. The last step is a Gaussian adaptive thresholding operation using a block size of 27 pixels. At this stage, morphological operations such as erosion and dilation can be applied to improve the image in the case of pixelation. A summary of the algorithm is shown in Fig. 3.
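
A sketch of the enhancement and thresholding steps, again continuing the snippet above. The Laplacian kernel size, the additive constant of the adaptive threshold, and the 3 × 3 morphological kernel are assumptions, since only the 27-pixel block size is specified in the text:

import numpy as np

# Edge enhancement: Laplacian of the blurred image, subtracted
# bit-wise from the blurred image to sharpen vessel boundaries
laplacian = cv2.Laplacian(blurred, cv2.CV_8U, ksize=3)  # ksize assumed
enhanced = cv2.subtract(blurred, laplacian)

# Gaussian adaptive thresholding with a 27-pixel block size;
# THRESH_BINARY_INV leaves vessels white on a black background
mask = cv2.adaptiveThreshold(enhanced, 255,
                             cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                             cv2.THRESH_BINARY_INV, 27, 2)  # constant C assumed

# Optional morphological clean-up in case of residual pixelation
kernel = np.ones((3, 3), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)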

Fig. 3 Schematic overview of the segmentation method used for these tests. The method comprises green channel extraction, histogram equalization, Gaussian blurring, image enhancement, and thresholding 

Next, we describe our method for the comparison of segmented retinal blood vessel images that is based on linking the standard performance metrics with the structural properties of the images through the FD. Our approach can be readily extended to a more complete assessment by including metrics of the connectivity, area, and length of the segmented vessels (Aquino, Gegúndez, Bravo, & Marín, 2010; Garg & Gupta, 2016; Kolar et al., 2013).

Fig. 4 shows an example, from one of the processed images, of the original retinal picture, the automatically obtained image (generic algorithm, described in the next section), and the manually labeled images. It should be clear that, although the binary images look similar, they are not the same.

Fig. 4 Comparison of the different vascular images: (a) original image, (b) automatically extracted, (c) manually-labeled mask 1, and (d) manually-labeled mask 2 

For our numerical experiments, we made use of the retinal images from the DRIVE database (Staal et al., 2004). Quantitative evaluation of the algorithm’s performance, and of the accuracy of the extraction of the vascular tree, was done based on the difference images resulting from the bit-wise (pixel-based) subtraction with manual mask 2 as the reference, i.e., the difference between the binary images in Fig. 4.
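
A minimal sketch of this comparison, assuming both masks are binary images of equal size and reading the “bit-wise difference” as the set of pixels where the two masks disagree (the variable names are illustrative):

# Pixels where the segmentation and the reference mask differ;
# for binary images, XOR marks exactly the disagreeing pixels
difference = cv2.bitwise_xor(mask, reference_mask2)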

Fig. 5 shows the difference images obtained between the automatically extracted image and manually labeled mask 2 (between panels (b) and (d) in Fig. 4), as well as between the manually labeled images (between panels (c) and (d) in Fig. 4). There are important aspects that need to be highlighted. One can clearly see that there can be significant differences between the manually labeled images. These differences occur mainly in the small vessels, which are inherently difficult to identify and extract. More importantly, it can be seen that the small vessels in the difference image actually preserve their structure, which means that even the manually labeled images can miss entire vascular ramifications. Recognizing the existence of these differences between manually labeled images is of critical importance, as they are used as the reference in automated processing. Conversely, when we compare the outcome of the generic algorithm with manually-labeled mask 2 (Fig. 5(a)), there is certainly an error, but there are no well-defined vascular structures, and the white pixels are more uniformly distributed over the image. Thus, the algorithm can recover vascular structures, including thin vessels, with a small error, i.e., only short portions of the ramifications go undetected.

Fig. 5 The bit-wise difference (pixel-based subtraction) between the automatically extracted mask and manually-labeled mask 2 (a), and between the two manually labeled masks (b) 

Both the segmentation algorithm and the evaluation algorithm were programmed in Python using OpenCV libraries, and the retina images and manual segmentations are taken from the DRIVE database.

Results

Standard metrics are commonly used to quantify the performance of the classifiers implemented for the extraction of retinal blood vessels from the fundus image. Given the nature of pixel-based classification, i.e., whether a pixel belongs to a vessel or to the surrounding tissue, four possible events can take place, covering correct and incorrect classifications. These are the so-called true positive (TP), true negative (TN), false positive (FP), and false negative (FN). TP and TN correspond to the correct classification of a pixel as part of a vessel or background, respectively, while FP and FN refer to the pixel’s misclassification. These four classification labels can then be used to construct the following performance metrics:

  1. TPR (True Positive Ratio) is the ratio of pixels correctly detected as vessel pixels to the number of pixels present in the vessel area, i.e., TPR = TP/(TP + FN).

  2. TNR (True Negative Ratio) is the ratio of pixels correctly identified as non-vessel pixels to the number of pixels present in the non-vessel area, i.e., TNR = TN/(TN + FP).

  3. FPR (False Positive Ratio) is the ratio of pixels erroneously identified as vessel pixels to the number of pixels present in the non-vessel area, i.e., FPR = FP/(TN + FP) = 1 - TNR.

  4. FNR (False Negative Ratio) is the ratio of pixels erroneously identified as non-vessel pixels to the number of pixels present in the vessel area, i.e., FNR = FN/(TP + FN) = 1 - TPR.

  5. Accuracy (ACC) is the ratio of the total number of true events (TP + TN) to the total population (total number of pixels in the image), i.e., ACC = (TP + TN)/N_total.

  6. Sensitivity (SN), another name for TPR, is a measure of the ability of the segmentation process to detect vessel pixels and is defined as SN = TPR = TP/(TP + FN) = 1 - FNR. The larger SN, the better the identification of vessel pixels.

  7. Specificity (SP), another name for TNR, is a measure of the ability of the segmentation algorithm to detect background (non-vessel) pixels and is defined as SP = TNR = TN/(TN + FP) = 1 - FPR. The larger SP, the better the identification of non-vessel pixels.

  8. The area under the Receiver Operating Characteristic (ROC) curve, or Area Under Curve (AUC); the ROC curve is the plot of SN versus (1 - SP). The performance of the system is better the closer the curve approaches the top left corner, and the AUC equals 1 for optimal systems.
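
As an illustration, one way these pixel-based metrics can be computed for a pair of binary masks (the function name is hypothetical):

import numpy as np

def standard_metrics(segmented, reference):
    # Nonzero pixels are vessels; zero pixels are background
    seg = segmented > 0
    ref = reference > 0
    tp = np.sum(seg & ref)    # vessel pixels correctly detected
    tn = np.sum(~seg & ~ref)  # background pixels correctly detected
    fp = np.sum(seg & ~ref)   # background misclassified as vessel
    fn = np.sum(~seg & ref)   # vessel misclassified as background
    return {
        "TPR": tp / (tp + fn),        # sensitivity (SN)
        "TNR": tn / (tn + fp),        # specificity (SP)
        "FPR": fp / (tn + fp),
        "FNR": fn / (tp + fn),
        "ACC": (tp + tn) / seg.size,  # total population = all pixels
    }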

These metrics quantify the precision of the correct classification of the segmented pixels as part of the vessels or the background. Detailed comparisons between the performances of various retinal segmentation techniques, evaluated on the databases available in the literature, especially DRIVE and STARE, which are currently the most common datasets for the evaluation of retinal vessel segmentation methods, can be found in (Fraz et al., 2012; Garg & Gupta, 2016; Vostatek et al., 2017).

In Fig. 6 we summarize the standard performance metrics for the case where the automatically extracted mask is compared to manually labeled mask 2, and for the case where the two manually labeled masks are compared to each other. The average value and the standard deviation are calculated from the 20 images tested. Both Table 1, which lists the values shown in Fig. 6, and extended tables showing all the values obtained (Tables 3-6) can be found in the Appendix at the end of this document.

Fig. 6 Graphical representation of the average value and the standard deviation for the standard metrics 

From Fig. 6 it can be seen that the accuracy of the generic algorithm is slightly lower than that obtained from the manually labeled masks; more importantly, the accuracy of the manually labeled images does not reach 100% due to differences in the manual labeling. In any case (see Table 1 in the Appendix), it is the relative error of 2.3% with respect to what can be manually labeled that can be taken as the accuracy of our approach.

Up to this point, we have presented the results obtained by using standard metrics alone. In doing so, we noticed that vessels annotated by different observers may vary both in thickness and in location, resulting in manually-labeled images that can be significantly different, as seen in Fig. 5 and Fig. 6. These limitations impose the need for automatic, self-contained metrics that can reduce, or even eliminate, the human biases possibly involved.

In the following, we use the FD as an auxiliary self-contained metric that can help to connect the standard metrics, which are calculated blindly from the outcomes of the segmentation algorithm, to the actual structural properties of the image. It is well known that fractal geometry is an effective tool for the characterization of irregular shapes and that the FD can be a good descriptor of their complexity. In our case, since we are working with two-dimensional objects, i.e., planar images, the FD can lie between 1 and 2.

Some properties of fractals have been used in medical image analysis, for example for texture analysis (Chen, Daponte, & Fox, 1989; Lopes & Betrouni, 2009). In the case of the analysis of human retinal vessels, it has been reported that a healthy eye has an FD of around 1.7 (Family, Masters, & Platt, 1989; Mainster, 1990; Popovic et al., 2018). However, it has also been shown that, due to its underlying dependence on the structural properties of the image, the FD is sensitive to a number of other factors, both of biological origin, e.g., age, cataracts, and lens opacity (Cheung et al., 2012), changes in blood pressure of different origins (Sng et al., 2010; Zhu et al., 2014), an existing diabetic condition (Aliahmad et al., 2014), and cognitive dysfunctions (Taylor et al., 2015), and of numerical origin, e.g., the size and location of the region of interest (Aliahmad et al., 2014; Huang et al., 2015) or the specific stages in the pre-processing procedure (Che Azemin et al., 2016). Due to this overall ‘instability’ of the FD, which can be significant in some cases, one should not base a quantitative analysis for diagnosis purposes solely on the value of the FD (Huang et al., 2015). In our approach, we do not attempt to provide a diagnosis using the FD, but rather suggest its use as feedback to the segmentation algorithm through its correlation with the standard metrics (see Fig. 7).

Fig. 7 Flow diagram of the proposed fractal dimension-based correlation analysis. The upper dashed rectangle indicates the image segmentation (generic, intensity based algorithm in our case) while the bottom one indicates the proposed correlation-based analysis. Potentially, the outcome of this analysis can be used to calibrate the parameters involved in the segmentation to optimize the algorithm’s performance, as indicated by the dotted arrows 

Following the so-called box-counting method, which is a common approach for this estimation, the FD is approximated from the number of boxes of size r needed to cover the object, N(r), which typically increases as the box size r decreases; the FD is then given by FD = lim_(r→0) log N(r) / log(1/r) (Sarkar & Chaudhuri, 1994). Using this approach, the FD was calculated for the manually segmented masks 1 and 2, as well as for the automatically segmented mask. Additionally, besides applying it to the entire images, we extended this box-counting calculation to subregions of the original image and averaged the FD obtained over all the sub-images. This was done for 4 (2x2), 9 (3x3), 16 (4x4), and 25 (5x5) sub-images, respectively.
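
A straightforward box-counting sketch for a binary mask, assuming the mask is nonempty; the slope of log N(r) versus log(1/r) over a range of box sizes approximates the limit above:

import numpy as np

def fractal_dimension(mask, min_box=2):
    # Boolean image: True where the structure (vessel) is present
    pixels = mask > 0
    counts, sizes = [], []
    r = max(pixels.shape) // 2
    while r >= min_box:
        # Count boxes of side r containing at least one vessel pixel
        n = sum(pixels[i:i + r, j:j + r].any()
                for i in range(0, pixels.shape[0], r)
                for j in range(0, pixels.shape[1], r))
        counts.append(n)
        sizes.append(r)
        r //= 2
    # FD is the slope of log N(r) against log(1/r)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                          np.log(counts), 1)
    return slope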

In Fig. 8, the FD is presented as a function of the number of subdivisions for the raw binary masks (Fig. 8(a)) and for the difference images (Fig. 8(b)). The values plotted correspond to the average obtained from the 20 images processed; the error bar represents the standard deviation. From Fig. 8(a), the FD obtained from our approach closely matches that from manually labeled mask 2. In terms of the difference images (Fig. 8(b)), the subtraction of the manually labeled masks leads to a smaller FD due to the presence of more organized structures; conversely, the subtraction of the automatically extracted mask from the manually labeled one has a larger FD due to the random pattern that covers the image more uniformly. The complete data set from which these values were extracted can be found in the Appendix.

Fig. 8 Fractal dimension, as a function of the number of sub-images: of the raw binary masks (a), and of the difference images obtained from the bit-wise (pixel-based) subtraction (b) 

To quantitatively evaluate the relation between the FD and the standard metrics, a linear regression analysis was performed, and Pearson’s correlation coefficient was calculated for the different data sets, e.g., the correlation between the TNR and the FD for different numbers of sub-images. Fig. 9 summarizes the correlations between all these datasets; the corresponding numerical values can be found in Table 2 in the Appendix. The algorithm’s identification of vessel pixels is only mildly sensitive to the structure’s complexity, as indicated by the relatively low correlation between the TPR and the FD, but its specificity, i.e., the identification of background (non-vessel) pixels, strongly depends on the complexity of the structure. In other words, the identification of vessel pixels is not critically compromised if the structure’s complexity increases, but the correct identification of background pixels will be more effective for simple structures.
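
A minimal sketch of this analysis for one metric, using SciPy; fd_values and tnr_values stand for the per-image fractal dimensions and TNR values (illustrative names):

from scipy.stats import linregress

# Linear regression of TNR against FD across the 20 test images;
# rvalue is Pearson's correlation coefficient, stderr the standard
# error of the slope
result = linregress(fd_values, tnr_values)
print(f"Pearson r = {result.rvalue:.4f}, "
      f"slope = {result.slope:.4f} +/- {result.stderr:.4f}")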

Fig. 9 Pearson’s correlation between the standard metrics and fractal dimension as a function of the number of subimages analyzed 

To exemplify one of the correlations in Fig. 9, the linear regression analysis is shown in Fig. 10(a) for the case where the FD is correlated with the TNR, for different numbers of sub-images, as indicated. Fig. 10(b) shows the slope calculated from the linear regression analysis as a function of the number of sub-images. In this example, the plot indicates that the correlation between the FD and the TNR becomes stronger with an increasing number of sub-images, but it also shows that this correlation saturates after a certain number of sub-divisions.

Fig. 10 a) Scatter plot showing the strong correlation, obtained from linear regression, between the fractal dimension of the difference images (automatic - manual 2) and the TNR standard metric. The strong correlation remains regardless of the number of sub-images. b) Dependence of the linear regression slope on the number of sub-images. The error bars indicate the standard error of the linear regression 

Discussion

A method for the comparison of segmented retinal blood vessel images based on the FD was presented. As an example, the method was used to compare the vessel segmentation obtained by automatic segmentation against manually segmented images. Linear regression analysis of the standard metrics under image subdivision showed that the standard metrics strongly depend on the image complexity regardless of the sub-regions into which the original image is divided. As a consequence, small relative errors are found when using only standard metrics for comparison. The use of the FD as an auxiliary, self-contained metric, together with subdividing the images under study, helps us to further identify the nature of the differences between segmentation methods. Because the strong correlation between the standard metrics and the FD is preserved regardless of the size of the sub-images, the FD can be used as an automatic, self-contained feedback in an iterative segmentation algorithm, for instance, to optimize the size of the region of interest so as to minimize the dependence of the algorithm’s performance on the actual properties of the image. This can be roughly summarized as Improvement = max{R_xy[standard metrics, FD(Image1 - Image2)]}, where R_xy is the correlation coefficient.

Finally, we highlight that our approach is compatible, and can be used in a complementary manner, with similarity assessment approaches that are based on other aspects of the image’s structure, such as the connectivity, area, and length of the segmented vessels (Vostatek et al., 2017), or the so-called skeleton maps (Fraz et al., 2012; Kirbas & Quek, 2004). This may lead to the development of calibration and optimization approaches based on a set of automatic, self-contained geometrical descriptors used simultaneously, all of which are related to different aspects of the image’s structure. In clinical applications, such an approach could greatly improve the quality and accuracy of the outcome of a segmentation stage.

Acknowledgments

MAE and JRPM would like to thank Universidad de La Salle Bajío for the facilities provided.

References

Abràmoff, M. D., Garvin, M. K., & Sonka, M. (2010). Retinal Imaging and Image Analysis. IEEE Reviews in Biomedical Engineering, 3, 169-208. [ Links ]

Aliahmad, B., Kumar, D. K., Hao, H., Unnikrishnan, P., Che Azemin, M. Z., Kawasaki, R., & Mitchell, P. (2014). Zone Specific Fractal Dimension of Retinal Images as Predictor of Stroke Incidence. The Scientific World Journal, 2014, 1-7. [ Links ]

Aliahmad, B., Kumar, D. K., Sarossy, M. G., & Jain, R. (2014). Relationship between diabetes and grayscale fractal dimensions of retinal vasculature in the Indian population. BMC Ophthalmology, 14(1), 152. [ Links ]

Aquino, A., Gegúndez, M. E., Bravo, J. M., & Marín, D. (2010). A similarity function for global quality assessment of retinal vessel segmentations. World Academy of Science, Engineering and Technology, 43(7), 593-597. [ Links ]

Arredondo, A. (2018). Diabetes duration, HbA1c, and cause-specific mortality in Mexico. The Lancet Diabetes & Endocrinology, 6(6), 429-431. [ Links ]

Cervantes-Castañeda, R. A., Menchaca-Díaz, R., Alfaro-Trujillo, B., Guerrero-Gutiérrez, M., & Chayet-Berdowsky, A. S. (2010). Deficient prevention and late treatment of diabetic retinopathy in Mexico. Gaceta Medica de Mexico, 150(6), 518-526. [ Links ]

Che Azemin, M. Z., Ab Hamid, F., Wang, J. J., Kawasaki, R., & Kumar, D. K. (2016). Box-Counting Fractal Dimension Algorithm Variations on Retina Images. In Lecture Notes in Electrical Engineering (Vol. 362, pp. 337-343). [ Links ]

Chen, C. C., Daponte, J. S., & Fox, M. D. (1989). Fractal feature analysis and classification in medical imaging. IEEE Transactions on Medical Imaging, 8(2), 133-142. [ Links ]

Cheung, C. Y., Thomas, G. N., Tay, W., Ikram, M. K., Hsu, W., Lee, M. L., Wong, T. Y. (2012). Retinal vascular fractal dimension and its relationship with cardiovascular and ocular risk factors. American Journal of Ophthalmology, 154(4), 663-674. [ Links ]

Family, F., Masters, B. R., & Platt, D. E. (1989). Fractal pattern formation in human retinal vessels. Physica D: Nonlinear Phenomena, 38(1-3), 98-103. [ Links ]

Fathi, A., & Naghsh-Nilchi, A. R. (2013). Automatic wavelet-based retinal blood vessels segmentation and vessel diameter estimation. Biomedical Signal Processing and Control, 8(1), 71-80. [ Links ]

Felkel, P., Wegenkittl, R., & Kanitsar, A. (2001). Vessel tracking in peripheral CTA datasets - an overview. Proceedings - Spring Conference on Computer Graphics, SCCG 2001, 232-239. [ Links ]

Fong, D. S., Aiello, L., Gardner, T. W., King, G. L., Blankenship, G., Cavallerano, J. D., Klein, R. (2004). Retinopathy in Diabetes. Diabetes Care, 27(Supplement 1), S84-S87. [ Links ]

Fraz, M. M., Remagnino, P., Hoppe, A., Uyyanonvara, B., Rudnicka, A. R., Owen, C. G., & Barman, S. A. (2012). Blood vessel segmentation methodologies in retinal images - A survey. Computer Methods and Programs in Biomedicine, 108(1), 407-433. [ Links ]

Garg, M., & Gupta, S. (2016). Retinal blood vessel segmentation algorithms: A comparative survey. International Journal of Bio-Science and Bio-Technology, 8(3). [ Links ]

Gegúndez-Arias, M. E., Aquino, A., Bravo, J. M., & Marín, D. (2012). A function for quality evaluation of retinal vessel segmentations. IEEE Transactions on Medical Imaging, 31(2), 231-239. [ Links ]

Gonzalez, R. C., & Woods, R. E. (2001). Image segmentation. In Digital Image Processing (Second Edi). [ Links ]

Huang, F., Zhang, J., Bekkers, E. J., Dashtbozorg, B., & ter Haar Romeny, B. M. (2015). Stability Analysis of Fractal Dimension in Retinal Vasculature. Proceedings of the Ophthalmic Medical Image Analysis Second International Workshop, 1-8. Iowa City, IA: University of Iowa. [ Links ]

Instituto Nacional de Salud Pública. (2016). Encuesta Nacional de Salud y Nutrición de Medio Camino 2016 (ENSANUT MC 2016). In Resultados Nacionales. Cuernavaca, México: Instituto Nacional de Salud Pública. [ Links ]

Jelinek, H., & Cree, M. (2009). Automated Image Detection of Retinal Pathology (H. Jelinek & M. Cree, eds.). CRC Press. [ Links ]

King, H., & Rewers, M. (1993). Diabetes in adults is now a Third World problem. World Health Organization Ad Hoc Diabetes Reporting Group. Ethnicity & Disease, 3 Suppl. [ Links ]

Kirbas, C., & Quek, F. (2004). A review of vessel extraction techniques and algorithms. Computing Surveys, 36(2), 81-121. [ Links ]

Kolar, R., Kubena, T., Cernosek, P., Budai, A., Hornegger, J., Gazarek, J., Odstrcilik, J. (2013). Retinal vessel segmentation by improved matched filtering: evaluation on a new high-resolution fundus image database. IET Image Processing, 7(4), 373-383. [ Links ]

Kuri, S. K. (2015). Automated Segmentation of Retinal Blood Vessels using Optimized Gabor Filter with Local Entropy Thresholding. International Journal of Computer Applications, 114(11), 37-42. [ Links ]

Lee, R., Wong, T. Y., & Sabanayagam, C. (2015). Epidemiology of diabetic retinopathy, diabetic macular edema and related vision loss. Eye and Vision, 2(1), 17. [ Links ]

Lopes, R., & Betrouni, N. (2009). Fractal and multifractal analysis: A review. Medical Image Analysis, 13(4), 634-649. [ Links ]

Mainster, M. A. (1990). The fractal properties of retinal vessels: Embryological and clinical implications. Eye, 4(1), 235-241. [ Links ]

Muangnak, N., Aimmanee, P., Makhanov, S., & Uyyanonvara, B. (2015). Vessel transform for automatic optic disk detection in retinal images. IET Image Processing, 9(9), 743-750. [ Links ]

Niemeijer, M., Staal, J., van Ginneken, B., Loog, M., & Abramoff, M. D. (2004). Comparative study of retinal vessel segmentation methods on a new publicly available database. In J. M. Fitzpatrick & M. Sonka (Eds.), Proc SPIE Med Imaging [San Diego] 2004 (Vol. 5370, p. 648). [ Links ]

O’Hara, M. (2004). Clinical Ophthalmology: A Systemic Approach. In American Journal of Ophthalmology (Fifth edit, Vol. 137). [ Links ]

Pizer, S. M., Amburn, E. P., Austin, J. D., Cromartie, R., Geselowitz, A., Greer, T., Zuiderveld, K. (1987). Adaptive histogram equalization and its variations. Computer Vision, Graphics, and Image Processing, 39(3), 355-368. [ Links ]

Popkin, B. M., Adair, L. S., & Ng, S. W. (2012). Global nutrition transition and the pandemic of obesity in developing countries. Nutrition Reviews, 70(1), 3-21. [ Links ]

Popovic, N., Radunovic, M., Badnjar, J., & Popovic, T. (2018). Fractal dimension and lacunarity analysis of retinal microvascular morphology in hypertension and diabetes. Microvascular Research, 118, 36-43. [ Links ]

Sarkar, N., & Chaudhuri, B. B. (1994). An efficient differential box-counting approach to compute fractal dimension of image. IEEE Transactions on Systems, Man, and Cybernetics, 24(1), 115-120. [ Links ]

Siddalingaswamy, P. C., & Prabhu, K. G. (2010). Automatic detection of multiple oriented blood vessels in retinal images. Journal of Biomedical Science and Engineering, 03(01), 101-107. [ Links ]

Sng, C. C. A., Sabanayagam, C., Lamoureux, E. L., Liu, E., Lim, S. C., Hamzah, H., & Wong, T. Y. (2010). Fractal analysis of the retinal vasculature and chronic kidney disease. Nephrology Dialysis Transplantation, 25(7), 2252-2258. [ Links ]

Staal, J., Abramoff, M. D., Niemeijer, M., Viergever, M. A., & van Ginneken, B. (2004). Ridge-Based Vessel Segmentation in Color Images of the Retina. IEEE Transactions on Medical Imaging, 23(4), 501-509. [ Links ]

Taylor, A. M., MacGillivray, T. J., Henderson, R. D., Ilzina, L., Dhillon, B., Starr, J. M., & Deary, I. J. (2015). Retinal Vascular Fractal Dimension, Childhood IQ, and Cognitive Ability in Old Age: The Lothian Birth Cohort Study 1936. PLOS ONE, 10(3), e0121119. [ Links ]

Vostatek, P., Claridge, E., Uusitalo, H., Hauta-Kasari, M., Fält, P., & Lensu, L. (2017). Performance comparison of publicly available retinal blood vessel segmentation methods. Computerized Medical Imaging and Graphics, 55, 2-12. [ Links ]

Yan, Z., Yang, X., & Cheng, K.-T. (2018). A Skeletal Similarity Metric for Quality Evaluation of Retinal Vessel Segmentation. IEEE Transactions on Medical Imaging, 37(4), 1045-1057. [ Links ]

Zhu, P., Huang, F., Lin, F., Li, Q., Yuan, Y., Gao, Z., & Chen, F. (2014). The Relationship of Retinal Vessel Diameters and Fractal Dimensions with Blood Pressure and Cardiovascular Risk Factors. PLoS ONE, 9(9), e106551. [ Links ]

Zuiderveld, K. (1994). Contrast Limited Adaptive Histogram Equalization. In Graphics Gems (pp. 474-485). Elsevier. [ Links ]

Appendixes

Table 1 Summary of the standard performance results 

Standard    Automatically Extracted vs.    Manually Labeled 1 vs.
Metrics     Manually Labeled 2             Manually Labeled 2
TPR         0.6591 ± 0.0450                0.8066 ± 0.0443
TNR         0.9612 ± 0.0096                0.9674 ± 0.0093
FPR         0.0388 ± 0.0096                0.0326 ± 0.0093
FNR         0.3409 ± 0.0450                0.1934 ± 0.0443
ACC         0.9239 ± 0.0072                0.9473 ± 0.0048

Table 2 Summary of the Pearson’s correlation coefficient between the standard performance metrics and the fractal dimension of the difference images 

            Number of sub-images
Metric      1         4         9         16        25
TPR          0.5368    0.7016    0.6977    0.6157    0.5707
TNR         -0.8620   -0.9145   -0.9246   -0.9356   -0.9297
FPR          0.8620    0.9145    0.9246    0.9356    0.9297
FNR         -0.5368   -0.7016   -0.6977   -0.6157   -0.5707
ACC         -0.2061   -0.1627   -0.1819   -0.2664   -0.3070

Table 3 An extended table of the standard performance metrics obtained for the case where the automatically extracted mask (generic algorithm) is compared to the manually labeled mask 2 

Image # Automatically extracted vs. Manually-labeled mask 2
TPR TNR FPR FNR ACC SN SP
1 0.7122 0.9674 0.0326 0.2878 0.9346 0.7122 0.9674
2 0.7019 0.9683 0.0317 0.2981 0.9290 0.7019 0.9683
3 0.6310 0.9695 0.0305 0.3690 0.9254 0.6310 0.9695
4 0.6687 0.9598 0.0402 0.3313 0.9228 0.6687 0.9598
5 0.6574 0.9698 0.0302 0.3426 0.9330 0.6574 0.9698
6 0.5835 0.9699 0.0301 0.4165 0.9177 0.5835 0.9699
7 0.7184 0.9432 0.0568 0.2816 0.9199 0.7184 0.9432
8 0.6489 0.9634 0.0366 0.3511 0.9327 0.6489 0.9634
9 0.5778 0.9682 0.0318 0.4222 0.9225 0.5778 0.9682
10 0.6821 0.9641 0.0359 0.3179 0.9347 0.6821 0.9641
11 0.6928 0.9425 0.0575 0.3072 0.9124 0.6928 0.9425
12 0.6673 0.9565 0.0435 0.3327 0.9229 0.6673 0.9565
13 0.6122 0.9661 0.0339 0.3878 0.9142 0.6122 0.9661
14 0.7185 0.9530 0.0470 0.2815 0.9274 0.7185 0.9530
15 0.7008 0.9410 0.0590 0.2992 0.9150 0.7008 0.9410
16 0.6752 0.9582 0.0418 0.3248 0.9233 0.6752 0.9582
17 0.6774 0.9642 0.0358 0.3226 0.9332 0.6774 0.9642
18 0.6413 0.9596 0.0404 0.3587 0.9172 0.6413 0.9596
19 0.6355 0.9704 0.0296 0.3645 0.9221 0.6355 0.9704
20 0.5792 0.9695 0.0305 0.4208 0.9171 0.5792 0.9695
Avg 0.6591 0.9612 0.0388 0.3409 0.9239 0.6591 0.9612
Std 0.0450 0.0096 0.0096 0.0450 0.0072 0.0450 0.0096

Table 4 An extended table of the standard performance metrics obtained for the case where the two manually labeled masks are compared to each other 

Image # Manually-labeled mask 1 vs. Manually-labeled mask 2
TPR TNR FPR FNR ACC SN SP
1 0.8122 0.9694 0.0306 0.1878 0.9492 0.8122 0.9694
2 0.8359 0.9690 0.0310 0.1641 0.9494 0.8359 0.9690
3 0.8317 0.9569 0.0431 0.1683 0.9406 0.8317 0.9569
4 0.8223 0.9669 0.0331 0.1777 0.9485 0.8223 0.9669
5 0.8499 0.9597 0.0403 0.1501 0.9467 0.8499 0.9597
6 0.7872 0.9598 0.0402 0.2128 0.9365 0.7872 0.9598
7 0.8745 0.9535 0.0465 0.1255 0.9453 0.8745 0.9535
8 0.8504 0.9527 0.0473 0.1496 0.9427 0.8504 0.9527
9 0.7712 0.9692 0.0308 0.2288 0.9461 0.7712 0.9692
10 0.8225 0.9623 0.0377 0.1775 0.9477 0.8225 0.9623
11 0.8170 0.9645 0.0355 0.1830 0.9468 0.8170 0.9645
12 0.8286 0.9675 0.0325 0.1714 0.9513 0.8286 0.9675
13 0.7762 0.9674 0.0326 0.2238 0.9393 0.7762 0.9674
14 0.8334 0.9697 0.0303 0.1666 0.9549 0.8334 0.9697
15 0.7676 0.9767 0.0233 0.2324 0.9541 0.7676 0.9767
16 0.8261 0.9670 0.0330 0.1739 0.9496 0.8261 0.9670
17 0.8369 0.9631 0.0369 0.1631 0.9495 0.8369 0.9631
18 0.7400 0.9812 0.0188 0.2600 0.9491 0.7400 0.9812
19 0.7576 0.9868 0.0132 0.2424 0.9538 0.7576 0.9868
20 0.6908 0.9840 0.0160 0.3092 0.9446 0.6908 0.9840
Avg 0.8066 0.9674 0.0326 0.1934 0.9473 0.8066 0.9674
Std 0.0443 0.0093 0.0093 0.0443 0.0048 0.0443 0.0093

Table 5 An extended table of the fractal dimension, for a different number of image sub-divisions, of the difference image obtained from the bitwise (pixel-based) subtraction between the automatically extracted (generic algorithm) and the manually labeled mask 2 

Image #   Fractal Dimension (number of sub-images)
          1        4        9        16       25
1 1.8431 1.6719 1.4667 1.3223 1.0948
2 1.8430 1.6568 1.4614 1.3182 1.1513
3 1.8457 1.6709 1.4553 1.3140 1.0820
4 1.8399 1.6539 1.4525 1.3473 1.1617
5 1.8552 1.6201 1.4028 1.2940 1.1144
6 1.8354 1.6113 1.3976 1.2881 1.1074
7 1.8913 1.7615 1.5950 1.4941 1.3133
8 1.8515 1.6857 1.5069 1.3615 1.1542
9 1.8631 1.6493 1.4431 1.3421 1.1637
10 1.8655 1.6715 1.4680 1.3463 1.1766
11 1.8899 1.7601 1.5743 1.4744 1.2780
12 1.8838 1.7230 1.5374 1.4293 1.2579
13 1.8470 1.6380 1.4308 1.3122 1.1106
14 1.8752 1.7327 1.5437 1.4345 1.2153
15 1.8855 1.7589 1.5908 1.4458 1.2932
16 1.8685 1.6938 1.5000 1.3783 1.1993
17 1.8561 1.6855 1.4620 1.3487 1.1749
18 1.8632 1.6924 1.4921 1.4011 1.2122
19 1.8278 1.6027 1.3900 1.2552 1.0868
20 1.8335 1.6067 1.3828 1.2673 1.0870
Avg 1.8582 1.6773 1.4777 1.3587 1.1717
STD 0.0194 0.0502 0.0646 0.0681 0.0719

Table 6 An extended table of the fractal dimension, for a different number of image sub-divisions, of the difference image obtained from the bitwise (pixel-based) subtraction between the manually labeled mask 1 and the manually labeled mask 2 

Image #   Fractal Dimension (number of sub-images)
          1        4        9        16       25
1 1.7869 1.5952 1.4720 1.2916 1.0671
2 1.7932 1.6023 1.4539 1.2790 1.1228
3 1.7879 1.6339 1.4906 1.4029 1.1700
4 1.7872 1.6078 1.4678 1.3693 1.2386
5 1.8065 1.6563 1.5171 1.3917 1.1986
6 1.7962 1.6517 1.5231 1.3969 1.2370
7 1.7961 1.6528 1.5338 1.4322 1.2488
8 1.7807 1.6151 1.5308 1.4038 1.2401
9 1.7767 1.5954 1.4896 1.3162 1.1643
10 1.7838 1.6262 1.5044 1.3964 1.2468
11 1.8236 1.6612 1.5267 1.4127 1.2264
12 1.7692 1.6290 1.4477 1.2887 1.1532
13 1.7970 1.6407 1.4600 1.2992 1.1532
14 1.7737 1.6019 1.4376 1.3504 1.1035
15 1.7202 1.4898 1.3330 1.2373 1.0192
16 1.7755 1.5769 1.4093 1.3028 1.0996
17 1.7710 1.5763 1.4389 1.3222 1.1087
18 1.6972 1.4090 1.2258 1.1480 0.9547
19 1.7094 1.4340 1.2119 1.0110 0.9217
20 1.6855 1.4280 1.2983 1.1301 0.9720
Avg 1.7709 1.5842 1.4386 1.3091 1.1323
STD 0.0375 0.0791 0.0972 0.1088 0.1022

Received: February 26, 2019; Accepted: April 04, 2019

* Corresponding author: Rafael Guzmán Cabrera, E-mail: guzmanc@ugto.mx

This is an open-access article distributed under the terms of the Creative Commons Attribution License.