Computación y Sistemas

On-line version ISSN 2007-9737 / Print version ISSN 1405-5546

Comp. y Sist. vol.23 n.2 Ciudad de México Apr./Jun. 2019  Epub Mar 10, 2021

https://doi.org/10.13053/cys-23-2-3202 

Articles of the thematic issue

Dark Channel Applied for Reduction of the Effects of Non-uniform Illumination in Image Binarization

Sebastián Salazar Colores1  2  * 

Mariano Garduño Aparicio3 

Eduardo Ulises Moya Sánchez2  4 

Claudia Victoria Lopez Torres1 

Juan Manuel Ramos Arreguín3 

1 Universidad Autónoma de Querétaro, Facultad de Informática, México. s.salazarcolores@gmail.com, azul.cielo.2007@gmail.com

2 Barcelona Supercomputing Center, High-Performance Computing Artificial Intelligence, España. dr.ulisesmoya@gmail.com

3 Universidad Autónoma de Querétaro, Facultad de Ingeniería, México. magaap@yahoo.com.mx, jramos@mecamex.net

4 Universidad Autónoma de Guadalajara, México


Abstract:

Non-uniform illumination is a common issue in images acquired in uncontrolled environments. Eliminating or reducing non-uniform illumination is required in order to obtain an accurate image binarization. This paper introduces the combination of the dark channel and the atmospheric scattering model with k-means segmentation to reduce the effects of non-uniform illumination conditions in image binarization. The results show the effectiveness and robustness of this approach.

Keywords: Non-uniform illumination; uneven illumination; dark channel; binarization

1 Introduction

Uneven illumination is a common problem in uncontrolled environments and degrades the performance of computer vision systems that use the acquired images. Non-uniform illumination affects digital image processing operations such as classification [13], segmentation [19], and pattern recognition [9]. To observe the effects of the proposed approach, in this paper the method is applied to binarization. Binarization is the process of converting a grayscale or color image into a binary image [3] and is the simplest case of image segmentation [17]. A binary image contains only two values: foreground pixels, which form the objects of interest, and background pixels [20].

Building robust algorithms that binarize an image effectively regardless of the lighting conditions is not a trivial task, but the final results are important because many applications rely on binary images, such as handwriting recognition in documents [16], fingerprint recognition [2], analysis of brain MR images [11], license plate recognition [11], and defect detection in production lines [15].

Thresholding methods can be divided into six main classes. Methods based on histogram shape analysis [25] examine the peaks, valleys, and curvatures of the smoothed histogram. Clustering methods group the gray-level samples into background and foreground. Entropy-based methods [23] lead to algorithms that use the entropy of the foreground and background regions. Methods based on object attributes [24] search for a measure of similarity between the gray-level and the binarized images, such as fuzzy shape similarity or edge coincidence. Spatial methods [7] use higher-order probability distributions and/or the correlation between pixels. Finally, local methods [12] adapt the threshold of each pixel to the characteristics of the local image. Other approaches are region growing [8] and clustering-based algorithms [1].

In addition to the approaches presented above, other color models that separate illumination into an independent channel, such as HSV, are commonly used to binarize and segment images affected by non-uniform illumination [14].

This paper presents an alternative approach, applied to the binarization of a blue base (background) with three objects on it (two bright gray objects and one opaque green one). The approach uses the dark channel and the scattering model to improve the output. Both techniques are widely used to remove the effects of weather conditions such as haze, smog, or rain from acquired images; these processes are also known as dehazing algorithms [4].

In this context, they are applied to remove the contribution of the illumination and to improve the color of the image pixels. Subsequently, a cluster segmentation is performed with the K-means algorithm to reduce the number of possible pixel values. Finally, the pixels whose B channel is larger than their R and G channels are separated from the others, forming and highlighting the borders of the final shapes. The effectiveness of the proposed algorithm is demonstrated through five different tests and a comparative study. The results show that the quality and effectiveness of the algorithm are superior to those of other approaches.

The rest of this paper is organized as follows. Section 2 presents an overview of the underlying concepts: the dark channel, the scattering model, and k-means. Section 3 describes the details of the proposed method. The results are presented in Section 4, and a discussion of the performance of the proposed algorithm is given in Section 5. Finally, conclusions are drawn in Section 6.

2 Background

2.1 The Dark Channel Prior

The dark channel prior proposed in [5] is an observation about haze-free images acquired in outdoor environments: in most local regions of an image (excluding the sky), at least one color channel (R, G, or B), called the dark channel, has some pixels with very low intensity. In other words, the minimum intensity in such local regions is very low (close to 0). For an image I(x), the dark channel is defined as:

I^{dark}(x) = \min_{c \in \{R,G,B\}} \left( \min_{z \in \Omega(x)} \frac{I^c(z)}{A^c} \right), (1)

where Ω(x) is the patch centered at x, I^c denotes the color channels (R, G, or B) of I, and z are the pixels contained in Ω(x). The dark channel prior states that:

I^{dark}(x) \to 0. (2)
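As an illustration, a minimal sketch of the dark channel of Equation (1) in Python/NumPy is given below. The function name, the patch size, and the assumption that the input `img` is a float RGB array already normalized by A^c are choices made here for illustration, not part of the original paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Equation (1): per-pixel minimum over the color channels,
    followed by a local minimum over the patch Omega(x)."""
    min_rgb = img.min(axis=2)                   # min over R, G, B at each pixel
    return minimum_filter(min_rgb, size=patch)  # min over the square patch Omega(x)
```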

2.2 The Scattering Model

The scattering model is shown in Equation (3):

I(x) = J(x)\,t(x) + A\,(1 - t(x)), (3)

where I(x) is the observed intensity in each of the three channels (R, G, B) of pixel x, J(x) is the (R, G, B) intensity vector of the original scene point represented by pixel x, A is the color vector of the global atmospheric light, and t(x), called the transmission, describes the portion of the light that is not scattered or absorbed and reaches the camera.
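Once A and t(x) have been estimated, Equation (3) can be inverted to recover J(x). A minimal sketch follows; the lower bound `t0` on the transmission is a common safeguard against division by near-zero values and is an assumption here, not something specified in the paper.

```python
import numpy as np

def recover_scene(img, A, t, t0=0.1):
    """Invert Equation (3): J(x) = (I(x) - A) / max(t(x), t0) + A."""
    t = np.clip(t, t0, 1.0)[..., np.newaxis]  # broadcast t(x) over the 3 channels
    return (img - A) / t + A
```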

2.3 The K-means Algorithm

The K-means algorithm is an unsupervised clustering algorithm that classifies input data points into multiple classes according to the inherent distance between them. The algorithm assumes that the data features form a vector space and tries to find a natural grouping among them.

The points are grouped around the centroids \mu_i, i = 1, \dots, k, and these are obtained by minimizing the following objective function (Equation (4)):

V = \sum_{i=1}^{k} \sum_{x_j \in S_i} \left\| x_j - \mu_i \right\|^2, (4)

where there are k clusters S_i, i = 1, 2, \dots, k, and \mu_i is the centroid (mean point) of all the points x_j \in S_i. As part of this work, an iterative version of the algorithm was implemented, which takes a two-dimensional image as input. The algorithm is as follows:

  • 1. Calculate the intensity distribution (histogram) of the image.

  • 2. Initialize the centroids with k random intensities.

  • 3. Repeat the following two steps until the cluster labels of the image no longer change.

  • 4. Cluster the points based on the distance of their intensities from the centroid intensities (Equation (5)):

    c_i \leftarrow \operatorname*{arg\,min}_{j} \left\| x_i - \mu_j \right\|^2. (5)

  • 5. Compute the new centroid for each of the clusters (Equation (6)):

    \mu_j = \frac{\sum_{i=1}^{m} \mathbf{1}\{c_i = j\}\, x_i}{\sum_{i=1}^{m} \mathbf{1}\{c_i = j\}}. (6)

Here i iterates over all the intensities, j iterates over all the centroids, and \mu_j are the centroid intensities [3].
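A minimal sketch of this intensity-based k-means in Python/NumPy is given below. The function name, the default number of clusters, the iteration cap, and the random seed are illustrative choices, not part of the authors' implementation.

```python
import numpy as np

def kmeans_intensities(img, k=3, max_iter=100, seed=0):
    """Cluster the pixel intensities of a grayscale image into k groups
    using the update rules of Equations (5) and (6)."""
    rng = np.random.default_rng(seed)
    x = img.reshape(-1).astype(float)              # flatten the intensities
    mu = rng.choice(x, size=k, replace=False)      # step 2: k random initial centroids
    labels = np.full(x.shape, -1, dtype=int)
    for _ in range(max_iter):
        # Step 4 / Equation (5): assign each intensity to its nearest centroid.
        new_labels = np.argmin(np.abs(x[:, None] - mu[None, :]), axis=1)
        if np.array_equal(new_labels, labels):     # step 3: stop when labels are stable
            break
        labels = new_labels
        # Step 5 / Equation (6): recompute each centroid as the mean of its cluster.
        for j in range(k):
            if np.any(labels == j):
                mu[j] = x[labels == j].mean()
    return labels.reshape(img.shape), mu
```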

3 Proposed Method

The proposed method is based on the following observation: once the image is normalized by the RGB color vector of the estimated atmospheric light, the contribution of the light becomes the RGB vector [1, 1, 1]. Hence, the contribution of the light to the image is directly related to the dark channel: low dark-channel values indicate a low contribution of the light and high values indicate a high contribution, as shown in Fig. 1.

Fig. 1 The relation between the illumination and the dark channel in a normalized image 

Therefore, when the scattering model and the dark channel are applied, the variations of light tend to be compensated.

The study case is shown in Fig. 2(a). The main objective is to binarize the image of two metallic pieces and a reference square; the results could be used in industrial applications involving metallic pieces, for example shape detection, production quality control, and product sizing.

Fig. 2 The study case a) Input image b) results of a naive binarization 

With this, the image should be grouped into the background (blue) and the foreground (two gray regions and one green).

Fig. 2(b) shows the result of the simplest algorithm, obtained by applying Equation (8) directly. It illustrates the common problems in separating the regions of the image: the illumination plays the dominant role and causes errors in shape detection because of its influence on brightness and color.
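For comparison, this naive baseline amounts to applying Equation (8) directly to the raw input. A one-line sketch, assuming a NumPy RGB image `img`, could be:

```python
import numpy as np

# Equation (8) applied directly to the raw image, with channels assumed in RGB order.
naive = np.where((img[..., 2] >= img[..., 0]) & (img[..., 2] >= img[..., 1]), 0, 1)
```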

The proposed algorithm is described in the flowchart shown in Fig. 3, and examples of its execution are shown in Fig. 4.

Fig. 3 The proposed algorithm 

Fig. 4 An example of the different stages of the proposed algorithm. (a) Input image, (b) normalized image, (c) dark channel image, (d) image result of applying the scattering model, (e) image result of k-means, (f) image binary result 

The proposed algorithm can be described as follows (a minimal sketch of the full pipeline is given after the list):

  • 1. The algorithm takes the image I as input (Fig. 4(a)).

  • 2. The Atmospheric Light A, an RGB vector that carries the information of the light source, is estimated as in [6]:

    A = \max_{c \in \{1,2,3\}} I^{c}\!\left( \operatorname*{arg\,max}_{x \,\in\, 0.1\% \cdot h \cdot w} I^{dark}(x) \right), (7)

    where h and w are the height and width of I, respectively, and the arg max is taken over the 0.1% · h · w brightest pixels of the dark channel.

  • 3. The image is normalized according to the Atmospheric Light A (Fig. 4(b)).

  • 4. The Dark channel of the normalized image I is calculated (Fig. 4(c)).

  • 5. The scattering model is applied to obtain an image without the contribution of the Atmospheric Light A component (Fig. 4(d)).

  • 6. The image is segmented with k-means to reduce the possible values of the pixels in the image (Fig. 4(e)).

  • 7. In order to separate the blue background from the foreground and obtain the binary image, the pixels are divided according to the following criterion: when the blue channel has a higher intensity than the other two channels, the pixel is labeled as background; otherwise, it is labeled as foreground (Fig. 4(f)):

    B(x) = \begin{cases} 0 & \text{if } I_B(x) \geq I_R(x) \text{ and } I_B(x) \geq I_G(x), \\ 1 & \text{otherwise.} \end{cases} (8)
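For illustration, a minimal end-to-end sketch of the pipeline is given below, reusing the hypothetical helpers `dark_channel`, `recover_scene`, and `kmeans_intensities` from the earlier sketches. The atmospheric-light estimate is a simplified reading of Equation (7), and the transmission t(x) = 1 - I^dark(x) and the per-channel k-means quantization are simplifying assumptions; none of this code is the authors' Matlab implementation.

```python
import numpy as np

def estimate_atmospheric_light(img, dark):
    """Simplified reading of Equation (7): among the 0.1% brightest pixels of the
    dark channel, take the color of the brightest pixel of I as A."""
    h, w = dark.shape
    n = max(1, int(0.001 * h * w))
    idx = np.argsort(dark.reshape(-1))[-n:]        # brightest 0.1% of the dark channel
    candidates = img.reshape(-1, 3)[idx]
    return candidates[candidates.sum(axis=1).argmax()]

def proposed_binarization(img, patch=15, k=4):
    """Steps 1-7 of the proposed method (sketch, not the authors' implementation)."""
    dark = dark_channel(img, patch)
    A = estimate_atmospheric_light(img, dark)
    norm = img / A                                 # step 3: normalize by A
    t = 1.0 - dark_channel(norm, patch)            # transmission estimate (assumption)
    J = recover_scene(norm, np.array([1.0, 1.0, 1.0]), t)   # step 5: remove the light term
    quantized = []                                 # step 6: k-means, here per channel
    for c in range(3):
        labels, mu = kmeans_intensities(J[..., c], k)
        quantized.append(mu[labels])
    R, G, B = quantized                            # assuming RGB channel order
    return np.where((B >= R) & (B >= G), 0, 1)     # step 7 / Equation (8)
```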

4 Results

In order to evaluate the binarization performance of the proposed algorithm, a test was carried out on five different images acquired under different non-uniform lighting conditions. The test images contain photographs of two aluminum plaques and a green square over a blue base.

The tests were implemented in Matlab 2016 on a computer with a 3.1 GHz, 4-core processor and 4 GB of RAM. The execution time for the tested images, with a resolution of 1944 x 2592 pixels, was approximately 65 seconds. Fig. 4 shows an example of the application of the proposed algorithm.

In order to perform the comparison study, we consider the following four algorithms: 1) the entropy thresholding presented in [10] (see Fig. 5(b)); 2) the Otsu method, which selects the threshold that minimizes the within-group variance of the two groups of pixels separated by the thresholding operator [22] (see Fig. 5(c)); 3) the adaptive threshold method [18] (see Fig. 5(d)); and 4) the algorithm based on the HSV color model [21] (see Fig. 5(e)). The results of the test are shown in Fig. 5, where the outputs of these algorithms can be compared: changes in the lighting conditions alter their results, and various errors can be observed in the other algorithms.

Fig. 5 Comparison of the binary images produced by the different algorithms. (a) Input image, (b) Entropy thresholding, (c) Otsu algorithm, (d) Adaptive thresholding, (e) HSV binarization, (f) Proposed algorithm binarization 

5 Discussion

From Fig. 5 it can be observed that the algorithms based on entropy (Fig. 5(b)) and on Otsu (Fig. 5(c)) are deeply corrupted by the varying illumination, resulting in poor performance.

The adaptive thresholding algorithm, shown in Fig. 5(d), performs satisfactorily when computing the edges but produces large artifacts in both the background and the foreground. The algorithm based on HSV thresholding, shown in Fig. 5(e), seems more stable; however, small artifact regions (marked in red) can be observed, specifically in rows 1 and 5, where the central squares overlap. On the other hand, the results obtained with the proposed algorithm (Fig. 5(f)) show that it performs an adequate binarization, without artifacts or false regions.

6 Conclusion and Future Work

This paper proposes a methodology for enhancing the performance of image binarization by combining the dark channel, the scattering model, and the k-means algorithm. The results presented in this paper show the effectiveness and robustness of the proposed approach under different lighting conditions.

The proposed algorithm enables more robust future computer vision systems. As future work, a deeper analysis of the quality of the proposed algorithm is planned; it could also be implemented in a recognition system for metallic pieces or in industrial processes that involve this kind of image processing.

Finally, in the experiments carried out, where the illumination conditions, the positions of the metallic pieces, and the time of day (and hence the sunlight) were varied, the proposed algorithm showed better performance than the other state-of-the-art algorithms.

The results show that the proposed algorithm can be used in real problems in which the illumination is not necessarily uniform, or in natural environments under dynamic lighting conditions.

Acknowledgements

Sebastian Salazar-Colores (CVU 477758) would like to thank CONACYT (Consejo Nacional de Ciencia y Tecnología) for the financial support of his PhD studies under Scholarship 285651.

References

1. Achanta, R., Shaji, A., Smith, K., Lucchi, A., Fua, P., & Süsstrunk, S. (2012). SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Transactions on Pattern Analysis and Machine Intelligence.

2. Bayram, S., Sencar, H. T., & Memon, N. (2012). Efficient sensor fingerprint matching through fingerprint binarization. IEEE Transactions on Information Forensics and Security, Vol. 7, No. 4, pp. 1404-1413.

3. Gonzalez, R. C. & Woods, R. E. (2006). Digital Image Processing (3rd Edition). Prentice-Hall, Inc., Upper Saddle River, NJ, USA.

4. He, K., Sun, J., & Tang, X. (2010). Single image haze removal using dark channel prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 33, No. 12, pp. 2341-2353.

5. He, K., Sun, J., & Tang, X. (2011). Single image haze removal using dark channel prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 33, No. 12, pp. 2341-2353.

6. He, R., Wang, Z., Xiong, H., & Feng, D. D. (2012). Single image dehazing with white balance correction and image decomposition. International Conference on Digital Image Computing Techniques and Applications (DICTA), volume 1, Fremantle, Australia, pp. 1-7.

7. Jain, A. K. & Farrokhnia, F. (1991). Unsupervised texture segmentation using Gabor filters. Pattern Recognition.

8. Kamdi, S. & Krishna, R. K. (2012). Image segmentation and region growing algorithm. International Journal of Computer Technology and Electronics Engineering.

9. Kao, W. C., Hsu, M. C., & Yang, Y. Y. (2010). Local contrast enhancement and adaptive feature extraction for illumination-invariant face recognition. Pattern Recognition.

10. Kapur, J. N., Sahoo, P. K., & Wong, A. K. C. (1985). A new method for gray-level picture thresholding using the entropy of the histogram. Computer Vision, Graphics, and Image Processing, Vol. 29, No. 3, pp. 273-285.

11. Lee, C., Huh, S., Ketter, T. A., & Unser, M. (1998). Unsupervised connectivity-based thresholding segmentation of midsagittal brain MR images. Computers in Biology and Medicine, Vol. 28, No. 3, pp. 309-338.

12. Li, C., Huang, R., Ding, Z., Gatenby, J. C., Metaxas, D. N., & Gore, J. C. (2011). A level set method for image segmentation in the presence of intensity inhomogeneities with application to MRI. IEEE Transactions on Image Processing.

13. Lu, Y., Xie, F., Wu, Y., Jiang, Z., & Meng, R. (2015). No reference uneven illumination assessment for dermoscopy images. IEEE Signal Processing Letters.

14. Martinkauppi, J. B., Soriano, M. N., & Laaksonen, M. V. (2001). Behavior of skin color under varying illumination seen by different cameras at different color spaces.

15. Ng, H.-F. (2006). Automatic thresholding for defect detection. Pattern Recognition Letters, Vol. 27, No. 14, pp. 1644-1649.

16. Sauvola, J. & Pietikäinen, M. (2000). Adaptive document image binarization. Pattern Recognition, Vol. 33, No. 2, pp. 225-236.

17. Sezgin, M. & Sankur, B. (2004). Survey over image thresholding techniques and quantitative performance evaluation. Journal of Electronic Imaging, Vol. 13, No. 1, pp. 146-168.

18. Shafait, F., Keysers, D., & Breuel, T. M. (2008). Efficient implementation of local adaptive thresholding techniques using integral images.

19. Shaik, K. B., Ganesan, P., Kalist, V., Sathish, B. S., & Jenitha, J. M. M. (2015). Comparative study of skin color detection and segmentation in HSV and YCbCr color space. Procedia Computer Science.

20. Shapiro, L. G. & Stockman, G. C. (2005). Computer Vision. Prentice Hall.

21. Sural, S., Qian, G., & Pramanik, S. (2002). Segmentation and histogram generation using the HSV color space for image retrieval. Proceedings of the 2002 International Conference on Image Processing, volume 2, pp. II-589-II-592.

22. Vala, M. H. J. & Baxi, A. (2013). A review on Otsu image segmentation algorithm. International Journal of Advanced Research in Computer Engineering & Technology, Vol. 2, No. 2, pp. 387-389.

23. Yan, C., Sang, N., & Zhang, T. (2003). Local entropy-based transition region extraction and thresholding. Pattern Recognition Letters, Vol. 24, No. 16, pp. 2935-2941.

24. Zheng, S., Cheng, M. M., Warrell, J., Sturgess, P., Vineet, V., Rother, C., & Torr, P. H. (2014). Dense semantic image segmentation with objects and attributes. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition.

25. Zhu, N., Wang, G., Yang, G., & Dai, W. (2009). A fast 2D Otsu thresholding algorithm based on improved histogram. Chinese Conference on Pattern Recognition (CCPR 2009), volume 1, Nanjing, China, pp. 1-5.

Received: October 25, 2018; Accepted: February 11, 2019

* Corresponding author is Sebastian Salazar Colores. s.salazarcolores@gmail.com

This is an open-access article distributed under the terms of the Creative Commons Attribution License.