<?xml version="1.0" encoding="ISO-8859-1"?><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id>0016-7169</journal-id>
<journal-title><![CDATA[Geofísica internacional]]></journal-title>
<abbrev-journal-title><![CDATA[Geofís. Intl]]></abbrev-journal-title>
<issn>0016-7169</issn>
<publisher>
<publisher-name><![CDATA[Universidad Nacional Autónoma de México, Instituto de Geofísica]]></publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id>S0016-71692014000300005</article-id>
<title-group>
<article-title xml:lang="en"><![CDATA[Edge enhancement in multispectral satellite images by means of vector operators]]></article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Lira]]></surname>
<given-names><![CDATA[Jorge]]></given-names>
</name>
<xref ref-type="aff" rid="A01"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Rodríguez]]></surname>
<given-names><![CDATA[Alejandro]]></given-names>
</name>
<xref ref-type="aff" rid="A01"/>
</contrib>
</contrib-group>
<aff id="A01">
<institution><![CDATA[Universidad Nacional Autónoma de México, Instituto de Geofísica]]></institution>
<addr-line><![CDATA[México Distrito Federal]]></addr-line>
<country>México</country>
</aff>
<pub-date pub-type="pub">
<day>00</day>
<month>09</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="epub">
<day>00</day>
<month>09</month>
<year>2014</year>
</pub-date>
<volume>53</volume>
<numero>3</numero>
<fpage>289</fpage>
<lpage>308</lpage>
<copyright-statement/>
<copyright-year/>
<self-uri xlink:href="http://www.scielo.org.mx/scielo.php?script=sci_arttext&amp;pid=S0016-71692014000300005&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://www.scielo.org.mx/scielo.php?script=sci_abstract&amp;pid=S0016-71692014000300005&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://www.scielo.org.mx/scielo.php?script=sci_pdf&amp;pid=S0016-71692014000300005&amp;lng=en&amp;nrm=iso"></self-uri><abstract abstract-type="short" xml:lang="es"><p><![CDATA[El realce de bordes es un elemento de análisis para entender la estructura espacial de imágenes de satélite. Se presentan dos métodos para extraer los bordes de imágenes multiespectrales de satélite. Una imagen multiespectral se modela como un campo vectorial de un número de dimensiones igual al número de bandas en la imagen. En este modelo, un pixel se define como un vector formado por un número de elementos igual al número de bandas. Se aplican dos operadores vectoriales a tal campo vectorial. En nuestro primer método, extendemos la definición de gradiente. En esta extensión, se obtiene el vector diferencia del pixel central de una ventana con los pixels vecinos. Se genera entonces una imagen multiespectral donde cada pixel representa el máximo cambio en la respuesta espectral en la imagen en cualquier dirección. A esta imagen se le denomina el gradiente multiespectral. El otro método considera la generalización del Laplaciano por medio de la transformada de Fourier &#951;-dimensional. A esta imagen se le denomina el Laplaciano multiespectral. Los operadores vectoriales realizan una extracción simultánea del contenido de bordes en las bandas espectrales de la imagen multiespectral. Nuestros métodos son libres de parámetros y trabajan para una imagen multiespectral de cualquier número de bandas. Se discuten dos ejemplos que involucran imágenes multiespectrales de satélite a dos escalas. Comparamos nuestros resultados con procedimientos de realce de bordes ampliamente empleados. La evaluación de los resultados muestra un mejor comportamiento de los métodos propuestos en comparación con los operadores de bordes ampliamente usados.]]></p></abstract>
<abstract abstract-type="short" xml:lang="en"><p><![CDATA[Edge enhancement is an element of analysis to derive the spatial structure of satellite images. Two methods to extract edges from multispectral satellite images are presented. A multispectral image is modeled as a vector field with a number of dimensions equal to the number of bands in the image. In this model, a pixel is defined as a vector formed by a number of elements equal to the number of bands. Two vector operators are applied to such a vector field. In our first method, we extend the definition of the gradient. In this extension, the vector difference of the window central pixel with neighboring pixels is obtained. A multispectral image is then generated where each pixel represents the maximum change in spectral response in the image in any direction. This image is named the multispectral gradient. The other method considers the generalization of the Laplacian by means of an &#951;-dimensional Fourier transform. This image is named the multispectral Laplacian. The vector operators perform a simultaneous extraction of edge-content in the spectral bands of a multispectral image. Our methods are parameter-free and work for a multispectral image of any number of bands. Two examples are discussed that involve multispectral satellite images at two scales. We compare our results with widely used edge enhancement procedures. The evaluation of results shows better performance of the proposed methods when compared to widely used edge operators.]]></p></abstract>
<kwd-group>
<kwd lng="es"><![CDATA[detección de bordes]]></kwd>
<kwd lng="es"><![CDATA[imagen multiespectral]]></kwd>
<kwd lng="es"><![CDATA[realce de borde]]></kwd>
<kwd lng="es"><![CDATA[operador vectorial]]></kwd>
<kwd lng="en"><![CDATA[edge detection]]></kwd>
<kwd lng="en"><![CDATA[multispectral image]]></kwd>
<kwd lng="en"><![CDATA[edge enhancement]]></kwd>
<kwd lng="en"><![CDATA[vector operator]]></kwd>
</kwd-group>
</article-meta>
</front><body><![CDATA[  	    <p align="justify"><font face="verdana" size="4">Original paper</font></p>  	    <p align="center"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="center"><font face="verdana" size="4"><b>Edge enhancement in multispectral satellite images by means of vector operators</b></font></p>  	    <p align="center"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="center"><font face="verdana" size="2"><b>Jorge Lira* and Alejandro Rodr&iacute;guez</b></font></p>  	    <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="justify"><font face="verdana" size="2"><i>Instituto de Geof&iacute;sica,</i> <i>Universidad Nacional Aut&oacute;noma de M&eacute;xico Delegaci&oacute;n Coyoac&aacute;n, 04510 M&eacute;xico D.F., M&eacute;xico</i> *Corresponding author: <a href="mailto:jlira@geociencias.unam.mx">jlira@geociencias.unam.mx</a></font></p>  	    <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="justify"><font face="verdana" size="2">Received: May 14, 2013.    ]]></body>
<body><![CDATA[<br> 	Accepted: December 02, 2013.    <br> 	Published on line: July 01, 2014.</font></p>  	    <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="justify"><font face="verdana" size="2"><b>Resumen</b></font></p>  	    <p align="justify"><font face="verdana" size="2">El realce de bordes es un elemento de an&aacute;lisis para entender la estructura espacial de im&aacute;genes de sat&eacute;lite. Se presentan dos m&eacute;todos para extraer los bordes de im&aacute;genes multiespectrales de sat&eacute;lite. Una imagen multiespectral se modela como un campo vectorial de un n&uacute;mero de dimensiones igual al n&uacute;mero de bandas en la imagen. En este modelo, un pixel se define como un vector formado por un n&uacute;mero de elementos igual al n&uacute;mero de bandas. Se aplican dos operadores vectoriales a tal campo vectorial. En nuestro primer m&eacute;todo, extendemos la definici&oacute;n de gradiente. En esta extensi&oacute;n, se obtiene el vector diferencia del pixel central de una ventana con los pixels vecinos. Se genera entonces una imagen multiespectral donde cada pixel representa el m&aacute;ximo cambio en la respuesta espectral en la imagen en cualquier direcci&oacute;n. A esta imagen se le denomina el gradiente multiespectral. El otro m&eacute;todo considera la generalizaci&oacute;n del Laplaciano por medio de la transformada de Fourier &#951;&#45;dimensional. A esta imagen se le denomina el Laplaciano multiespectral. Los operadores vectoriales realizan una extracci&oacute;n simult&aacute;nea del contenido de bordes en las bandas espectrales de la imagen multiespectral. Nuestros m&eacute;todos son libres de par&aacute;metros y trabajan para una imagen multiespectral de cualquier n&uacute;mero de bandas. Se discuten dos ejemplos que involucran im&aacute;genes multiespectrales de sat&eacute;lite a dos escalas. 
Comparamos nuestros resultados con procedimientos de realce de bordes ampliamente empleados. La evaluaci&oacute;n de los resultados muestra un mejor comportamiento de los m&eacute;todos propuestos en comparaci&oacute;n con los operadores de bordes ampliamente usados.</font></p>  	    <p align="justify"><font face="verdana" size="2"><b>Palabras clave:</b> detecci&oacute;n de bordes, imagen multiespectral, realce de borde, operador vectorial.</font></p>  	    <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="justify"><font face="verdana" size="2"><b>Abstract</b></font></p>  	    <p align="justify"><font face="verdana" size="2">Edge enhancement is an element of analysis to derive the spatial structure of satellite images. Two methods to extract edges from multispectral satellite images are presented. A multispectral image is modeled as a vector field with a number of dimensions equal to the number of bands in the image. In this model, a pixel is defined as a vector formed by a number of elements equal to the number of bands. Two vector operators are applied to such a vector field. In our first method, we extend the definition of the gradient. In this extension, the vector difference of the window central pixel with neighboring pixels is obtained. A multispectral image is then generated where each pixel represents the maximum change in spectral response in the image in any direction. This image is named the multispectral gradient. The other method considers the generalization of the Laplacian by means of an &#951;&#45;dimensional Fourier transform. This image is named the multispectral Laplacian. The vector operators perform a simultaneous extraction of edge&#45;content in the spectral bands of a multispectral image. Our methods are parameter&#45;free and work for a multispectral image of any number of bands. Two examples are discussed that involve multispectral satellite images at two scales. 
We compare our results with widely used edge enhancement procedures. The evaluation of results shows better performance of the proposed methods when compared to widely used edge operators.</font></p>  	    <p align="justify"><font face="verdana" size="2"><b>Key words:</b> edge detection, multispectral image, edge enhancement, vector operator.</font></p>  	    ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="justify"><font face="verdana" size="2"><b>Introduction</b></font></p>  	    <p align="justify"><font face="verdana" size="2">Edge detection has been undertaken for gray&#45;level and color images using a number of methods and procedures. Most of the techniques published in the scientific literature in recent years deal with color images.</font></p>  	    <p align="justify"><font face="verdana" size="2">Well&#45;established methods such as the Kirsch, Sobel, Gradient and Laplacian operators have been widely used to extract edges in gray&#45;level images (Pratt, 2001). Bowyer and co&#45;workers (2001) provided a detailed account of a number of edge operators in gray&#45;level images. The reviewed operators carry a set of parameters that needs to be defined in terms of heuristic criteria. Ground&#45;truth images were used to derive a classification of edge operator performance (Bowyer <i>et al</i>., 2001). A deformable contour, defined by a wavelet snake, is designed to identify the boundary of pulmonary nodules in digital chest radiographs (Yoshida, 2003). In this work (Yoshida, 2003), a multi&#45;scale edge representation is obtained by means of the wavelet transform; this produces, however, fragmented edge segments. Therefore, a wavelet snake was used to produce a smooth and closed contour of a pulmonary nodule.</font></p>  	    <p align="justify"><font face="verdana" size="2">Other methods to detect edges in gray&#45;level images use fuzzy logic. Segmentation of a fuzzy image into regions of similar image properties was achieved by means of a fuzzy procedure (Bigand <i>et al</i>., 2001). This method works with fuzzy&#45;like and noisy images. Zero crossings that correspond to gradient maxima were obtained by means of the cosine transform in noisy images (Sundaram, 2003). 
This scheme favors the detection of weak edges in background noise and suppresses false edges.</font></p>  	    <p align="justify"><font face="verdana" size="2">The modeling of natural RGB images as vector fields has been exploited to detect edges in color images (Koschan and Abidi, 2005; Evans and Liu, 2006). Koschan and Abidi (2005) provide an overview of color edge detection techniques; in particular, generalizations of the Canny and Cumani operators to color spaces were discussed with examples. Evans and Liu (2006) provide a review of color edge detectors.</font></p>  	    <p align="justify"><font face="verdana" size="2">A parameter&#45;free approach was obtained when an automatically determined threshold was calculated using a model&#45;based design (Fan <i>et al</i>., 2001). With this approach, a color&#45;image edge operator is derived. Cellular neural networks applied to color images resulted in a model to detect edges (Li <i>et al</i>., 2008). This model was successfully applied to RGB images with color test patterns. In addition to these results, the authors provided a detailed review of color edge detection techniques.</font></p>  	    <p align="justify"><font face="verdana" size="2">Recent advances in edge enhancement for color images show clear advantages over methods for mono&#45;spectral images (Xu <i>et al</i>., 2010; Chen and Chen, 2010; Nezhadarya and Kreidieh, 2011; Gao <i>et al</i>., 2011; Chu <i>et al</i>., 2013). Color images are increasingly used in many applications such as surveillance, computer vision and robotics. Multispectral satellite images are available at several scales. For these two groups of images, edge enhancement is an element of structural analysis.</font></p>  	    <p align="justify"><font face="verdana" size="2">A general method is needed that works for any number of bands, with no parameters and a reasonable computing time. 
To fulfill this goal, we model a multispectral satellite image by means of a vector field. The dimension of this field equals the number of bands of the image. On this field, we may apply vector operators. We compare our results with those obtained from conventional edge operators (Pratt, 2001; Bowyer <i>et al</i>., 2001). We carry out a detailed evaluation of our results. This evaluation includes qualitative and quantitative analysis. Our evaluation shows a clear improvement with respect to conventional edge operators.</font></p>  	    <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>  	    ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2"><b>Study area and data</b></font></p>  	    <p align="justify"><font face="verdana" size="2">Two multispectral satellite images were used to test the performance of our method at different scales. Both images cover a portion of Mexico City where the runways of an airport are clearly visible. One of the images is formed by the visible and near infrared (VNIR) bands of the Advanced Spaceborne Thermal Emission and Reflection Radiometer sensor (ASTER) on board the Terra satellite (<a href="/img/revistas/geoint/v53n3/a5f1.jpg" target="_blank">Figure 1</a>). The four bands of the IKONOS sensor (<a href="/img/revistas/geoint/v53n3/a5f2.jpg" target="_blank">Figure 2</a>) form the other image. <a href="#a5t1">Table 1</a> provides basic parameters of these images.</font></p>  	    <p align="center"><font face="verdana" size="2"><a name="a5t1"></a>    <br> 	<img src="/img/revistas/geoint/v53n3/a5t1.jpg"></font></p>  	    <p align="justify"><font face="verdana" size="2">The high density of streets, avenues and buildings of the city results in a large number of edges per unit area. Such edges are of varying shape and size. Therefore, the multiple edges formed by streets, avenues, causeways and building blocks are a good test for our method.</font></p>  	    <p align="justify"><font face="verdana" size="2">These images were not orthorectified, since orthorectification has no implications for our method. However, a rectification with a first&#45;order polynomial was applied in order to relate pixel coordinates to geographic coordinates.</font></p>  	    <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="justify"><font face="verdana" size="2"><b>Methods</b></font></p>  	    <p align="justify"><font face="verdana" size="2">In a multispectral image, the information&#45;content of edges varies through the bands. 
In order to extract the information of edges from the multispectral image, we require a transformation applicable to the image as a whole.</font></p>  	    <p align="justify"><font face="verdana" size="2">In addition to the original bands, principal components analysis was performed on the two images. The first principal component of both images is used to apply widely used edge operators (Pratt, 2001; Bowyer <i>et al</i>., 2001). These operators are used for the sake of comparison with the methods developed in our work. The first principal component accumulates most of the variance of the images: 78.50% for the ASTER image, and 83.09% for the IKONOS image. Therefore, we applied widely used edge operators to the first principal component.</font></p>  	    ]]></body>
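The projection onto the first principal component described above can be sketched in a few lines of NumPy. This is an illustrative implementation only, assuming the multispectral image is stored as an (M, N, bands) array; the function name and layout are ours, not the authors' code:

```python
import numpy as np

def first_principal_component(image):
    """Project an (M, N, bands) image onto its first principal component.

    Returns the component image and the fraction of total variance it
    explains (the paper reports 78.50% for ASTER and 83.09% for IKONOS).
    """
    m, n, bands = image.shape
    pixels = image.reshape(-1, bands).astype(float)
    pixels -= pixels.mean(axis=0)             # center each band
    cov = np.cov(pixels, rowvar=False)        # bands x bands covariance
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    pc1_axis = eigvecs[:, -1]                 # direction of largest variance
    explained = eigvals[-1] / eigvals.sum()
    return (pixels @ pc1_axis).reshape(m, n), explained
```

The widely used scalar edge operators can then be applied to the returned single-band image.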
<body><![CDATA[<blockquote> 	      <p align="justify"><font face="verdana" size="2"><i>Vector field of a multispectral image</i></font></p> </blockquote>     <p align="justify"><font face="verdana" size="2">The modeling of an &#951;&#45;dimensional multispectral image as a vector field is addressed in section 3.1 (Lira and Rodr&iacute;guez, 2006). This field holds the same dimension as the original multispectral image. The field is composed of the set of pixels considered as &#951;&#45;dimensional vectors.</font></p>  	    <p align="justify"><font face="verdana" size="2">In Section 3.2, we determine maximum difference vectors in a moving window that systematically scans the entire image. This maximum difference produces an &#951;&#45;dimensional image where edges are enhanced.</font></p>  	    <p align="justify"><font face="verdana" size="2">In Section 3.3, we derive an &#951;&#45;dimensional Laplacian using the Fourier transform. To do so, we first consider the Fourier transform of the second partial derivatives of an image (Bracewell, 2003). With this result, we produce the Laplacian of an image. Finally, we generalize the Laplacian for multispectral images composed of &#951; bands. A flow chart summarizes our methods, from the modeling of a multispectral image as a vector field, to the enhancement of edges through the bands of the image (<a href="#f3">Figure 3</a>).</font></p> 	    <p align="center"><a name="f3"></a></p>      <p align="center"><font face="verdana" size="2"><img src="/img/revistas/geoint/v53n3/a5f3.jpg"></font></p>  	    <p align="justify"><font face="verdana" size="2">Let <i>L</i> &equiv; {1, . . . M} &times; {1, . . . N} be a rectangular discrete lattice. This lattice is virtually overlaid on the scene. On each node of L, a resolution cell named the instantaneous field of view (IFOV) is located. For each IFOV, an &#951;&#45;dimensional vector {b<sub>1</sub>,b<sub>2</sub>, . . . b<sub>&#951;</sub>} is derived by means of a multispectral sensor set. 
The vector {b<sub>1</sub>,b<sub>2</sub>, . . . b<sub>&#951;</sub>} represents the average spectral properties of an IFOV of the scene. This vector is named a picture element (pixel) of a multi&#45;spectral image. In other words, the IFOV is a physical area in the scene, while the pixel is the digital number (DN) in the image. Let the multi&#45;spectral image <b>g</b> = {g<sub>i</sub>} be formed by the group of pixels according to the following set g<sub>i</sub> = {b<sub>j</sub>(k,l)}<sub>i</sub>, &forall; i, where i belongs to the set {1,2, . . . &#951;} representing the collection of bands of the multispectral image.</font></p>  	    <p align="justify"><font face="verdana" size="2">On the other hand, let X<sub>i</sub> be the set</font></p>  	    <p align="center"><font face="verdana" size="2"><img src="/img/revistas/geoint/v53n3/a5for1.jpg"></font></p>  	    ]]></body>
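As a concrete illustration of this pixel-as-vector model, the mapping of a multispectral image onto its vector field can be sketched with NumPy. This is a sketch under the assumption that the image is stored as an (M, N, η) band-interleaved array; it is not the authors' implementation:

```python
import numpy as np

def image_to_vector_field(g):
    """Map an M x N image of eta bands onto its vector field U.

    Each node (k, l) of the lattice L contributes one eta-dimensional
    pixel vector (b1, b2, ..., b_eta): the values of that pixel across
    the bands of the multispectral image.
    """
    m, n, eta = g.shape
    # One row per lattice node; row order follows the lattice scan.
    return g.reshape(m * n, eta)
```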
<body><![CDATA[<p align="justify"><font face="verdana" size="2">Where m = 8 in most cases. The cartesian product X<sup>&#951;</sup> = X<sub>1</sub> &times; X<sub>2</sub> &times; . . . X<sub>&#951;</sub> defines the set of the ordered &#951;&#45;tuples (x<sub>1</sub>,x<sub>2</sub>, . . . x<sub>&#951;</sub>). We equate x<sub>i</sub> = b<sub>i</sub>, therefore (b<sub>1</sub>,b<sub>2</sub>, . . . b<sub>&#951;</sub>) is an &#951;&#45;tuple in this cartesian coordinate system. To every &#951;&#45;tuple (b<sub>1</sub>,b<sub>2</sub>, . . . b<sub>&#951;</sub>), a vector <b>u</b> is associated: <b>u</b>(x<sub>1</sub>,x<sub>2</sub>, . . . x<sub>&#951;</sub>) &lArr; (b<sub>1</sub>,b<sub>2</sub>, . . . b<sub>&#951;</sub>).</font></p>  	    <p align="justify"><font face="verdana" size="2">The set of vectors {<b>u</b>(x<sub>1</sub>,x<sub>2</sub>, . . . x<sub>&#951;</sub>)} is the result of the mapping of the multispectral image onto a vector field. We note that not every &#951;&#45;tuple (x<sub>1</sub>,x<sub>2</sub>, . . . x<sub>&#951;</sub>) has a vector of the vector field associated with it, and an &#951;&#45;tuple (x<sub>1</sub>,x<sub>2</sub>, . . . x<sub>&#951;</sub>) may have more than one vector of the vector field associated with it. Hence, the vector field associated with the multispectral image is the set of vectors <b>U</b> = {<b>u</b>(x<sub>1</sub>,x<sub>2</sub>, . . . x<sub>&#951;</sub>)}.</font></p>  	    <blockquote> 	      <p align="justify"><font face="verdana" size="2"><i>Multispectral gradient</i></font></p> </blockquote>     <p align="justify"><font face="verdana" size="2">Once the multispectral image is modeled as a vector field, we may proceed to define a multispectral edge. Let v<sub>c</sub> be a moving window that systematically scans, pixel by pixel, the whole image. The window v<sub>c</sub> is of size 3&times;3 pixels. 
Let D(<b>g</b>) be the domain of the image, thus the condition that v<sub>c</sub> &sub; D(<b>g</b>) determines that the border pixels of the image cannot be processed.</font></p>  	    <p align="justify"><font face="verdana" size="2">Let the vector <i><b>p</b><sub>c</sub></i> be the central pixel of such window and let <i><b>p</b><sub>1</sub>, <b>p</b><sub>2</sub>, . . . <b>p</b><sub>8</sub></i> be the neighboring pixels of <i><b>p</b><sub>c</sub></i>. The set of pixels {<i><b>p</b><sub>i</sub></i>}, i = 1, 2, . . . 8 is the 8&#45;connected neighbor set of <i><b>p</b><sub>c</sub></i>. We obtain the vector difference of the central pixel with all neighboring pixels of the window</font></p>  	    <p align="center"><font face="verdana" size="2"><img src="/img/revistas/geoint/v53n3/a5for2.jpg"></font></p>  	    <p align="justify"><font face="verdana" size="2">The vector of the window that yields the largest difference is written in an output multispectral image named <b>f</b></font></p>  	    <p align="center"><font face="verdana" size="2"><img src="/img/revistas/geoint/v53n3/a5for3.jpg"></font></p>  	    <p align="justify"><font face="verdana" size="2">Equation (3) means that the central pixel <i><b>p</b><sub>c</sub></i> of the moving window is replaced by the neighboring pixel <i><b>p</b><sub>i</sub></i> with the largest Euclidean distance to the central pixel.</font></p>  	    ]]></body>
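The window operation just described can be sketched as follows: a direct, unoptimized NumPy implementation of the multispectral gradient, in which each interior pixel vector is replaced by its farthest 8-connected neighbor and border pixels are left unprocessed, as noted above. This is illustrative code, not the authors' implementation:

```python
import numpy as np

def multispectral_gradient(g):
    """Replace each interior pixel vector p_c by the 8-connected neighbor
    p_i at the largest Euclidean distance from p_c across all bands."""
    gf = np.asarray(g, dtype=float)
    m, n, _ = gf.shape
    f = gf.copy()  # border pixels of the image cannot be processed
    offsets = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
               if (di, dj) != (0, 0)]
    for i in range(1, m - 1):
        for j in range(1, n - 1):
            pc = gf[i, j]
            # Euclidean distances |p_i - p_c| to the 8 neighbors
            dists = [np.linalg.norm(gf[i + di, j + dj] - pc)
                     for di, dj in offsets]
            di, dj = offsets[int(np.argmax(dists))]
            f[i, j] = gf[i + di, j + dj]
    return f
```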
<body><![CDATA[<p align="justify"><font face="verdana" size="2">The vector difference is calculated employing the Euclidean distance</font></p>  	    <p align="center"><font face="verdana" size="2"><img src="/img/revistas/geoint/v53n3/a5for4.jpg"></font></p>  	    <p align="justify"><font face="verdana" size="2">The image <b>f</b> contains the edge information across the bands of the original image <b>g</b>. Image <b>f</b> is dubbed the multispectral gradient (<a href="#f3">Figure 3</a>).</font></p>  	    <p align="justify"><font face="verdana" size="2">The average of the bands of the output edge image <b>f</b> is calculated in order to concentrate the information in a single image. Principal components analysis may be applied as well to the output image <b>f</b> to concentrate in the first component the edge content of the multispectral&#45;edge image. We use the average of the output image bands.</font></p>  	    <blockquote> 	      <p align="justify"><font face="verdana" size="2"><i>Derivation of</i> &#951;&#45;<i>dimensional Laplacian</i></font></p> </blockquote>     <p align="justify"><font face="verdana" size="2">The Laplacian is widely used as an edge operator (Pratt, 2001). Nevertheless, the usual Laplacian is applied to each band of a multispectral image separately. A multispectral Laplacian is needed to extract edge content from the ensemble of the bands as a whole.</font></p>  	    <p align="justify"><font face="verdana" size="2">We begin with the consideration of the Laplacian in continuous space, and then we write the result in discrete space. Let g(<i>x,y</i>), (<i>x,y</i>) &isin;&#8477;<sup>2</sup>, be a function that describes a single band image, where (<i>x,y</i>) are the coordinates of a pixel in this image. 
We initiate this step with the equations</font></p>  	    <p align="center"><font face="verdana" size="2"><img src="/img/revistas/geoint/v53n3/a5for5_6.jpg"></font></p>  	    <p align="justify"><font face="verdana" size="2">A detailed explanation of the derivation of equations (5) and (6) is provided in Lira (2010). In equations (5) and (6), &#8497; stands for the Fourier transform, <i>G</i>(<i>&#969;<sub>x</sub>,&#969;<sub>y</sub></i>) is the Fourier transform of the image g(<i>x,y</i>) and j is the complex number &radic;&#45;1. In equations (5) and (6), (<i>x,y</i>) are spatial coordinates in the image domain, whereas (<i>&#969;<sub>x</sub></i>, <i>&#969;<sub>y</sub></i>) are spatial frequencies in the Fourier domain.</font></p>  	    ]]></body>
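Equations (5) and (6) state that differentiation in the image domain becomes multiplication by j2&#960;&#969; in the Fourier domain, so a second derivative becomes multiplication by &#45;(2&#960;&#969;)&#178;. This identity can be checked numerically on a smooth periodic signal; the following is a one-dimensional sketch under `numpy.fft` conventions, not the authors' code:

```python
import numpy as np

n = 256
x = np.arange(n) / n                      # unit-length periodic domain
g = np.sin(2 * np.pi * 3 * x)             # smooth single-band test signal

# Spatial frequencies omega_x, in cycles per unit length
omega_x = np.fft.fftfreq(n, d=1.0 / n)
G = np.fft.fft(g)                         # G(omega_x), the transform of g

# Differentiate twice in the Fourier domain:
# F[d^2 g / dx^2] = -(2*pi*omega_x)^2 G(omega_x)
d2g = np.fft.ifft(-(2 * np.pi * omega_x) ** 2 * G).real

# Exact second derivative of sin(2*pi*3*x), for comparison
analytic = -(2 * np.pi * 3) ** 2 * g
```

For this band-limited signal, `d2g` agrees with the analytic second derivative to numerical precision.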
<body><![CDATA[<p align="justify"><font face="verdana" size="2">From equations (5) and (6) we have the Fourier transform of the Laplacian</font></p>  	    <p align="center"><font face="verdana" size="2"><img src="/img/revistas/geoint/v53n3/a5for7.jpg"></font></p>  	    <p align="justify"><font face="verdana" size="2">Equation (7) is dubbed the scalar Laplacian.</font></p>  	    <p align="justify"><font face="verdana" size="2">On the grounds of the result given by equation (7), we may generalize the Fourier transform of the Laplacian to &#951; dimensions. Let <b>f</b>(<b>r</b>) &isin;&#8477;<sup>&#951;</sup> be a vector valued function that describes a multispectral image formed by &#951; bands. The vector <b>f</b>(<b>r</b>) = {f<sub>1</sub>(<i>x,y</i>), f<sub>2</sub>(<i>x,y</i>), . . . f<sub>&#951;</sub>(<i>x,y</i>)} represents the values of a pixel through the bands, i.e., the image value at a pixel location <b>r</b> = (<i>x,y</i>) &isin; &#8477;<sup>2</sup>. The function <b>f</b>(<b>r</b>) is a vector field that describes the multispectral image according to the guidelines described in section 3.1 (Lira and Rodriguez, 2006). The Fourier transform of <b>f</b>(<b>r</b>) is then (Bracewell, 2003; Ebling and Scheuermann, 2005)</font></p>  	    <p align="center"><font face="verdana" size="2"><img src="/img/revistas/geoint/v53n3/a5for8.jpg"></font></p>  	    <p align="justify"><font face="verdana" size="2">The Fourier transform of the vector field <b>f</b>(<b>r</b>) produces a vector valued function in Fourier space, namely, <b>F</b>(<i>&#969;</i>) = &#8497;&#91;<b>f</b>(<b>r</b>)&#93;. The vector <b>F</b>(<i>&#969;</i>) = {F<sub>1</sub>(<i>&#969;</i><sub>1</sub>, <i>&#969;</i><sub>2</sub>), F<sub>2</sub>(<i>&#969;</i><sub>1</sub>, <i>&#969;</i><sub>2</sub>), . . . F<sub>&#951;</sub>(<i>&#969;</i><sub>1</sub>, <i>&#969;</i><sub>2</sub>)} represents the spatial frequency content of the image at the location <i>&#969;</i> = (<i>&#969;</i><sub>1</sub>,<i>&#969;</i><sub>2</sub>). 
In &#8477;<sup>2</sup>, the coordinates in the Fourier domain (<i>&#969;</i><sub>1</sub>, <i>&#969;</i><sub>2</sub>), and in the spatial domain (<i>x, y</i>), cover the same range, 1 &le; (<i>x, &#969;</i><sub>1</sub>) &le; M and 1 &le; (<i>y, &#969;</i><sub>2</sub>) &le; N, but their meaning is different: (<i>x, y</i>) represents spatial coordinates, while (<i>&#969;</i><sub>1</sub>, <i>&#969;</i><sub>2</sub>) represents spatial frequencies.</font></p>  	    <p align="justify"><font face="verdana" size="2">In discrete space <i>&#8484;<sup>2</sup></i>, the coordinates in the Fourier domain <b><i>k</i></b> = (<i>k<sub>1</sub>, k<sub>2</sub></i>), and in the spatial domain <b><i>q</i></b> = (<i>m, n</i>), cover the same range, 1 &le; (<i>m, k<sub>1</sub></i>) &le; M and 1 &le; (<i>n, k<sub>2</sub></i>) &le; N. If <b>f</b>(<b>q</b>) &isin; <i>&#8484;<sup>&#951;</sup></i>, where (<i>m, n; k<sub>1</sub>, k<sub>2</sub></i>) &isin; <i>&#8484;<sup>2</sup></i>, then the discrete version of equation (8) is</font></p>  	    <p align="center"><font face="verdana" size="2"><img src="/img/revistas/geoint/v53n3/a5for9.jpg"></font></p>  	    <p align="justify"><font face="verdana" size="2">Where <b>f</b>(<b>q</b>) = {f<sub>1</sub>(m, n), f<sub>2</sub>(m, n), . . . f<sub>&#951;</sub>(m, n)} and <b>F(k) </b>= {F<sub>1</sub>(<i>k<sub>1</sub>, k<sub>2</sub></i>), F<sub>2</sub>(<i>k<sub>1</sub>, k<sub>2</sub></i>), . . . F<sub>&#951;</sub>(<i>k<sub>1</sub>, k<sub>2</sub></i>)}. The Laplacian in <i>&#8484;<sup>&#951;</sup></i> of the vector field <b>f</b>(<b>q</b>) is therefore</font></p>  	    <p align="center"><font face="verdana" size="2"><img src="/img/revistas/geoint/v53n3/a5for10.jpg"></font></p>  	    ]]></body>
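The discrete recipe behind equation (10) — forward transform of each band, multiplication by &#45;(2&#960;)&#178;|<b>k</b>|&#178; in the Fourier domain, inverse transform — can be sketched with NumPy. This is an illustrative implementation under `numpy.fft` frequency conventions, not the authors' code:

```python
import numpy as np

def multispectral_laplacian(f):
    """Multispectral Laplacian of an M x N, eta-band image: each band is
    transformed, multiplied by -(2*pi)^2 |k|^2 in the Fourier domain, and
    transformed back."""
    f = np.asarray(f, dtype=float)
    m, n, eta = f.shape
    k1 = np.fft.fftfreq(m).reshape(m, 1)   # frequencies along rows
    k2 = np.fft.fftfreq(n).reshape(1, n)   # frequencies along columns
    mult = -(2 * np.pi) ** 2 * (k1 ** 2 + k2 ** 2)
    out = np.empty_like(f)
    for b in range(eta):                   # same multiplier for every band
        F = np.fft.fft2(f[:, :, b])
        out[:, :, b] = np.fft.ifft2(mult * F).real
    return out
```

A constant image has zero Laplacian, and a pure sinusoidal band is an eigenfunction of the operator, which gives a quick sanity check of the implementation.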
<body><![CDATA[<p align="justify"><font face="verdana" size="2">Where <b>F</b>(<b>k</b>) = &#8497;&#91;<b>f</b>(<b>q</b>)&#93;. This equation can be applied to a multispectral image to derive edge content through the bands. Note that equation (7) is a particular case of equation (10). Equation (10) is dubbed the multispectral Laplacian.</font></p>  	    <p align="justify"><font face="verdana" size="2">To calculate this multispectral Laplacian, we first obtain the Fourier transform of the vector field associated with the image to produce <b>F</b>(<b>k</b>). In Fourier space, we multiply the result by &#45;(2&#960;)<sup>2</sup>&#9474;<b>k</b>&#9474;<sup>2</sup> and apply the inverse Fourier transform to obtain the multispectral Laplacian (<a href="#f3">Figure 3</a>).</font></p>  	    <blockquote> 	      <p align="justify"><font face="verdana" size="2"><i>Evaluation of edges</i></font></p> </blockquote>     <p align="justify"><font face="verdana" size="2">The criteria to evaluate the edge enhancement resulting from our methods and from widely known edge operators are divided into qualitative and quantitative criteria. The edges produced by the urban network of streets, avenues, buildings, idle lots and parks occur at random directions in the images. Due to this randomness, a profile of pixel values along any direction is representative of the edge content of the images. We considered pixel&#45;value profiles along several directions. We analyzed such profiles for widely known edge operators and for the outputs of our methods. We present the plots of two profiles for each sensor, and we include two graphs that condense the behavior of ten profiles for each sensor: ASTER and IKONOS. In total, we analyzed twenty profiles. From these plots, we derive a qualitative and quantitative evaluation as described below. 
Black dots in <a href="/img/revistas/geoint/v53n3/a5f5.jpg" target="_blank">figures 5</a>, <a href="/img/revistas/geoint/v53n3/a5f6.jpg" target="_blank">6</a>, <a href="/img/revistas/geoint/v53n3/a5f7.jpg" target="_blank">7</a>, and <a href="/img/revistas/geoint/v53n3/a5f8.jpg" target="_blank">8</a> indicate the lines from which the plots were extracted. <a href="/img/revistas/geoint/v53n3/a5f11.jpg" target="_blank">Figures 11</a>, <a href="/img/revistas/geoint/v53n3/a5f12.jpg" target="_blank">12</a>, <a href="/img/revistas/geoint/v53n3/a5f13.jpg" target="_blank">13</a>, and <a href="/img/revistas/geoint/v53n3/a5f14.jpg" target="_blank">14</a> indicate the line, column and angle of each profile location.</font></p>  	    <blockquote> 	      <p align="justify"><font face="verdana" size="2"><i>Qualitative evaluation</i></font></p> </blockquote>     <p align="justify"><font face="verdana" size="2">We display the edge&#45;enhanced images on a high&#45;resolution monitor, together with the first principal component of both images, and carry out a detailed visual inspection. On the grounds of previously published work on qualitative image evaluation (Escalante&#45;Ram&iacute;rez and Lira, 1996), each edge&#45;enhanced image was rated according to the following qualitative criteria: general quality, sharpness, contrast, and noisiness. In addition, we evaluated the number of gray levels and the definition of edges. Since the first principal component of the images accumulates most of the variance, we compare the edge enhancement with this component. The aim of this comparison is to evaluate, according to the above criteria, the degree of edge enhancement with respect to the original edge information content of the images.</font></p>  	    <blockquote> 	      <p align="justify"><font face="verdana" size="2"><i>Quantitative evaluation</i></font></p> </blockquote>     ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2">We use several indicators to perform a quantitative evaluation (<a href="/img/revistas/geoint/v53n3/a5f4.jpg" target="_blank">Figure 4</a>): Slope &#8211; the steeper the edge, the better defined its slope. Widening &#8211; the closer the width is to that of the original edge, the better. Spatial location &#8211; the closer the enhanced edge is to its original location, the better. Contrast &#8211; the higher the contrast, the better.</font></p>  	    <p align="justify"><font face="verdana" size="2">A computer code was developed for the quantitative evaluation. An image is displayed on a high&#45;resolution monitor. With the help of a cursor, a line of the image is selected. The profile of pixel values along that line is shown in a plot. A profile is selected when it contains one of the edge models given in <a href="/img/revistas/geoint/v53n3/a5f4.jpg" target="_blank">figure 4</a>. A spline is fitted to the selected edge model. From this spline, the parameters indicated in the models of <a href="/img/revistas/geoint/v53n3/a5f4.jpg" target="_blank">figure 4</a> are calculated. There are many types of edges in the images. To obtain a coherent quantitative evaluation of edges, we considered three types that occur frequently in the images. <a href="/img/revistas/geoint/v53n3/a5f4.jpg" target="_blank">Figure 4</a> shows a schematic diagram of these types, where the above indicators are depicted. We performed this measurement for an ensemble of edges. <a href="/img/revistas/geoint/v53n3/a5f4.jpg" target="_blank">Figure 4(c)</a> shows a profile that occurs only with the Laplacian and Kirsch operators. The computation of the indicators is as follows.</font></p>  	    <p align="justify"><font face="verdana" size="2">Slope &#8211; we measure the slope as the angle of the borders of an edge with respect to the vertical direction. Widening &#8211; we measure the maximum width of an edge in pixels. 
Spatial location &#8211; we identify the spatial coordinate of the center of an edge. Contrast &#8211; we measure the contrast as the difference between the maximum and minimum values of an edge.</font></p>  	    <p align="justify"><font face="verdana" size="2">To complement our evaluation of edge enhancement, we developed a computer code for the Canny and Cumani operators (Koschan and Abidi, 2005; Evans and Liu, 2006). The computer code was designed following the method explained in the article by Koschan and Abidi (2005). Two RGB false color composites were produced using the first three bands of the ASTER and IKONOS images. The Canny and Cumani operators were applied to these images. Such operators consist of a two&#45;step procedure: the first step is the enhancement of the edges; the second step is the detection of the edges by means of a threshold operation. We present results only for the enhancement of the edges. Both operators, Canny and Cumani, carry a number of parameters that must be determined by heuristic procedures; there is no analytical method to estimate optimal values for such parameters. In contrast, our methods are parameter&#45;free.</font></p>  	    <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="justify"><font face="verdana" size="2"><b>Results and discussion</b></font></p>  	    <blockquote> 	      <p align="justify"><font face="verdana" size="2"><i>Results</i></font></p> </blockquote>     <p align="justify"><font face="verdana" size="2">The algorithms needed to apply the methods described in the previous section were developed in the Delphi language, running under Windows 7 on a PC. Several edge products are presented in our work. They are organized in two groups: (a) edges from widely used edge operators, and (b) edges derived from the methods developed in our work. These groups are analyzed below. 
To facilitate the comparison of these results, four mosaics of selected regions of the images were prepared. These mosaics include the multispectral edges derived from our methods and the results of the above&#45;mentioned edge operators. Boxes on <a href="/img/revistas/geoint/v53n3/a5f1.jpg" target="_blank">figures 1</a> and <a href="/img/revistas/geoint/v53n3/a5f2.jpg" target="_blank">2</a> show the areas from which these mosaics were extracted. The mosaics prepared from the boxes on the left of <a href="/img/revistas/geoint/v53n3/a5f1.jpg" target="_blank">figures 1</a> and <a href="/img/revistas/geoint/v53n3/a5f2.jpg" target="_blank">2</a> are dubbed mosaic A, and those on the right are dubbed mosaic B.</font></p>  	    <p align="justify"><font face="verdana" size="2">A set of profiles was produced to evaluate the edge&#45;enhancement performance of the methods compared in this research. A profile from the first principal component of the original image is compared against the profiles of all edge&#45;enhancement methods considered in our work.</font></p>  	    ]]></body>
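The four indicators defined above can be sketched for a single ramp edge in a one-dimensional profile. This is a hypothetical reconstruction, not the authors' Delphi evaluation code: the function `edge_indicators`, the plateau-based width estimate, and the gray-level-per-pixel slope convention are our assumptions.

```python
import numpy as np

def edge_indicators(profile):
    """Estimate the four edge indicators from a 1-D pixel-value profile
    containing a single ramp edge between two flat plateaus.

    Returns (slope_deg, width_px, center_px, contrast). slope_deg is the
    angle of the edge border with respect to the vertical: a sharper edge
    gives a smaller angle.
    """
    profile = np.asarray(profile, dtype=float)
    lo, hi = profile.min(), profile.max()
    contrast = hi - lo                       # maximum value minus minimum value
    # Samples strictly between the two plateaus form the body of the edge.
    inside = np.flatnonzero((profile > lo) & (profile < hi))
    if inside.size == 0:                     # ideal step edge: width 1, vertical border
        step = int(np.flatnonzero(np.diff(profile))[0])
        return 0.0, 1, step + 0.5, contrast
    width = inside.size + 1                  # maximum width of the edge, in pixels
    center = float(inside.mean())            # spatial location of the edge center
    rise_per_pixel = contrast / width        # mean gray-level change per pixel
    slope_deg = float(np.degrees(np.arctan2(1.0, rise_per_pixel)))
    return slope_deg, width, center, contrast
```

Comparing these numbers for an enhanced profile against the same edge in the first principal component gives the relative errors condensed later in the evaluation.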
<body><![CDATA[<p align="justify"><font face="verdana" size="2">The mosaics are used to perform the qualitative evaluation, and the profiles are used to develop the quantitative evaluation, as discussed in the previous section. The above&#45;mentioned groups show the following results.</font></p>         <p align="justify"><font face="verdana" size="2">1) Edges from vector differences in a moving window (multispectral gradient).</font></p>     <p align="justify"><font face="verdana" size="2">As explained in Section 3.1, a multispectral edge image is obtained. This multispectral image carries the same number of bands as the input image. The average of the bands of this multispectral edge image was used for the quantitative evaluation. <a href="/img/revistas/geoint/v53n3/a5f5.jpg" target="_blank">Figures 5</a> and <a href="/img/revistas/geoint/v53n3/a5f6.jpg" target="_blank">6</a> show the enhancement of edges of the ASTER image resulting from this procedure. <a href="/img/revistas/geoint/v53n3/a5f7.jpg" target="_blank">Figures 7</a> and <a href="/img/revistas/geoint/v53n3/a5f8.jpg" target="_blank">8</a> depict the enhancement of edges of the IKONOS image. For visual purposes, a linear saturation enhancement was applied to <a href="/img/revistas/geoint/v53n3/a5f5.jpg" target="_blank">figures 5</a> &#8211; <a href="/img/revistas/geoint/v53n3/a5f8.jpg" target="_blank">8</a>. 
The quantitative evaluation was performed on the original results.</font></p>     <p align="justify"><font face="verdana" size="2">2) Edges from the multispectral Laplacian (Section 3.2).</font></p>     <p align="justify"><font face="verdana" size="2">The multispectral Laplacian derived from equation (10) was applied to both images, ASTER (<a href="/img/revistas/geoint/v53n3/a5f5.jpg" target="_blank">figures 5</a> and <a href="/img/revistas/geoint/v53n3/a5f6.jpg" target="_blank">6</a>) and IKONOS (<a href="/img/revistas/geoint/v53n3/a5f7.jpg" target="_blank">figures 7</a> and <a href="/img/revistas/geoint/v53n3/a5f8.jpg" target="_blank">8</a>).</font></p>     <p align="justify"><font face="verdana" size="2">3) Edges from the first principal component of the images.</font></p>     <p align="justify"><font face="verdana" size="2">The following edge operators were applied to the first principal component of the ASTER and IKONOS images: Sobel, Frei&#45;Chen, Kirsch, scalar Laplacian, Prewitt and Roberts. Results are shown in <a href="/img/revistas/geoint/v53n3/a5f5.jpg" target="_blank">figures 5</a> and <a href="/img/revistas/geoint/v53n3/a5f6.jpg" target="_blank">6</a> for the ASTER image, and in <a href="/img/revistas/geoint/v53n3/a5f7.jpg" target="_blank">figures 7</a> and <a href="/img/revistas/geoint/v53n3/a5f8.jpg" target="_blank">8</a> for the IKONOS image.</font></p>     <p align="justify"><font face="verdana" size="2">4) Edges from color operators.</font></p>     <p align="justify"><font face="verdana" size="2">Two mosaics were prepared to show the results of the Canny and Cumani operators (<a href="/img/revistas/geoint/v53n3/a5f9.jpg" target="_blank">Figure 9</a>). We applied a histogram saturation transformation to the images of the mosaics for visual appreciation purposes. An inspection of the results shows an enhancement similar to that of the Sobel operator (<a href="/img/revistas/geoint/v53n3/a5f6.jpg" target="_blank">Figure 6</a>). 
There are two limitations to the Canny and Cumani operators. The first is that they carry a number of parameters that need to be defined by experimental procedures. The second is that they work for RGB color images only; no generalization exists for an arbitrary number of bands of a multispectral image.</font></p>  	    <p align="justify"><font face="verdana" size="2">The profiles for all edge&#45;enhancement methods are shown in <a href="/img/revistas/geoint/v53n3/a5f11.jpg" target="_blank">figures 11</a> and <a href="/img/revistas/geoint/v53n3/a5f12.jpg" target="_blank">12</a> for the ASTER mosaics and in <a href="/img/revistas/geoint/v53n3/a5f13.jpg" target="_blank">figures 13</a> and <a href="/img/revistas/geoint/v53n3/a5f14.jpg" target="_blank">14</a> for the IKONOS mosaics.</font></p>  	    ]]></body>
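The multispectral gradient of item 1) can be sketched as follows: for each pixel, the difference vector to each of its eight 3&times;3 neighbors is formed, and the difference with the largest norm, i.e. the maximum change of spectral response in any direction, is kept. This is an illustrative NumPy sketch under stated assumptions (the published implementation is in Delphi; the (bands, M, N) layout, edge padding, and Euclidean norm are our choices).

```python
import numpy as np

def multispectral_gradient(image):
    """Multispectral gradient by vector differences in a 3x3 moving window.

    `image` is a (bands, M, N) array. For every pixel, the difference
    vector between the central pixel and each of its eight neighbours is
    computed; the difference with the maximum Euclidean norm is retained,
    so each output pixel represents the maximum change of spectral
    response in any direction. The output keeps the same number of bands.
    """
    bands, M, N = image.shape
    padded = np.pad(image, ((0, 0), (1, 1), (1, 1)), mode="edge")
    best = np.zeros_like(image, dtype=float)
    best_norm = np.full((M, N), -1.0)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            neigh = padded[:, 1 + di:1 + di + M, 1 + dj:1 + dj + N]
            diff = neigh.astype(float) - image   # vector difference per pixel
            norm = np.sqrt((diff**2).sum(axis=0))
            mask = norm > best_norm              # keep the max-change direction
            best[:, mask] = diff[:, mask]
            best_norm[mask] = norm[mask]
    return best
```

Averaging the bands of the returned array then gives the single-band product used for the quantitative evaluation.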
<body><![CDATA[<p align="justify"><font face="verdana" size="2">To complement the procedure of profile extraction (<a href="/img/revistas/geoint/v53n3/a5f11.jpg" target="_blank">Figures 11</a> &#8211; <a href="/img/revistas/geoint/v53n3/a5f14.jpg" target="_blank">14</a>), a mosaic of strip&#45;images was prepared (<a href="/img/revistas/geoint/v53n3/a5f10.jpg" target="_blank">Figure 10</a>). Each strip consists of a sub&#45;image 21 pixels long by 11 pixels wide. The dots indicate the line of pixels related to the profile. The mosaic is formed by six strips, one for each image of <a href="/img/revistas/geoint/v53n3/a5f6.jpg" target="_blank">figure 6</a>. We present one mosaic of strips.</font></p>  	    <p align="justify"><font face="verdana" size="2">5) The indicators (<a href="/img/revistas/geoint/v53n3/a5f4.jpg" target="_blank">Figure 4</a>) described in the quantitative evaluation were measured for twenty profiles: ten for the ASTER image and ten for the IKONOS image. The measurement was carried out for the whole ensemble of edge operators considered in our research, including the first principal component of the ASTER and IKONOS images. The value of each indicator was compared with the value from the original profile extracted from the first principal component. This comparison was expressed as a relative error percentage and condensed into a single graph. The relative error percentage is the difference between the value of an indicator in an edge&#45;enhanced image (I<sub>e</sub>) and its value in the first principal component (I<sub>cp</sub>), normalized by I<sub>cp</sub>, that is, 100(I<sub>e</sub> &#8722; I<sub>cp</sub>)/I<sub>cp</sub>. <a href="/img/revistas/geoint/v53n3/a5f15.jpg" target="_blank">Figure 15</a> shows the graph that summarizes the quantitative evaluation of the profiles. For the ASTER image, <a href="/img/revistas/geoint/v53n3/a5f15.jpg" target="_blank">figure 15(a)</a> depicts the relative error percentage with respect to the original profile in the first principal component. 
<a href="/img/revistas/geoint/v53n3/a5f15.jpg" target="_blank">Figure 15(b)</a> show results for IKONOS image. Angles q1 and q2 are not included in <a href="/img/revistas/geoint/v53n3/a5f15.jpg" target="_blank">figure 15</a> for multispectral Laplacian and for Kirsch operators since, as explained above, the profile of <a href="/img/revistas/geoint/v53n3/a5f4.jpg" target="_blank">figure 4(c)</a> does not occur in the original image. Such operators introduce an inversion of contrast described in <a href="/img/revistas/geoint/v53n3/a5f4.jpg" target="_blank">figure 4(c)</a>. None the less, the profile&#45;type of <a href="/img/revistas/geoint/v53n3/a5f4.jpg" target="_blank">figure 4(c)</a> was compared among multispectral Laplacian and Kirsh operators. The contrast for all operators is presented in <a href="/img/revistas/geoint/v53n3/a5f16.jpg" target="_blank">figure 16</a> for both sensors.</font></p>  	    <blockquote> 	      <p align="justify"><font face="verdana" size="2"><i>Discussion</i></font></p> </blockquote>     <p align="justify"><font face="verdana" size="2">Our discussion is divided in qualitative and quantitative evaluation as described in Section 3.4. The next two sections provide detailed description of such evaluation.</font></p>  	    <blockquote> 	      <p align="justify"><font face="verdana" size="2"><i>Qualitative discussion</i></font></p> </blockquote>     <p align="justify"><font face="verdana" size="2">A visual inspection of results, using the qualitative criteria described in Section 3.3, produces higher rating for our methods in comparison with any other edge&#45;enhancement method considered in our research. For such inspection, we employed <a href="/img/revistas/geoint/v53n3/a5f5.jpg" target="_blank">figures 5</a> to <a href="/img/revistas/geoint/v53n3/a5f8.jpg" target="_blank">8</a>. 
In particular, and on the grounds of such rating, we may list the following evaluation:</font></p>  	    <blockquote> 		    <p align="justify"><font face="verdana" size="2">(a) Edges from the Sobel, Frei&#45;Chen, Prewitt and Roberts operators are widened for both images. The images from these operators appear unsharpened. The contrast is high but has a noisy appearance. Thin lines, points and linear objects are blurred or obliterated.</font></p>  		    ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2">(b) Edges from the Kirsch operator show a relief&#45;like appearance of the urban building structure. This relief&#45;like appearance derives from the second derivative involved in the definition of this operator. Results look somewhat unsharpened and the contrast is relatively low. There is no noisy appearance. Thin edges, points and linear objects are blurred.</font></p>  		    <p align="justify"><font face="verdana" size="2">(c) Edges from the scalar Laplacian operator are less widened than those from the other operators. Results are sharp; thin edges, points and linear objects are preserved. However, the contrast is low. No noisy appearance is observed.</font></p>  		    <p align="justify"><font face="verdana" size="2">(d) The average of the bands of the image resulting from the multispectral gradient shows sharp edges with good contrast. The contrast is higher than that of the scalar gradient, and details such as thin lines and points are preserved. No noisy appearance is observed.</font></p>  		    <p align="justify"><font face="verdana" size="2">(e) The edge image resulting from the multispectral Laplacian shows a relief&#45;like appearance similar to that of the Kirsch operator, but with better definition. The relief appearance of the multispectral Laplacian is sharper, with better preservation of fine details than the scalar Laplacian. The contrast is high and the edges are sharp. 
No noise is observed.</font></p>  		    <p align="justify"><font face="verdana" size="2">(f) The sharpness of edges, the contrast, the noisiness, and the general quality of the multispectral gradient and multispectral Laplacian are better than those of the edge operators compared in our work (<a href="/img/revistas/geoint/v53n3/a5f15.jpg" target="_blank">Figure 15</a>).</font></p>           <p align="justify"><font face="verdana" size="2"><i>Quantitative discussion</i></font></p> 	</blockquote>      <p align="justify"><font face="verdana" size="2">As shown in <a href="/img/revistas/geoint/v53n3/a5f5.jpg" target="_blank">figures 5</a> &#8211; <a href="/img/revistas/geoint/v53n3/a5f8.jpg" target="_blank">8</a>, the dots on the border of the mosaics indicate the lines where pixel&#45;value profiles are extracted. These lines were selected to include sharp edges, such as the lines of the landing fields of the airport, and abrupt changes of pixel values due to constructions or particular features with high contrast. The profiles extracted from the first principal component are compared to the profiles extracted from the edge&#45;enhanced images. Many profiles were inspected at random. Profiles were selected when they contained at least one of the edge models of <a href="/img/revistas/geoint/v53n3/a5f4.jpg" target="_blank">figure 4</a>. We measured the above&#45;described indicators (<a href="/img/revistas/geoint/v53n3/a5f4.jpg" target="_blank">Figure 4</a>) for twenty selected edge profiles: those with the best definition. From these measurements, we derived the following conclusions.</font></p>      <p align="justify"><font face="verdana" size="2">Profiles of selected lines of the ASTER and IKONOS image&#45;mosaics show the following:</font></p>  	    <blockquote> 		    <p align="justify"><font face="verdana" size="2">(1) The Sobel, Frei&#45;Chen and Roberts operators widen and smooth the profiles of the original edges of the images.</font></p>  		    ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2">(2) The Kirsch and Prewitt operators widen and smooth the profiles, but to a lesser degree than the Sobel, Frei&#45;Chen and Roberts operators.</font></p>  		    <p align="justify"><font face="verdana" size="2">(3) The relief&#45;like appearance of the Kirsch images is due to the contrast inversion of some edges of the original profile.</font></p>  		    <p align="justify"><font face="verdana" size="2">(4) The scalar Laplacian operator neither widens nor smooths the edges, but it reduces their contrast.</font></p>  		    <p align="justify"><font face="verdana" size="2">(5) The multispectral gradient and the multispectral Laplacian neither widen nor smooth the edges and, in addition, increase their contrast.</font></p>  		    <p align="justify"><font face="verdana" size="2">(6) The multispectral gradient and the multispectral Laplacian show good contrast of the enhanced edges.</font></p>  		    <p align="justify"><font face="verdana" size="2">(7) The spatial location error is highest for the Roberts operator. 
The smallest error is for the scalar Laplacian.</font></p>  		    <p align="justify"><font face="verdana" size="2">(8) The steepness of the enhanced edges is less than that of the original edges for those operators that smooth and widen the edges.</font></p>  		    <p align="justify"><font face="verdana" size="2">(9) Overall, the multispectral gradient and the multispectral Laplacian show good contrast, steepness, spatial location and definition of edges with respect to the other operators.</font></p> 	</blockquote>  	    <p align="justify"><font face="verdana" size="2">Possible applications of multispectral edge enhancement are: identification of linear features in geologic environments, identification of ancient highways in archaeological studies, delineation of coastlines, studies of urban structures, delineation of water bodies and studies of coastal current patterns.</font></p>  	    <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>  	    ]]></body>
<body><![CDATA[<p align="justify"><font face="verdana" size="2"><b>Conclusions</b></font></p>  	    <p align="justify"><font face="verdana" size="2">Two methods to extract edges from multispectral images are designed and discussed in this research. These methods require modeling the original multispectral image as a vector field. Upon this vector field, we applied two vector operators to extract the edge content originally distributed through the bands of the images. These methods are parameter&#45;free. A qualitative and quantitative evaluation shows that our methods perform better than widely used edge&#45;enhancement procedures. The basic reason for this is that our methods extract the edge content distributed through the original bands of a multispectral image. Our methods are not computationally demanding: we use a fast Fourier transform to calculate the multispectral Laplacian, and the calculation of the multispectral gradient is fast since it involves vector differences in a moving window. On a PC under Windows 7, the computing time for a 2000 &#215; 2000 pixel multispectral image with 6 bands does not exceed three minutes. Our methods work for multispectral images with any number of bands; the limit is set by the available memory. A test on hyperspectral images has not yet been performed.</font></p>  	    <p align="justify"><font face="verdana" size="2">&nbsp;</font></p>  	    <p align="justify"><font face="verdana" size="2"><b>References</b></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2">Bigand A., Bouwmans T., Dubus J.P., 2001, Extraction of line segments from fuzzy images, <i>Pattern Recognition Letters</i>, 22, 13, 1405 &#150; 1418.    
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=3933630&pid=S0016-7169201400030000500001&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2">Bowyer K., Kranenburg C., Doughert S., 2001, Edge detector evaluation using empirical ROC curves, <i>Computer Vision and Image Understanding</i>, 84, 1, 77 &#150; 103.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=3933632&pid=S0016-7169201400030000500002&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2">Bracewell R.N., 2003, <i>Fourier Analysis and Imaging</i>, Kluwer Academic, New York.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=3933634&pid=S0016-7169201400030000500003&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    ]]></body>
<body><![CDATA[<!-- ref --><p align="justify"><font face="verdana" size="2">Chen X., Chen H., 2010, A novel color edge detection algorithm in RGB color space, <i>IEEE 10th International Conference on Signal Processing</i>, Beijing, China, pp. 793 &#45; 796.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=3933636&pid=S0016-7169201400030000500004&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2">Chu J., Miao J., Zhang G., Wang L., 2013, Edge and corner detection by color invariants, <i>Optics and Laser Technology</i>, 45, 756 &#45; 762.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=3933638&pid=S0016-7169201400030000500005&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2">Ebling J., Scheuermann J., 2005, Clifford Fourier transform on vector fields, <i>IEEE Transactions on Visualization and Computer Graphics</i>, 11, 469 &#150; 479.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=3933640&pid=S0016-7169201400030000500006&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2">Escalante&#45;Ram&iacute;rez B., Lira J., 1996, Performance&#45;oriented analysis and evaluation of modern adaptive speckle reduction techniques in SAR images, <i>Proceedings, SPIE's Visual Information Processing V</i>, Orlando, Florida, 2753, pp. 18&#45;27.    
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=3933642&pid=S0016-7169201400030000500007&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2">Evans A.N., Liu X.U., 2006, A morphological gradient approach to color edge detection, <i>IEEE Transactions on Image Processing</i>, 15, 1454 &#45; 1463.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=3933644&pid=S0016-7169201400030000500008&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    ]]></body>
<body><![CDATA[<!-- ref --><p align="justify"><font face="verdana" size="2">Fan J.P., Aref W.G., Hacid H.S., EL Maguimid A.K., 2001, An improved automatic isotropic color edge detection techniques, <i>Pattern Recognition Letters</i>, 22, 13, 1419 &#150; 1429.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=3933646&pid=S0016-7169201400030000500009&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2">Gao C.B., Zhou J.L., Hu J.R., Lang F.N., 2011, Edge detection of colour image based on quaternion fractional differential, <i>IET Image Processing</i>, 5, 261 &#45; 272.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=3933648&pid=S0016-7169201400030000500010&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2">Koschan A., Abidi M., 2005, Detection and classification of edges in color images, <i>IEEE Signal Processing</i>, 22, 1, 64 &#150; 73.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=3933650&pid=S0016-7169201400030000500011&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2">Li G.D., Min L.Q., Zang H.Y., 2008, Color edge detections based on Cellular Neural Network, <i>International Journal of Bifurcation and Chaos</i>, 18, 4, 1231 &#150; 1242.    
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=3933652&pid=S0016-7169201400030000500012&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2">Lira J., Rodr&iacute;guez A., 2006, A Divergence Operator to Quantify Texture From Multi&#45;spectral Satellite Images, <i>International Journal of Remote Sensing</i>, 27, 2683 &#150; 2702.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=3933654&pid=S0016-7169201400030000500013&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    ]]></body>
<body><![CDATA[<!-- ref --><p align="justify"><font face="verdana" size="2">Lira J., 2010, <i>Tratamiento Digital de Im&aacute;genes Multiespectrales</i>, <a href="http://www.lulu.com" target="_blank">www.lulu.com</a></font>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=3933656&pid=S0016-7169201400030000500014&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --><!-- ref --><p align="justify"><font face="verdana" size="2">Nezhadarya E., Kreidieh R., 2011, A new scheme for robust gradient vector estimation on color image, <i>IEEE Transactions on Image Processing</i>, 20, 2211 &#45; 2220.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=3933657&pid=S0016-7169201400030000500015&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2">Pratt W.K., 2001, <i>Digital Image Processing</i>, Wiley Interscience, New York.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=3933659&pid=S0016-7169201400030000500016&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2">Sundaram R., 2003, Analysis and implementation of an efficient edge detection algorithm, <i>Optical Engineering</i>, 42, 642 &#150; 650.    
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=3933661&pid=S0016-7169201400030000500017&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2">Xu J., Ye L., Luo W., 2010, Color edge detection using multiscale quaternion convolution, <i>International Journal of Imaging Systems and Technology</i>, 20, 354 &#45; 358.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=3933663&pid=S0016-7169201400030000500018&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>  	    <!-- ref --><p align="justify"><font face="verdana" size="2">Yoshida H., 2003, Multiscale edge&#45;guided wavelet snake model for delineation of pulmonary nodules in chest radiographs, <i>Journal of Electronic Imaging</i>, 12, 1, 69 &#150; 80.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=3933665&pid=S0016-7169201400030000500019&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></font></p>      ]]></body><back>
<ref-list>
<ref id="B1">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Bigand]]></surname>
<given-names><![CDATA[A.]]></given-names>
</name>
<name>
<surname><![CDATA[Bouwmans]]></surname>
<given-names><![CDATA[T.]]></given-names>
</name>
<name>
<surname><![CDATA[Dubus]]></surname>
<given-names><![CDATA[J.P.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Extraction of line segments from fuzzy images]]></article-title>
<source><![CDATA[Pattern Recognition Letters]]></source>
<year>2001</year>
<volume>22</volume>
<numero>13</numero>
<issue>13</issue>
<page-range>1405 - 1418</page-range></nlm-citation>
</ref>
<ref id="B2">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Bowyer]]></surname>
<given-names><![CDATA[K.]]></given-names>
</name>
<name>
<surname><![CDATA[Kranenburg]]></surname>
<given-names><![CDATA[C.]]></given-names>
</name>
<name>
<surname><![CDATA[Dougherty]]></surname>
<given-names><![CDATA[S.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Edge detector evaluation using empirical ROC curves]]></article-title>
<source><![CDATA[Computer Vision and Image Understanding]]></source>
<year>2001</year>
<volume>84</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>77 - 103</page-range></nlm-citation>
</ref>
<ref id="B3">
<nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Bracewell]]></surname>
<given-names><![CDATA[R.N.]]></given-names>
</name>
</person-group>
<source><![CDATA[Fourier Analysis and Imaging]]></source>
<year>2003</year>
<publisher-loc><![CDATA[New York]]></publisher-loc>
<publisher-name><![CDATA[Kluwer Academic]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B4">
<nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Chen]]></surname>
<given-names><![CDATA[X.]]></given-names>
</name>
<name>
<surname><![CDATA[Chen]]></surname>
<given-names><![CDATA[H.]]></given-names>
</name>
</person-group>
<source><![CDATA[A novel color edge detection algorithm in RGB color space]]></source>
<year>2010</year>
<page-range>793 - 796</page-range><publisher-loc><![CDATA[Beijing]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B5">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Chu]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
<name>
<surname><![CDATA[Miao]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
<name>
<surname><![CDATA[Zhang]]></surname>
<given-names><![CDATA[G.]]></given-names>
</name>
<name>
<surname><![CDATA[Wang]]></surname>
<given-names><![CDATA[L.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Edge and corner detection by color invariants]]></article-title>
<source><![CDATA[Optics and Laser Technology]]></source>
<year>2013</year>
<volume>45</volume>
<page-range>756 - 762</page-range></nlm-citation>
</ref>
<ref id="B6">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Ebling]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
<name>
<surname><![CDATA[Scheuermann]]></surname>
<given-names><![CDATA[G.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Clifford Fourier transform on vector fields]]></article-title>
<source><![CDATA[IEEE Transactions on Visualization and Computer Graphics]]></source>
<year>2005</year>
<volume>11</volume>
<page-range>469 - 479</page-range></nlm-citation>
</ref>
<ref id="B7">
<nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Escalante-Ramírez]]></surname>
<given-names><![CDATA[B.]]></given-names>
</name>
<name>
<surname><![CDATA[Lira]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Perfor-mance-oriented analysis and evaluation of modern adaptive speckle reduction techniques in SAR images]]></article-title>
<source><![CDATA[Proceedings, SPIE's Visual Information Processing V]]></source>
<year>1996</year>
<page-range>18 - 27</page-range><publisher-loc><![CDATA[Orlando, Florida]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B8">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Evans]]></surname>
<given-names><![CDATA[A.N.]]></given-names>
</name>
<name>
<surname><![CDATA[Liu]]></surname>
<given-names><![CDATA[X.U.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[A morphological gradient approach to color edge detection]]></article-title>
<source><![CDATA[IEEE Transactions on Image Processing]]></source>
<year>2006</year>
<volume>15</volume>
<page-range>1454 - 1463</page-range></nlm-citation>
</ref>
<ref id="B9">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Fan]]></surname>
<given-names><![CDATA[J.P.]]></given-names>
</name>
<name>
<surname><![CDATA[Aref]]></surname>
<given-names><![CDATA[W.G.]]></given-names>
</name>
<name>
<surname><![CDATA[Hacid]]></surname>
<given-names><![CDATA[H.S.]]></given-names>
</name>
<name>
<surname><![CDATA[Elmagarmid]]></surname>
<given-names><![CDATA[A.K.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[An improved automatic isotropic color edge detection techniques]]></article-title>
<source><![CDATA[Pattern Recognition Letters]]></source>
<year>2001</year>
<volume>22</volume>
<numero>13</numero>
<issue>13</issue>
<page-range>1419 - 1429</page-range></nlm-citation>
</ref>
<ref id="B10">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Gao]]></surname>
<given-names><![CDATA[C.B.]]></given-names>
</name>
<name>
<surname><![CDATA[Zhou]]></surname>
<given-names><![CDATA[J.L.]]></given-names>
</name>
<name>
<surname><![CDATA[Hu]]></surname>
<given-names><![CDATA[J.R.]]></given-names>
</name>
<name>
<surname><![CDATA[Lang]]></surname>
<given-names><![CDATA[F.N.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Edge detection of colour image based on quaternion fractional differential]]></article-title>
<source><![CDATA[IET Image Processing]]></source>
<year>2011</year>
<volume>5</volume>
<page-range>261 - 272</page-range></nlm-citation>
</ref>
<ref id="B11">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Koschan]]></surname>
<given-names><![CDATA[A.]]></given-names>
</name>
<name>
<surname><![CDATA[Abidi]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Detection and classification of edges in color images]]></article-title>
<source><![CDATA[IEEE Signal Processing]]></source>
<year>2005</year>
<volume>22</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>64 - 73</page-range></nlm-citation>
</ref>
<ref id="B12">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Li]]></surname>
<given-names><![CDATA[G.D.]]></given-names>
</name>
<name>
<surname><![CDATA[Min]]></surname>
<given-names><![CDATA[L.Q.]]></given-names>
</name>
<name>
<surname><![CDATA[Zang]]></surname>
<given-names><![CDATA[H.Y.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Color edge detections based on Cellular Neural Network]]></article-title>
<source><![CDATA[International Journal of Bifurcation and Chaos]]></source>
<year>2008</year>
<volume>18</volume>
<numero>4</numero>
<issue>4</issue>
<page-range>1231 - 1242</page-range></nlm-citation>
</ref>
<ref id="B13">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Lira]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
<name>
<surname><![CDATA[Rodríguez]]></surname>
<given-names><![CDATA[A.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[A Divergence Operator to Quantify Texture From Multi-spectral Satellite Images]]></article-title>
<source><![CDATA[International Journal of Remote Sensing]]></source>
<year>2006</year>
<volume>27</volume>
<page-range>2683 - 2702</page-range></nlm-citation>
</ref>
<ref id="B14">
<nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Lira]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
</person-group>
<source><![CDATA[Tratamiento Digital de Imágenes Multiespectrales]]></source>
<year>2010</year>
<publisher-name><![CDATA[www.lulu.com]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B15">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Nezhadarya]]></surname>
<given-names><![CDATA[E.]]></given-names>
</name>
<name>
<surname><![CDATA[Ward]]></surname>
<given-names><![CDATA[R.K.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[A new scheme for robust gradient vector estimation on color image]]></article-title>
<source><![CDATA[IEEE Transactions on Image Processing]]></source>
<year>2011</year>
<volume>20</volume>
<page-range>2211 - 2220</page-range></nlm-citation>
</ref>
<ref id="B16">
<nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Pratt]]></surname>
<given-names><![CDATA[W.K.]]></given-names>
</name>
</person-group>
<source><![CDATA[Digital Image Processing]]></source>
<year>2001</year>
<publisher-loc><![CDATA[New York ]]></publisher-loc>
<publisher-name><![CDATA[Wiley Interscience]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B17">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Sundaram]]></surname>
<given-names><![CDATA[R.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Analysis and implementation of an efficient edge detection algorithm]]></article-title>
<source><![CDATA[Optical Engineering]]></source>
<year>2003</year>
<volume>42</volume>
<page-range>642 - 650</page-range></nlm-citation>
</ref>
<ref id="B18">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Xu]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
<name>
<surname><![CDATA[Ye]]></surname>
<given-names><![CDATA[L.]]></given-names>
</name>
<name>
<surname><![CDATA[Luo]]></surname>
<given-names><![CDATA[W.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Color edge detection using multiscale quaternion convolution]]></article-title>
<source><![CDATA[International Journal of Imaging Systems and Technology]]></source>
<year>2010</year>
<volume>20</volume>
<page-range>354 - 358</page-range></nlm-citation>
</ref>
<ref id="B19">
<nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Yoshida]]></surname>
<given-names><![CDATA[H.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Multiscale edge-guided wavelet snake model for delineation of pulmonary nodules in chest radiographs]]></article-title>
<source><![CDATA[Journal of Electronic Imaging]]></source>
<year>2003</year>
<volume>12</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>69 - 80</page-range></nlm-citation>
</ref>
</ref-list>
</back>
</article>
