Scielo RSS <![CDATA[Computación y Sistemas]]> http://www.scielo.org.mx/rss.php?pid=1405-554620160004&lang=en vol. 20 num. 4 lang. en <![CDATA[SciELO Logo]]> http://www.scielo.org.mx/img/en/fbpelogp.gif http://www.scielo.org.mx <![CDATA[Editorial]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000400561&lng=en&nrm=iso&tlng=en <![CDATA[Comparison of Local Feature Extraction Paradigms Applied to Visual SLAM]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000400565&lng=en&nrm=iso&tlng=en Abstract The detection and description of locally salient regions is one of the most widely used low-level processes in modern computer vision systems. The general approach relies on the detection of stable and invariant image features that can be uniquely characterized using compact descriptors. Many detection and description algorithms have been proposed, most of them derived using different assumptions or problem models. This work presents a comparison of different approaches towards the feature extraction problem, namely: (1) standard computer vision techniques; (2) automatic synthesis techniques based on genetic programming (GP); and (3) a new local descriptor based on composite correlation filtering, proposed for the first time in this paper. The considered methods are evaluated on a difficult real-world problem, vision-based simultaneous localization and mapping (SLAM). Using three experimental scenarios, results indicate that the GP-based methods and the correlation filtering techniques outperform widely used computer vision algorithms such as the Harris and Shi-Tomasi detectors and the Speeded Up Robust Features descriptor. 
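The Harris and Shi-Tomasi detectors used as baselines in the abstract above can be illustrated with a short sketch. The following is a minimal, illustrative Python implementation (not the paper's code) of both corner responses computed from the image structure tensor; the central-difference gradients, the 3x3 window, the constant k = 0.04, and the plain list-of-lists image representation are all assumptions made for illustration.

```python
def corner_responses(img, k=0.04):
    """Compute Harris and Shi-Tomasi corner response maps for a
    grayscale image given as a list of lists of floats."""
    h, w = len(img), len(img[0])
    # Central-difference image gradients (zero on the border)
    Ix = [[0.0] * w for _ in range(h)]
    Iy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            Ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
            Iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    harris = [[0.0] * w for _ in range(h)]
    shi_tomasi = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Structure tensor [[a, b], [b, c]] summed over a 3x3 window
            a = b = c = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    gx, gy = Ix[y + dy][x + dx], Iy[y + dy][x + dx]
                    a += gx * gx
                    b += gx * gy
                    c += gy * gy
            det, tr = a * c - b * b, a + c
            # Harris: det(M) - k * trace(M)^2
            harris[y][x] = det - k * tr * tr
            # Shi-Tomasi: smaller eigenvalue of the structure tensor
            shi_tomasi[y][x] = (tr - (tr * tr - 4.0 * det) ** 0.5) / 2.0
    return harris, shi_tomasi
```

On a synthetic image with a single bright quadrant, both responses peak at the quadrant's corner and vanish in flat regions, which is the behavior the detectors exploit to find locally salient, trackable regions.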
<![CDATA[Including Users Preferences in the Decision Making for Discrete Many Objective Optimization Problems]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000400589&lng=en&nrm=iso&tlng=en Resumen En muchas aplicaciones, uno se enfrenta con el problema de que muchos objetivos tienen que ser optimizados simultáneamente lo que conlleva a tener un problema de optimización de muchos objetivos (MaOP, por sus siglas en inglés). Una característica importante de los problemas de optimización de muchos objetivos discretos es que su conjunto de soluciones, el llamado conjunto de Pareto, consiste de demasiados puntos para ser calculados de manera eficiente. Por lo tanto, aunque los algoritmos evolutivos especializados son en principio capaces de calcular un conjunto S de soluciones candidato bien esparcidas a lo largo del conjunto de Pareto, no se garantiza que el tomador de decisiones del problema en cuestión encontrará la solución 'ideal' dentro de S para su problema. Se argumenta en este trabajo que tiene sentido llevar a cabo una especie de post procesamiento para una solución dada s ∈ S. Específicamente, proponemos dos diferentes métodos que permiten dirigir la búsqueda desde s a lo largo del conjunto de Pareto en direcciones especificadas por el usuario. Resultados numéricos, en casos del problema de enrutamiento de vehículos con ventanas de tiempo, muestran la efectividad de los novedosos métodos propuestos.<hr/>Abstract In many applications one is faced with the problem that many objectives have to be optimized concurrently, leading to a many-objective optimization problem (MaOP). One important characteristic of discrete MaOPs is that their solution set, the so-called Pareto set, consists of too many elements to be efficiently computed.
Thus, though specialized evolutionary algorithms are in principle capable of computing a set S of well-spread candidate solutions along the Pareto set, it is not guaranteed that the decision maker of the underlying problem will find the 'ideal' solution within S for his or her problem. We argue in this paper that it makes sense to perform a kind of post-processing for a selected solution s ∈ S. More precisely, we propose two different methods that allow steering the search from s along the Pareto set in user-specified directions. Numerical results on instances of the vehicle routing problem with time windows show the effectiveness of the novel methods. <![CDATA[Novelty Search for the Synthesis of Current Followers]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000400609&lng=en&nrm=iso&tlng=en Abstract A topology synthesis method is introduced using genetic algorithms (GA) based on novelty search (NS). NS is an emerging meta-heuristic that guides the search based on the novelty of each solution instead of the objective function. The synthesized topologies are current follower (CF) circuits; these topologies are new and designed using 0.35 μm CMOS integrated circuit technology. Topologies are coded using a chromosome divided into four genes: a small-signal gene (SS), a MOSFET synthesis gene (SMos), a polarization gene (Bias), and a current source synthesis gene (CM). The proposed synthesis method is coded in MATLAB and uses SPICE to evaluate the fitness of the CFs. The GA based on NS (GA-NS) is compared with a standard objective-based GA, showing unique search dynamics and improved performance. Experimental results show twelve CFs synthesized by the GA-NS algorithm, and their main attributes are summarized and discussed. This work is the first to show that NS is a promising alternative in the field of automatic circuit synthesis.
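Novelty search, as described in the abstract above, replaces the objective function with a novelty score, typically the mean distance of a candidate's behavior descriptor to its k nearest neighbors among the current population and an archive of past behaviors. A minimal sketch follows; the numeric behavior descriptors, the Euclidean distance, and the choice of k are illustrative assumptions, not the paper's GA-NS implementation, which operates on circuit chromosomes evaluated in SPICE.

```python
def novelty_score(behavior, reference_behaviors, k=3):
    """Mean Euclidean distance from `behavior` to its k nearest
    neighbors among `reference_behaviors` (the current population
    plus an archive of previously novel behaviors)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    # Sort all distances and keep the k smallest
    nearest = sorted(dist(behavior, other) for other in reference_behaviors)[:k]
    return sum(nearest) / len(nearest)
```

Selection then favors high novelty scores: candidates whose behaviors lie far from anything seen before are kept, which is what produces the distinctive search dynamics compared with an objective-based GA.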
<![CDATA[Performance Comparison of Evolutionary Algorithms for University Course Timetabling Problem]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000400623&lng=en&nrm=iso&tlng=en Abstract In the literature, the University Course Timetabling Problem (UCTP) is a well-known combinatorial problem. The main reasons to study this problem are its intrinsic importance within universities, the exponential number of solutions, and the distinct types of approaches used to solve it. Due to the exponential number of solutions (combinations), this problem is categorized as NP-hard. Generally, Evolutionary Algorithms (EA) are efficient tools to solve this problem. Differential Evolution (DE) has been widely used to solve complex optimization problems in the continuous domain, while Genetic Algorithms (GA) have been adopted to solve different types of problems and even serve as a point of comparison for algorithm performance. This paper examines and compares the performance of two EA-based approaches to the UCTP: the DE and the GA approaches. The experiments use a set of 3 real-life UCTP instances; each instance has different characteristics and is based on Mexican universities. In the experiments, we used the optimal input parameters for the solvers, and we performed a qualitative-quantitative comparison between the final solutions. The results showed the best performance for the solution based on the DE algorithm. This work can be easily extended to use other algorithms and UCTP instances. <![CDATA[Limiting the Velocity in the Particle Swarm Optimization Algorithm]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000400635&lng=en&nrm=iso&tlng=en Abstract Velocity in the Particle Swarm Optimization (PSO) algorithm is one of its major features, as it is the mechanism used to move (evolve) the position of a particle to search for optimal solutions.
The velocity is commonly regulated by multiplying the particle's velocity by a factor. This regulation aims to achieve a balance between exploration and exploitation. The most common methods to regulate the velocity are the inertia weight and the constriction factor. Here, we present a different method to regulate the velocity by changing the maximum limit of the velocity at each iteration, thus eliminating the use of a factor. We go further and present a simpler version of the PSO algorithm that achieves competitive and, in some cases, even better results than the original PSO algorithm. <![CDATA[Semantic Textual Similarity Methods, Tools, and Applications: A Survey]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000400647&lng=en&nrm=iso&tlng=en Abstract Measuring Semantic Textual Similarity (STS) between words/terms, sentences, paragraphs, and documents plays an important role in computer science and computational linguistics. It also has many applications in several fields such as Biomedical Informatics and Geoinformation. In this paper, we present a survey of different methods for textual similarity and also report on the availability of software and tools useful for STS. In natural language processing (NLP), STS is an important component of many tasks such as document summarization, word sense disambiguation, short answer grading, and information retrieval and extraction. We divide the measures for semantic similarity into three broad categories: (i) topological/knowledge-based, (ii) statistical/corpus-based, and (iii) string-based. More emphasis is given to the methods related to the WordNet taxonomy, because topological methods play an important role in understanding the intended meaning of an ambiguous word, which is very difficult to process computationally. We also propose a new method for measuring semantic similarity between sentences.
The proposed method uses the advantages of taxonomy methods and merges this information into a language model. It considers the WordNet synsets for lexical relationships between nodes/words, and a unigram language model is implemented over a large corpus to assign the information content value between two nodes of different classes. <![CDATA[POS Tagging without a Tagger: Using Aligned Corpora for Transferring Knowledge to Under-Resourced Languages]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000400667&lng=en&nrm=iso&tlng=en Abstract Almost all languages lack sufficient resources and tools for developing Human Language Technologies (HLT). These technologies are mostly developed for languages for which large resources and tools are available. In this paper, we deal with under-resourced languages, which can benefit from the available resources and tools to develop their own HLT. We consider as an example the POS tagging task, which is among the most fundamental Natural Language Processing tasks. The task is important because it assigns to words tags that highlight their morphological features by considering the corresponding contexts. The solution that we propose in this research work is based on the use of an aligned parallel corpus as a bridge between a rich-resourced language and an under-resourced language. This kind of corpus is usually available. The rich-resourced language side of this corpus is annotated first. These POS annotations are then exploited to predict the annotation on the under-resourced language side by using alignment training. After this training step, we obtain a matching table between the two languages, which is exploited to annotate an input text. The proposed approach is evaluated on a pair of languages: English as a rich-resourced language and Arabic as an under-resourced language. We used the IWSLT10 training corpus and English TreeTagger 15.
The approach was evaluated on a test corpus extracted from IWSLT08 and obtained an F-score of 89%. It can be extrapolated to other NLP tasks. <![CDATA[Mention Detection for Improving Coreference Resolution in Russian Texts: A Machine Learning Approach]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000400681&lng=en&nrm=iso&tlng=en Abstract Coreference resolution is a well-known NLP task that has proven helpful for high-level NLP applications: machine translation, summarization, and others. Mention detection is the sub-task of detecting the discourse status of each noun phrase, classifying it as a discourse-new, singleton (mentioned only once), or discourse-old occurrence. It has been shown that this task, applied to a coreference resolution system, may increase its overall performance. We therefore decided to adapt current approaches for English to Russian. We present quality results of experiments regarding classifiers for mention detection and their application to the coreference resolution task in Russian. <![CDATA[Exudates and Blood Vessel Segmentation in Eye Fundus Images Using the Fourier and Cosine Discrete Transforms]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000400697&lng=en&nrm=iso&tlng=en Abstract This paper presents a new method using discrete transforms to segment blood vessels and exudates in eye fundus color images. To obtain the desired segmentation, an illumination correction is first performed based on a homomorphic filter because of the uneven illuminance in the eye fundus image. To distinguish foreground objects from the background, we propose a super-Gaussian bandpass filter in the discrete cosine transform (DCT) domain. These filters are applied on the green channel, which contains the information needed to segment pathologies.
To segment exudates in the filtered DCT image, a gamma correction is applied to enhance foreground objects; in the resulting image, Otsu's global threshold method is applied, after which a masking operation over the effective area of the eye fundus image is performed to obtain the final segmentation of exudates. In the case of blood vessels, the negative of the DCT-filtered image is first calculated, then a median filter is applied to reduce noise and artifacts, followed by a gamma correction. Again, Otsu's global threshold method is used for binarization; next, a morphological closing operation is employed, and a masking operation gives the corresponding final segmentation. Illustrative examples taken from a free clinical database are included to demonstrate the capability of the proposed methods. <![CDATA[A New Approach For Hand Gestures Recognition Based on Depth Map Captured by RGB-D Camera]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000400709&lng=en&nrm=iso&tlng=en Abstract This paper introduces a new approach for hand gesture recognition based on the depth map captured by an RGB-D Kinect camera. Although this camera provides two types of information, "depth map" and "RGB image", only the depth data is used to analyze and recognize the hand gestures. Given the complexity of this task, a new method based on edge detection is proposed to eliminate the noise and segment the hand. Moreover, new descriptors are introduced to model the hand gesture. These features are invariant to scale, rotation, and translation. Our approach is applied to the French sign language alphabet to show its effectiveness and evaluate the robustness of the proposed descriptors. The experimental results clearly show that the proposed system is very satisfactory, as it recognizes the French sign language alphabet with an accuracy of more than 93%. Our approach is also applied to a public dataset in order to be compared with existing studies.
The results prove that our system can outperform previous methods on the same dataset. <![CDATA[A 3μW Low-Power CMOS Class-AB Bilateral Current Mirror for Low-Voltage Applications]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000400723&lng=en&nrm=iso&tlng=en Abstract This paper presents a compact low-power bidirectional current mirror suitable for low-voltage applications. The key element is the use of a CMOS complementary input stage working in the subthreshold regime, which allows setting a reduced bias current through the mirror. The circuit was simulated using LTspice and presents class-AB operation with a THD of 1% at 1 MHz. The power consumption is close to 3 μW, as shown by simulations and experimental data from a prototype fabricated using 0.5 µm CMOS technology. <![CDATA[Diversity Measures for Building Multiple Classifier Systems Using Genetic Algorithms]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000400729&lng=en&nrm=iso&tlng=en Resumen En este trabajo se exponen las diferentes medidas de diversidad que existen en la literatura para decidir si un conjunto de clasificadores es diverso, aspecto que tiene gran importancia en la creación de los sistemas multi-clasificadores. Se presenta la modelación de la construcción de sistemas multi-clasificadores usando la meta-heurística de Algoritmos Genéticos para garantizar la mejor exactitud posible y la mayor diversidad entre los clasificadores. Se enuncian además varias formas de combinación para las medidas de diversidad. Por último se discuten dos experimentos en los que se analiza el comportamiento individual de las medidas de diversidad y los resultados de sus combinaciones.<hr/>Abstract In this paper we present the different diversity measures that exist in the literature to decide whether a set of classifiers is diverse, an aspect that is very important in the creation of multi-classifier systems.
We present a model for building multi-classifier systems using the Genetic Algorithm meta-heuristic to ensure the best possible accuracy and the greatest diversity among the classifiers. Various forms of combination for the diversity measures are also described. Finally, we discuss two experiments in which the individual behavior of the diversity measures and the results of their combinations are analyzed. <![CDATA[A Mathematical Model for Optimizing Resources of Scientific Projects]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000400749&lng=en&nrm=iso&tlng=en Resumen México, como economía emergente, requiere maximizar los resultados que se obtienen al invertir en proyectos de desarrollo. Una de las entidades públicas promotoras de estas inversiones es el Consejo Nacional de Ciencia y Tecnología (CONACYT). Los proyectos no siempre concluyen de manera exitosa, en especial por una planeación inadecuada de sus recursos, ocasionando retrasos, esfuerzo adicional, o fracasos. La presente investigación desarrolla y prueba un modelo matemático para determinar la factibilidad económica de los proyectos de innovación del CONACYT, comprobando la hipótesis de que los proyectos son una variante de la intratabilidad matemática, y por lo tanto, se puede obtener una aproximación bastante certera de los costos reales de los proyectos usando algoritmos y la teoría de la NP-Completo.<hr/>Abstract Mexico, as an emerging economy, needs to maximize the results obtained by investing in development projects. One of the public promoters of these investments is the National Council of Science and Technology (CONACYT). The projects are well intentioned but not always successfully concluded, especially due to inadequate planning of their resources, causing delays, rework, and, in the worst case, outright failure, which in turn adversely affects the assignment of financial sponsorship to other projects.
This research develops and tests a mathematical model to determine the economic feasibility of CONACYT innovation projects, testing the hypothesis that such projects are a variant of a mathematically intractable problem and that, therefore, a fairly accurate approximation of the actual project costs can be obtained using algorithms and the theory of NP-completeness. <![CDATA[MUREM: A Multiplicative Regression Method for Software Development Effort Estimation]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000400763&lng=en&nrm=iso&tlng=en Resumen En este artículo se presenta un método multiplicativo de regresión para estimar el esfuerzo de desarrollo de software. Este método, al que denominamos MUREM, es el resultado de, por un lado, establecer un conjunto de condiciones iniciales para enmarcar el proceso de estimación del esfuerzo de desarrollo de software y de, por otro lado, estipular las propiedades que, racionalmente, debe satisfacer la relación que se establece entre el esfuerzo de desarrollo y el tamaño del software. Para evaluar el desempeño de MUREM éste se comparó con tres modelos de regresión los cuales se consideran como métodos importantes para estimar el esfuerzo de desarrollo de software. En esta comparación se aplicó una batería de hipótesis y pruebas estadísticas a doce muestras extraídas de bases de datos públicas bien conocidas. Estas bases de datos sirven como punto de referencia para la comparación de métodos para estimar el esfuerzo de desarrollo de software. En la experimentación se encontró que MUREM genera estimaciones puntuales del esfuerzo de desarrollo más precisas que aquellas obtenidas por los otros métodos. MUREM corrige la heterocedasticidad y aumenta la proporción de muestras cuyos residuales presentan normalidad. Con esto MUREM genera intervalos de confianza y de predicción más adecuados que aquellos obtenidos por los otros métodos.
Un resultado importante es que los residuales obtenidos por el modelo de regresión de MUREM satisfacen la prueba de ruido blanco gaussiano de media cero, con lo que se prueba que el error de estimación de dicho modelo es aleatorio.<hr/>Abstract In this paper a multiplicative regression method to estimate software development effort is presented. This method, which we call MUREM, is the result of, on the one hand, establishing a set of initial conditions to frame the process of estimating software development effort and, on the other hand, stipulating the properties that should rationally be satisfied by the development effort as a function of software size. To evaluate the performance of MUREM, it was compared with three regression models which are considered important methods for estimating software development effort. In this comparison, a battery of standard statistical hypothesis tests was applied to twelve samples taken from well-known public databases. These databases serve as benchmarks for comparing methods to estimate software development effort. In the experimentation it was found that MUREM generates more accurate point estimates of the development effort than those achieved by the other methods. MUREM corrects the heteroscedasticity and increases the proportion of samples whose residuals show normality. MUREM thus generates more appropriate confidence and prediction intervals than those obtained by the other methods. An important result is that the residuals obtained by the regression model of MUREM satisfy the test for zero-mean additive white Gaussian noise, which proves that the estimation error of this model is random.
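The abstract above does not give MUREM's exact functional form, but a multiplicative effort model of the classical kind, effort = a · size^b, can be fit by ordinary least squares after a log transform, the device that typically corrects the heteroscedasticity the abstract mentions. The following sketch is illustrative only; the power-law form and the fitting details are assumptions, not the paper's method.

```python
import math

def fit_multiplicative(sizes, efforts):
    """Fit effort = a * size**b by ordinary least squares on the
    log-transformed data: log(effort) = log(a) + b * log(size).
    Modeling the error multiplicatively (additively in log space)
    tends to stabilize the residual variance."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(e) for e in efforts]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # OLS slope and intercept in log space
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = math.exp(mean_y - b * mean_x)
    return a, b
```

On noise-free synthetic data generated from a power law, the fit recovers the generating parameters exactly; on real benchmark samples, the residuals in log space are the ones tested for normality and white-noise behavior.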
<![CDATA[Delaunay Triangulation Validation Using Conformal Geometric Algebra]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000400789&lng=en&nrm=iso&tlng=en Resumen Cuando la triangulación Delaunay se realiza en forma incremental, la etapa más importante, es la reconstrucción de los triángulos cuando se inserta aleatoriamente un nuevo punto en la red. Para ello existen diferentes técnicas, de la cual utilizaremos la validación del "círculo vacío" descrita por Boris Deloné, nuestro objetivo es utilizar el Álgebra Geométrica Conforme (AGC) para realizar dicha validación. Cambiaremos de ambiente matemático para demostrar las ventajas de las entidades geométricas que nos propone el AGC y emplearlas en un módulo que valide dicha triangulación.<hr/>Abstract When Delaunay triangulation is performed in an incremental fashion, different steps are involved in the process. Within those steps "reconstruction" is the most important stage when a new point is randomly inserted. Although there are several techniques to perform this reconstruction, one of the most relevant is a validation technique called "empty circle", described by Boris Deloné. In this paper, we focus on the use of the Conformal Geometric Algebra (CGA) to perform such validation. In addition, the proposal includes a mathematical environment change to show the advantages of using CGA's geometric entities and use them inside a module for validating the triangulation. <![CDATA[Aircraft Class Recognition based on Take-off Noise Patterns]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000400799&lng=en&nrm=iso&tlng=en Resumen En este trabajo se examina el reconocimiento de la clase de aeronaves a partir de patrones del ruido en el despegue. Se analiza la segmentación de la señal en tiempo y el uso de una red neuronal MLP por cada segmento. 
Asimismo, se examinan varios algoritmos de decisión por comité para la agregación de las múltiples salidas de los clasificadores paralelos, así como la extracción y selección de características con base en el análisis del espectro del ruido de aeronaves. Por otro lado, se explora un método para estimar la trayectoria georreferenciada durante el despegue únicamente a partir de la señal. La metodología y los resultados están sustentados en la literatura actual.<hr/>Abstract In this work, aircraft class recognition based on take-off noise patterns is examined. Signal segmentation in time is analyzed, as well as the use of an MLP neural network as the classifier for each segment. Several committee-decision algorithms for aggregating the multiple outputs of the parallel classifiers are also examined, along with feature extraction and selection based on spectrum analysis of the aircraft noise. In addition, a method for georeferenced estimation of the take-off flight path based only on the noise signal is explored. The methodology and results are supported by the current literature. <![CDATA[Challenges of Cyber Law in Mexico]]> http://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S1405-55462016000400827&lng=en&nrm=iso&tlng=en Resumen En los últimos años el mundo ha vivido grandes cambios en las telecomunicaciones, la competencia entre empresas, la transparencia de la información, la privacidad y protección de los datos personales, la seguridad de la información, los nuevos crímenes relacionados con la informática, el comercio electrónico, el gobierno electrónico, la propiedad intelectual y los derechos del autor, nombres de dominios, firmas electrónicas, certificación de documentos, protección del consumidor, acceso a la información, servicios en línea. La evolución en este sector ha sido mucho mayor que en otras áreas recientemente.
El marco legal y regulaciones relacionadas con las tecnologías de la información y las comunicaciones, tratan de adecuarse a los nuevos cambios con la velocidad que esto implica. Esta investigación fomenta y ayuda a comprender el marco regulatorio con todo lo relacionado con la informática dentro de México y desde afuera, dando una ventaja competitiva para la comunidad académica y de negocios.<hr/>Abstract In recent years the world has experienced major changes in telecommunications, competition between enterprises, transparency of information, privacy and protection of personal data, information security, new computer-related crimes, e-commerce, e-government, intellectual property and copyright, domain names, electronic signatures, document certification, consumer protection, access to information, and online services. The evolution in this sector has recently been much faster than in other areas. The legal framework and regulations related to information and communication technologies try to adapt to these changes at the speed they demand. This research encourages and helps to understand the regulatory framework for everything related to the informatics sector within Mexico and from abroad, giving a competitive advantage to the academic and business communities.