SciELO RSS <![CDATA[Computación y Sistemas]]> vol. 19 num. 2 lang. en <![CDATA[<b>Editorial</b>]]> <![CDATA[<b>A Super-Resolution Image Reconstruction using Natural Neighbor Interpolation</b>]]> A super-resolution image reconstruction algorithm using natural neighbor interpolation is proposed and its performance is evaluated. The algorithm is divided into two stages: image registration and reconstruction of a high-resolution color image. In the first stage, since the shifts between images are usually unknown, the algorithm approximates these displacements by solving the system of linear equations proposed by Keren, Peleg, and Brada; the pixels of all low-resolution images are then mapped onto a high-resolution grid, with their new coordinates computed from the motion vectors. In the second stage, the pixel values mapped onto the high-resolution grid are interpolated using natural neighbor interpolation, a weighted-average interpolation method for scattered data based on the areas of the Voronoi polygons of the neighboring pixels. Finally, the proposed natural neighbor super-resolution algorithm is compared with several popular super-resolution algorithms from the literature. <![CDATA[<b>Hierarchical Contour Shape Analysis</b>]]> This paper introduces a novel shape representation that performs shape analysis in a hierarchical fashion using Gaussian and Laplacian pyramids. A background on hierarchical shape analysis is given along with a detailed explanation of the hierarchical method, and results are shown on natural contours. A comparison is performed between the new method and our previous approach based on Point Distribution Models, using different shape sets. The paper concludes with a discussion and proposes ideas on how the new approach may be extended.
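To make the pyramid machinery underlying the hierarchical contour method concrete, the following sketch (our illustration, not the authors' code) builds Gaussian and Laplacian pyramid levels for a 1-D closed-contour signal with NumPy; the kernel and toy contour are assumptions:

```python
import numpy as np

def gaussian_smooth(signal, kernel=(0.25, 0.5, 0.25)):
    """Circularly convolve a closed-contour signal with a small Gaussian kernel."""
    k = np.asarray(kernel)
    pad = len(k) // 2
    padded = np.concatenate([signal[-pad:], signal, signal[:pad]])
    return np.convolve(padded, k, mode="valid")

def build_pyramids(signal, levels=3):
    """Return Gaussian and Laplacian pyramids of a 1-D contour signal.

    Each Gaussian level is a smoothed, downsampled copy of the previous one;
    each Laplacian level stores the detail removed by that smoothing step.
    """
    gaussian, laplacian = [signal], []
    for _ in range(levels):
        smoothed = gaussian_smooth(gaussian[-1])
        laplacian.append(gaussian[-1] - smoothed)   # detail at this scale
        gaussian.append(smoothed[::2])              # coarser level
    return gaussian, laplacian

# Toy closed contour: x-coordinates of a slightly wavy circle, 64 samples.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
contour_x = np.cos(t) + 0.05 * np.sin(8 * t)
g, l = build_pyramids(contour_x)
print([len(level) for level in g])  # [64, 32, 16, 8]
```

The coarse Gaussian levels capture the gross shape of the contour, while the Laplacian levels isolate detail at each scale, which is the hierarchy the shape analysis operates on.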
<![CDATA[<b>Morphological Filtering Algorithm for Restoring Images Contaminated by Impulse Noise</b>]]> This paper presents a methodology to restore grayscale images whose pixels are corrupted by random impulse noise. Noise is detected using a criterion based on the white top-hat by reconstruction. Pixels detected as corrupted are restored with an iterative morphological algorithm built from extensive and anti-extensive morphological transformations. The proposal is compared with the rank-ordered mean (ROM) filter and other morphological transformations reported in the current literature. <![CDATA[<b>A Photometric Sampling Strategy for Reflectance Characterization and Transference</b>]]> Rendering 3D models with real-world reflectance properties is an open research problem with significant applications in computer graphics and image understanding. In this paper, our interest is in the characterization and transference of appearance from a source object onto a target 3D shape. To this end, a three-step strategy is proposed. In the first step, reflectance is sampled by rotating a light source in concentric circles around the source object. Singular value decomposition is then used to describe, in a pixel-wise manner, appearance features such as color, texture, and specular regions. The second step introduces a Markov random field transference method based on surface-normal correspondence between the source object and a synthetic sphere; the aim of this step is to generate a sphere whose appearance emulates that of the source material. In the third step, the final transference of properties is performed from the surface normals of the generated sphere to those of the target 3D model. Experimental evaluation validates the suitability of the proposed strategy for transferring the appearance of a variety of materials between diverse shapes.
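The SVD step of the photometric sampling strategy can be illustrated as follows. This is a minimal sketch under assumed synthetic data (a rank-one Lambertian-like response plus noise), not the paper's pipeline: the observations under each light position form the rows of a matrix, and a truncated SVD compactly summarizes the per-pixel appearance behaviour:

```python
import numpy as np

# Hypothetical data: intensities of a 16x16-pixel patch observed under
# 20 light positions along a circle (rows = observations, cols = pixels).
rng = np.random.default_rng(0)
angles = np.linspace(0, 2 * np.pi, 20, endpoint=False)
base = np.maximum(0.0, np.cos(angles))            # Lambertian-like response
samples = np.outer(base, rng.uniform(0.2, 1.0, 16 * 16))
samples += 0.01 * rng.standard_normal(samples.shape)

# SVD factors the observation stack; the leading singular vectors capture
# the dominant appearance behaviour, later ones mostly noise.
U, s, Vt = np.linalg.svd(samples, full_matrices=False)
k = 2
approx = (U[:, :k] * s[:k]) @ Vt[:k]              # rank-k reconstruction

err = np.linalg.norm(samples - approx) / np.linalg.norm(samples)
print(f"rank-{k} relative error: {err:.3f}")
```

Because the synthetic data is nearly rank-one, a rank-2 truncation reconstructs it almost exactly, which is the compression effect the characterization step relies on.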
<![CDATA[<b>Camera as Position Sensor for a Ball and Beam Control System</b>]]> This paper describes a novel strategy for using a digital camera as the position sensor of a ball and beam control system. A linear control law positions the ball at the desired location on the beam. The experiments show how this method controls the position of the ball at any location on the beam using a camera with a sampling rate of 30 frames per second (fps), and these results are compared with those obtained using an analog resistive sensor whose feedback signal is sampled at 1000 samples per second. The mechanical characteristics of the ball and beam system are exploited to simplify the calculation of the ball position in our vision system and to ease camera calibration with respect to the ball and beam system. Our proposal uses the circularity feature of blobs in a binary image, instead of the classic correlation or Hough transform techniques, for ball tracking. The main control system is implemented in Simulink with Real Time Workshop (RTW), and the vision processing uses the OpenCV libraries. <![CDATA[<b>Recursive Median Filter for Background Estimation and Foreground Segmentation in Surveillance Videos</b>]]>
Video cameras are widely used in surveillance systems and offer the possibility of processing the captured images to automatically detect events of interest in the scene. This paper proposes a method for background estimation and foreground segmentation in surveillance videos using a recursive median filter with a temporal moving window over the frames to be analyzed, which provides greater robustness against the noise caused by illumination changes and camera vibration while limiting the increase in computational cost. <![CDATA[<b>Characterization of Difficult Bin Packing Problem Instances Oriented to Improve Metaheuristic Algorithms</b>]]>
This work presents a methodology for characterizing difficult instances of the Bin Packing Problem using Data Mining. The goal is for the characteristics of the instances to suggest new strategies for finding optimal solutions, either by improving current solution algorithms or by developing new ones. According to the specialized literature, instance characterization has generally been used to predict which algorithm best solves an instance, or to improve an algorithm by associating instance characteristics with its performance. In contrast to previous work, this work proposes that the development of efficient solution algorithms can be guided by a prior identification of the characteristics with the greatest impact on the difficulty of solving an instance. To validate our approach, we used a set of 1,615 instances, 6 well-known Bin Packing algorithms, and 27 initial metrics.
After applying Data Mining clustering techniques to characterize the instances, 5 metrics were found relevant; these metrics helped to characterize 4 groups containing the instances that could not be solved by any of the algorithms used in this work. Based on the knowledge gained from the instance characterization, a new instance reduction method that helps to reduce the search space of a metaheuristic algorithm was proposed. Experimental results show that applying the reduction method makes it possible to find more optimal solutions than those reported for the best metaheuristics in the specialized literature. <![CDATA[<b>Improving the Multilayer Perceptron Learning by Using a Method to Calculate the Initial Weights with the Similarity Quality Measure Based on Fuzzy Sets and Particle Swarms</b>]]> The most widely used neural network model is the Multilayer Perceptron (MLP), in which the connection weights are normally trained with a Back Propagation learning algorithm. Good initial weight values yield fast convergence and better generalization, even with simple gradient-based error minimization techniques. This work presents a method to calculate the initial weights for training the Multilayer Perceptron model. The method, named PSO+RST+FUZZY, is based on the similarity quality measure proposed within the framework of the extended Rough Set Theory, which employs fuzzy sets to characterize the domain of similarity thresholds. The sensitivity of Back Propagation to initial weights computed with PSO+RST+FUZZY was studied experimentally, showing better performance than other methods used to calculate feature weights. <![CDATA[<b>Evolutionary Multi-objective Optimization for Scheduling Professor Evaluations in Cuban Higher Education</b>]]>
In Cuba, a class inspection is the process in which the quality of the teaching of a university professor is evaluated. Scheduling these evaluations along the academic semester is commonly the task of the head of the department, who must satisfy several criteria at once. Examples of these criteria are the presence in the evaluation tribunal of at least one member from the professor's own academic unit and/or with an academic rank equal to or higher than that of the evaluated professor, and the availability and utilization level of the tribunal members, among others. Scheduling professor evaluations becomes a complex task when a large number of professors and evaluations are considered at the same time. Aiming to solve this problem, in the present work we propose a computational approach based on 1) modeling this task as a multi-objective optimization problem and 2) solving it with a variant of the well-known evolutionary algorithm NSGA-II. The results of the computational experiments show that our proposal yields useful, high-quality solutions.
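The NSGA-II family of algorithms rests on non-dominated sorting of the population into Pareto fronts. A minimal sketch of that sorting step (our illustration with invented bi-objective values, not the scheduling model of the paper), with both objectives minimized:

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(population):
    """Split objective vectors into Pareto fronts, best front first."""
    fronts, remaining = [], list(range(len(population)))
    while remaining:
        # A solution is in the current front if nothing left dominates it.
        front = [i for i in remaining
                 if not any(dominates(population[j], population[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Toy bi-objective values, e.g. (tribunal workload, scheduling conflicts).
pop = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
print(non_dominated_sort(pop))  # [[0, 1, 2], [3], [4]]
```

NSGA-II then selects parents front by front, using a crowding distance to break ties within a front; this quadratic sorting sketch is adequate for small populations.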
<![CDATA[<b>Admission Control and Channel Allocation for Dynamic Spectrum Access using Multi-objective Optimization</b>]]>
The growing development of applications, usage time, new technologies, and data rates is increasing the demand for, and the value of, the finite spectral resources. This has created the mistaken impression of spectrum scarcity; however, several studies have concluded that the shortage is really a spectrum access problem, since certain bands are saturated while others are sparsely used. In this context, Dynamic Spectrum Access (DSA) is proposed as a solution that recycles the spectrum by sharing frequency bands among the wireless technologies and services that require them. The main challenge of DSA is to guarantee primary users (PUs, users with high priority to access a channel) protection against the interference that secondary users (SUs, users with low priority to access a channel) could generate. One DSA strategy is the simultaneous exploitation of a channel by the PU and one or more SUs, as long as the SUs do not exceed an interference power threshold imposed by the PU. This constrains the number of SUs admitted to the network, so that a peaceful coexistence with the PUs in the coverage area is achieved. This work proposes a multi-objective (MO) admission control and channel allocation algorithm to establish the tradeoff between the maximum data rate and the maximum number of SUs that can concurrently share a primary channel under Quality of Service (QoS) constraints.
The resulting optimization problem with its two conflicting objectives is solved with a hybrid strategy combining Particle Swarm Optimization (PSO) and the Weighted Sum multi-objective method. <![CDATA[<b>Segmentation Strategies to Face Morphology Challenges in Brazilian-Portuguese/English Statistical Machine Translation and Its Integration in Cross-Language Information Retrieval</b>]]> The use of morphology is particularly interesting in statistical machine translation as a way to reduce data sparseness and compensate for the lack of training corpora. In this work, we propose several approaches to introduce morphological knowledge into a standard phrase-based machine translation system. We provide word segmentation using two different tools (COGROO and MORFESSOR), which reduce the vocabulary size and data sparseness. To these segmentations we then add the morphological information of a POS language model. We combine all these approaches using a Minimum Bayes Risk strategy. Experiments show significant improvements of the enhanced system over the baseline on the Brazilian-Portuguese/English language pair. Finally, we report a case study of the impact of enhancing the statistical machine translation system with morphology in a cross-language application such as ONAIR, which allows users to look for information in video fragments through natural language queries. <![CDATA[<b>Design of a General Purpose 8-bit RISC Processor for Computer Architecture Learning</b>]]> Computers are becoming indispensable in most everyday consumer products, ranging from communications and domestic electronics to industrial process monitoring and control. High-performance computer design is not only subject to the technology used for its implementation; it is also a matter of efficient training. The skills that must prevail in a good computer designer come from the type of courses taken and the tools employed during them.
This work presents the design of an 8-bit RISC soft-core processor dedicated to a complete understanding of computer architecture. We consider this processor an effective hands-on training solution for comprehending a computer from its lowest level up to testing. <![CDATA[<b>Identification of Harmonic Sources in Electrical Power Systems Using State Estimation with Measurement Error</b>]]> In this article we show that the Total Harmonic Distortion (THD) of the current is a reliable index for locating harmonic sources in a power network. It is shown that harmonic estimators fail to identify harmonic sources when the measurements contain errors. A 14-node system with two harmonic sources, solved by two methods, is presented. <![CDATA[<b>PID Control Law for Trajectory Tracking Error Using Time-Delay Adaptive Neural Networks for Chaos Synchronization</b>]]> This paper presents an application of time-delay adaptive neural networks, based on a dynamic neural network, to trajectory tracking of unknown nonlinear plants. Our approach rests on two main methodologies: the first employs time-delay neural networks and Lyapunov-Krasovskii functionals, and the second is Proportional-Integral-Derivative (PID) control for nonlinear systems. The proposed controller structure is composed of a neural identifier and a control law defined using the PID approach. The new control scheme is applied via simulations to chaos synchronization. Experimental results show the usefulness of the proposed approach for chaos production.
To verify the analytical results, an example of a dynamical network is simulated, and a theorem ensuring the tracking of the nonlinear system is proposed. <![CDATA[<b>A Counting Logic for Trees</b>]]> It has recently been shown that the fully enriched µ-calculus, an expressive modal logic, is undecidable. In the current work, we prove that this result no longer holds when finite tree models are considered. This is achieved by introducing an extension of the fully enriched µ-calculus for trees with numerical constraints. In contrast to graded modalities, which restrict the occurrence of immediate successor nodes only, the logic introduced in this paper can concisely express numerical constraints on any tree region, for instance on the ancestor or descendant nodes. To show that the logic is in EXPTIME, we also provide a corresponding satisfiability algorithm. By succinct reductions to the logic, we identify several decidable extensions of regular tree languages with counting and interleaving operators. It is also shown that XPath extensions with counting constructs on regular path queries can be concisely captured by the logic. Finally, we show that several XML reasoning problems involving XPath queries with schemas, such as emptiness and containment, can be optimally solved with the satisfiability algorithm.
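The distinction between graded modalities and the richer counting constraints can be made concrete with a small example (our illustration, not part of the paper): a graded modality counts only immediate children, while the tree logic above can count over whole regions such as all descendants.

```python
def count_children(node, label):
    """Graded-modality style: count only immediate successors with a label."""
    return sum(1 for child in node.get("children", []) if child["label"] == label)

def count_descendants(node, label):
    """Counting-logic style: count matches over the whole descendant region."""
    total = 0
    for child in node.get("children", []):
        total += (child["label"] == label) + count_descendants(child, label)
    return total

# Tiny XML-like tree: article -> (section -> (p, p), section -> p)
tree = {"label": "article", "children": [
    {"label": "section", "children": [{"label": "p"}, {"label": "p"}]},
    {"label": "section", "children": [{"label": "p"}]},
]}
print(count_children(tree, "p"), count_descendants(tree, "p"))  # 0 3
```

A constraint such as "at least three `p` descendants" is exactly the kind of region-wide numerical condition the logic expresses concisely, and it corresponds to XPath counting constructs like `count(.//p) >= 3`.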